Humphreys & Ferron-Jones | Trusted security by design, Compute Engineered for your Hybrid World
(upbeat music) >> Welcome back, everyone, to our Cube special programming on "Securing Compute, Engineered for the Hybrid World." We've got Cole Humphreys, who's with HPE, global server security product manager, and Mike Ferron-Jones with Intel. He's the product manager for data security technology. Gentlemen, thank you for coming on this special presentation. >> All right, thanks for having us. >> So, securing compute. I mean, compute, everyone wants more compute. You can't have enough compute as far as we're concerned. You know, more bits are flying around the internet. Hardware's mattering more than ever. The performance market's hot right now for next-gen solutions. When you're talking about security, it's at the center of every single conversation. And Gen11 for HPE has been a big-time focus here. So let's get into the story. What's the market for Gen11, Cole, on the security piece? What's going on? How do you see this impacting the marketplace? >> Hey, you know, thanks. I think this is, again, just a moment in time where we're all working towards solving a problem that doesn't stop. You know, because we are looking at data protection. You know, in compute, you're looking out there, there's international impacts, there's federal impacts, there's state-level impacts, and even regulation to protect the data. So, you know, how do we do this stuff in an environment that keeps changing? >> And on the Intel side, you guys are a Tier 1 combination partner, Better Together. HPE has a deep bench on security, and Intel, we know what your history is. You guys have a real root of trust with your code, down to the silicon level, and continuing to be, and you're on the 4th Gen Xeon here. Mike, take us through Intel's relationship with HPE. Super important. You guys have been working together for many, many years. Data security, chips, HPE, Gen11. Take us through the relationship. What's the update?
>> Yeah, thanks. And I mean, HPE and Intel have been partners in delivering technology and delivering security for decades. And when a customer invests in an HPE server, like one of the new Gen11s, they're getting the benefit of the combined investment that these two great companies are putting into product security. On the Intel side, for example, we invest heavily in the way that we develop our products for security from the ground up, and we also continue to support them once they're in the market. You know, launching a product isn't the end of our security investment. You know, our Intel Red Teams continue to hammer on Intel products looking for any kind of security vulnerability for a platform that's in the field. We also invest heavily in the external research community through our bug bounty programs, to harness the entire creativity of the security community to find those vulnerabilities, because that allows us to patch them and make sure our customers are staying safe throughout that platform's deployed lifecycle. You know, in 2021, between Intel's internal red teams and our investments in external research, we found 93% of our own vulnerabilities. Only a small percentage were found by unaffiliated external entities. >> Cole, HPE has a great track record and a long history serving customers around security with the solutions you guys have had. With Gen11, it's more important than ever. Can you share your thoughts on the talent gap out there? People want to move faster, breaches are happening at a higher velocity. They need more protection now than ever before. Can you share your thoughts on why these breaches are happening, what you guys are doing, and how you guys see this happening from a customer standpoint? What do you guys fill in with Gen11 solutions?
>> You bet. You know, when you hear about the relentless pursuit of innovation from our partners, and our engineering organizations in India, and Taiwan, and the Americas all collaborating together years in advance, it's about delivering solutions that help protect our customers' environments. But what you hear Mike talking about is that it's also about keeping 'em safe. Because you look to the market, right? What you see, at least from our data from 2021, is that breaches are still happening, and a lot of it has to do with the fact that there is just a lack of adequate security staff with the necessary skills to protect the customers' applications and ultimately the workloads. And that's how these breaches are happening. Because ultimately you need to have some sort of control and visibility of what's going on out there. And what we were talking about earlier is time. From the time some incident happens, the blast radius can be tremendous in today's technically advanced world. And so you have to identify it and then correct it quickly, and that's why this continued innovation and partnership is so important, to help work together to keep up. >> You guys have had a great track record with Intel-based platforms with HPE. Gen11's a really big part of the story. Where do you see that impacting customers? Can you explain the benefits of what's going on with Gen11? What's the key story? What's the most important thing we should be paying attention to here? >> I think there's probably three areas as we look into this generation. And again, this is a point in time, we will continue to evolve. But at this particular point it's about, you know, a fundamental approach to our security enablement, right? Partnering as a Tier 1 OEM with one of the best in the industry, right? We can deliver systems that help protect some of the most critical infrastructure on earth, right?
I know of some things that are required to have a non-disclosure because it is some of the most important jobs that you would see out there. And working together with Intel to protect those specific compute workloads, that's a serious deal that protects not only state, and local, and federal interests, but, really, a global one. >> This is a really- >> And then there's another one- Oh sorry. >> No, go ahead. Finish your thought. >> And then there's another one that I would call our uncompromising focus. We work in the industry, we lead and partner with those on, I would say, the good side. And we want to focus on enablement through a specific capability set, let's call it our global operations, and that ability to protect our supply chain and deliver infrastructure that can be trusted into an operating environment. You put all those together and you see very significant and meaningful solutions together. >> The operating benefits are significant. I just want to go back to something you said before about the joint NDAs and the relationship you kind of unpacked. That to me, you know, I heard you guys say "from sand to server," I love that phrase, because, you know, it's silicon into the server. But this is a combination you guys have with HPE and Intel supply-chain security. I mean, it's not just like you're getting chips and sticking them into a machine. This is, like, there's an in-depth relationship on the supply chain that has a very intricate piece to it. Can you guys just double down on that and share how that works and why it's important? >> Sure, so why don't I go ahead and start on that one. So, you know, as you mentioned, the supply chain that ultimately results in an end user pulling, you know, a new Gen11 HPE server out of the box, you know, started way, way back.
And we, you know, Intel, for our part, you know, invest heavily in making sure that our entire supply chain to deliver all of the Intel components that are inside that HPE platform has been protected and monitored ever since, you know, its inception at any one of the 14,000, you know, Intel vendors that we monitor as part of our supply-chain assurance program. I mean, we, you know, Intel, you know, invest heavily in compliance with guidelines from places like NIST and ISO, as well as, you know, doing best practices under things like the Transported Asset Protection Alliance, TAPA. You know, we have been intensely invested in making sure that when a customer gets an Intel processor, or any other Intel silicon product, it has not been tampered with or altered during its trip through the supply chain. HPE then is able to pick up those components that we deliver, and add onto that their own supply-chain assurance when it comes down to delivering, you know, the final product to the customer. >> Cole, do you want to- >> That's exactly right. Yeah, I feel like that integration point is a really good segue into why we're talking today, right? Because that then comes into a global operations network that is pulling together these servers and able to deploy 'em all over the world. And as part of the Gen11 launch, we have security services that allow 'em to be hardened from our factories to that next stage into that trusted partner ecosystem for system integration, or directly to customers, right? So that ability to have that chain of trust. And it's not only about attestation and knowing what, you know, came from whom, because, obviously, you want to trust and make sure you're getting the parts from Intel to build your technical solutions.
But it's also about some of the provisioning we're doing in our global operations, where we're putting cryptographic identities and manifests of the server and its components and moving it through that supply chain. So you talked about this common challenge we have of assuring no tampering of that device through the supply chain, and that's why this partnering is so important. We deliver secure solutions, we move them, you're able to see and control that information to verify they've not been tampered with, and you move on to your next stage of this very complicated and necessary chain of trust to build, you know, what some people are calling zero-trust type ecosystems. >> Yeah, it's interesting. You know, a lot goes on under the covers. That's good though, right? You want to have greater security and platform integrity, and if you can abstract away the complexity, that's key. Now one of the things I like about this conversation is that you mentioned this idea of a hardware-root-of-trust set of technologies. Can you guys just quickly touch on that? Because that's one of the major benefits we see from this combination of the partnership. It's not just each party doing something, it's the combination. But this notion of hardware-root-of-trust technologies, what is that? >> Yeah, well, why don't I go ahead and start on that, and then, you know, Cole can take it from there. Because we provide some of the foundational technologies that underlie a root of trust. Now the idea behind a root of trust, of course, is that you want your platform, from the moment that first electron hits it from the power supply, to have a chain of trust, so that all of the software, firmware, and BIOS it loads to bring that platform up into an operational state is trusted. If you have a breach in one of those lower-level code bases, like in the BIOS or in the system firmware, that can be a huge problem.
It can undermine every other software-based security protection that you may have implemented up the stack. So, you know, Intel and HPE work together to coordinate our trusted boot and root-of-trust technologies to make sure that when a customer, you know, boots that platform up, it boots up into a known good state so that it is ready for the customer's workload. So on the Intel side, we've got technologies like our Trusted Execution Technology, or Intel Boot Guard, that then feed into the HPE iLO system to help, you know, create that chain of trust that's rooted in silicon, to be able to deliver that known good state to the customer so it's ready for workloads. >> All right, Cole, I got to ask you, with Gen11 HPE platforms that have 4th Gen Intel Xeon, what are the customers really getting? >> So, you know, what a great setup. I'm smiling because it has a good answer, because, one, to be clear, this isn't the first time we've worked on this root-of-trust problem. You know, we have a construct that we call the HPE Silicon Root of Trust. You know, it's an industry-standard construct, it's not a proprietary solution to HPE, but it does follow some differentiated steps that we like to say make a little difference in how it's best implemented. And where you see that is that tight, you know, Intel Trusted Execution exchange. The Intel Trusted Execution exchange is a very important step to assuring that root of trust in that HPE Silicon Root of Trust construct, right? So they're not different things, right? We just have an umbrella that we pull under our ProLiant, because there's iLO, our BIOS team, CPLDs, firmware. But I'll tell you this: with Gen11, you know, while keeping all that moving forward would be good enough, we are not holding to that. We are moving forward. With our uncompromising focus, we want to drive more visibility into that Gen11 server, specifically into the PCIe lanes.
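The chain of trust Mike and Cole describe — each boot stage measured and checked against a known good value before control moves on — can be sketched as a toy model. To be clear, this is an illustration only, not Intel Boot Guard's or HPE iLO's actual implementation; all stage names, image contents, and values here are hypothetical:

```python
import hashlib

# Hypothetical known-good measurements for each boot stage. In a real
# root of trust these values are anchored in silicon or protected
# firmware, not in ordinary software like this sketch.
KNOWN_GOOD = {
    "bios": hashlib.sha256(b"bios-image-v1").hexdigest(),
    "firmware": hashlib.sha256(b"fw-image-v1").hexdigest(),
    "bootloader": hashlib.sha256(b"loader-image-v1").hexdigest(),
}

def measure(stage, image):
    # Hash the stage's image and compare it to the known-good value.
    return hashlib.sha256(image).hexdigest() == KNOWN_GOOD[stage]

def boot(stages):
    # Verify each stage before control would pass to it; any mismatch
    # halts the chain instead of booting compromised code.
    for stage in ("bios", "firmware", "bootloader"):
        if not measure(stage, stages[stage]):
            return f"halt: {stage} failed verification"
    return "known good state"

print(boot({
    "bios": b"bios-image-v1",
    "firmware": b"fw-image-v1",
    "bootloader": b"loader-image-v1",
}))  # prints: known good state
```

What makes the real mechanisms trustworthy is that the first measurement and the reference values are anchored in hardware rather than in software an attacker could rewrite, which is the point of rooting the chain in silicon.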
And now you're going to be able to see, and measure, and make policies to have control and visibility of the PCIe devices, like storage controllers, NICs, direct-connect NVMe drives, et cetera. You know, if you follow the trends of where the industry would like to go, all the components in a server would be able to be seen and attested for full infrastructure integrity, right? So this is a meaningful step forward, not only in the greatness we do together, but, I would say, in a little uncompromising focus on this problem, doing a little bit more to make Gen11 Intel servers just a little better for the challenges of the future. >> Yeah, the Tier 1 partnership is really kind of highlighted there. Great, great point. I got to ask you, Mike, on the 4th Gen Xeon Scalable capabilities, what does it do for the customer with Gen11 now that they have these breaches? Does it eliminate stuff? What's in it for the customer? What are some of the new things coming out with the Xeon? You're at Gen4, Gen11 for HP, but you guys have new stuff. What does it do for the customer? Does it help eliminate breaches? Are there things that are inherent in the product that HP is jointly working with you on, or that you're contributing to the relationship, that we should know about? What's new? >> Yeah, well, there's so much great new stuff in our new 4th Gen Xeon Scalable processor. This is the one that was codenamed Sapphire Rapids. I mean, you know, more cores, more performance, AI acceleration, crypto acceleration, it's all in there. But one of my favorite security features is one called Intel Control-Flow Enforcement Technology, or Intel CET. And why I like CET is because I find the attack that it is designed to mitigate is just evil genius.
This type of attack, which is called a return-, jump-, or call-oriented programming attack, is designed not to bring a whole bunch of new identifiable malware into the system, you know, which could be picked up by security software. What it is designed to do is to look for little bits of existing code already on the server. So if you're running, say, a web server, it's looking for little bits of that web-server code that it can then execute in a particular order to achieve a malicious outcome, something like opening a command prompt or escalating its privileges. Now, in order to get those little code bits to execute in an order, it has a control mechanism, and each of the different types of attacks uses a different control mechanism. But what CET does is it gets in there and disrupts those control mechanisms, using hardware to prevent those particular techniques from being able to dig in and take effect. So CET can, you know, disrupt them and make sure that software behaves safely and as the programmer intended, rather than picking off these little arbitrary bits in one of these return-, jump-, or call-oriented programming attacks. Now, it is a technology that is included in every single one of the new 4th Gen Xeon Scalable processors. And so it's going to be an inherent characteristic that customers can benefit from when they buy a new Gen11 HPE server. >> Cole, more goodness from Intel there impacting Gen11 on the HPE side. What's your reaction to that? >> I mean, I feel like this is exactly why you do business with the big Tier 1 partners, because you can put, you know, trust in where it comes from, through the global operations, literally having it hardened from the factory it's finished in, moving into your operating environment, and then now protecting against attacks in your web hosting services, right? I mean, this is great. I mean, you'll always have an attack on data, you know, as you're seeing in the data.
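The control-mechanism disruption Mike describes can be illustrated with a toy shadow-stack model. This is a simplification for intuition only — the real feature lives in hardware, and nothing below is Intel's implementation; the addresses and names are hypothetical:

```python
class ControlFlowViolation(Exception):
    """Raised when a return does not match the protected copy."""

class ShadowStack:
    # Toy model: every call records its return address in a protected
    # copy; every return must match that copy or execution is blocked.
    def __init__(self):
        self._protected = []

    def on_call(self, return_addr):
        self._protected.append(return_addr)

    def on_return(self, return_addr):
        expected = self._protected.pop()
        if return_addr != expected:
            # A ROP-style attack rewrote the on-stack return address;
            # the protected copy disagrees, so the "hardware" faults.
            raise ControlFlowViolation(
                f"return to {return_addr:#x}, expected {expected:#x}")

shadow = ShadowStack()
shadow.on_call(0x401000)
shadow.on_return(0x401000)      # legitimate return: allowed

shadow.on_call(0x401000)
try:
    shadow.on_return(0x7fff00)  # attacker-redirected return to a "gadget"
except ControlFlowViolation as err:
    print("blocked:", err)
```

In the hardware version, the shadow stack lives in memory ordinary code cannot write, so a chain that overwrites the normal return address is caught the moment the mismatched return executes — which is exactly the control mechanism a return-oriented attack depends on.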
But the more contained, the more information, and the more control and trust we can give to our customers, it's going to make their job a little easier in protecting whatever job they're trying to do. >> Yeah, and enterprise customers, as you know, are always trying to keep up to date on the skills and battle the threats. Having that built in under the covers is a real good way to kind of help free up their time, and also protect them, which is really killer. This is a big, big part of the Gen11 story here. Securing the data, securing compute, that's the topic here for this special Cube conversation, engineered for a hybrid world. Cole, I'll give you the final word. What should people pay attention to? Gen11 from HPE, bottom line, what's the story? >> You know, it's not the first time, it's not the last time, but it's our fundamental security approach to just helping customers through their digital transformation, defending with an uncompromising focus to help protect our infrastructure in these technical solutions. >> Cole Humphreys is the global server security product manager at HPE. He's got his finger on the pulse, keeping everyone secure on the platform integrity front there. Mike Ferron-Jones is the Intel product manager for data security technology. Gentlemen, thank you for this great conversation, getting into the weeds a little bit with Gen11, which is great. Love the hardware root-of-trust technologies, Better Together. Congratulations on Gen11 and your 4th Gen Xeon Scalable. Thanks for coming on. >> All right, thanks, John. >> Thank you very much, guys, appreciate it. Okay, you're watching "theCube's" special presentation, "Securing Compute, Engineered for the Hybrid World." I'm John Furrier, your host. Thanks for watching. (upbeat music)
Ed Macosky, Boomi | AWS re:Invent 2022
(upbeat music) >> Hello, CUBE friends, and welcome back to Vegas. Lisa Martin here with John Furrier. This is our third day of coverage of AWS re:Invent. There are somewhere between 50,000 and 60, 70,000 people here. The excitement is palpable. The energy in the room has been on fire since Monday night. John, we love talking, we love re:Invent. We love talking about AWS and its incredible ecosystem of partners, and we're going to be doing that next. >> Yeah, I mean, 10 years of theCUBE, we've been here since 2013. Watching it grow as the cloud computing invention. And then the ecosystem has just been growing, growing, growing, at the same time as innovation. And that's this next segment with a company that we both have covered deeply. Boomi is going to be a great segment. Looking forward to it. >> We have, we have. And speaking of innovation and Boomi, we have a four-time CUBE guest back with us. Ed Macosky joins us, Chief Innovation Officer at Boomi. And it's great to see you in person. >> Yeah, great to be here. Thanks for having me. >> What's going on at Boomi? I mean, I know it's up and to the right, and it continues to go this way. What's going on? >> Yeah, we continue to grow. We're really focused with AWS on the cloud and app modernization. Most of our projects and many of our customers are in this modernization journey from an enterprise perspective, moving from on-premises, trying to implement multicloud, hybrid cloud, that sort of thing. But what we're really seeing is this modernization choke point that a lot of our customers are facing in that journey, where they just can't get over the hump. And a lot of them come to us with failing projects where they're saying, "Hey, I've got maybe this anchor of a legacy data source or applications that I need to bring in temporarily, or I need to keep filling that."
So we help with integrating these workflows, integrating these applications, and help with that lift and shift, keeping our customers' projects from failing and quickly bringing them to the cloud. >> You know, Ed, we've been talking with you guys for many, many years with theCUBE, and look at the transition, how the market's evolved. If you look at the innovation going on now, I won't say it's an innovator's dilemma, because there's a lot of innovation happening. It's becoming an integrator's dilemma. And I was talking with some of your staff. Booth traffic's up, great leads coming in. You mentioned it on the keynote in a slide. I mean, the world spun in the direction of Boomi, with all your capabilities around integration, understanding how data works. All the themes here at re:Invent are kind of in that conversation top track that we've been mentioning, and Boomi, you guys have been building around that. Explain why that's happening. Am I right? Am I getting that right, or can you share your thoughts? >> Yeah, absolutely. We're in a great spot. I mean, given the way the economy's going today, people are, again, trying to do more with less. But there is this modernization journey that I talked about, and there's an explosion of SaaS applications, cloud technologies, data sources, et cetera. And not only is it about integrating data sources and automating workflows, but implementing things at scale, making sure you have high data quality, high data governance, security, et cetera. And Boomi sits right in the middle of providing solutions for all of that to make a business more efficient. Not only that, but you can implement things very, very quickly, 'cause we're a low-code platform. It's not just about this hardcore technology that's really hard to implement. You can do it really quickly with our platform.
>> Speaking of transformation, one of the things John does every year ahead of re:Invent is he gets to sit down with the CEO of AWS and really does a great interview; if you haven't seen it, check it out on siliconangle.com. Really kind of a preview of what we're going to expect at the show. And one of the things Adam said to you was that CIOs and CEOs are coming to him not wanting to talk about technology. They want to talk about transformation, business transformation. It's not so much about digital transformation anymore, it's about transforming businesses. Are you hearing customers come to you with the same "help us transform our business so we can be competitive, so we can meet customer demand"? >> Oh, absolutely. It's no longer about tools and technology and providing people with paint to paint on a canvas. We're offering solutions on the AWS marketplace. We have five solutions that we launched this year to get people up and running very quickly based on business problems, from disbursement, to lead-to-cash with Salesforce and NetSuite, to business-to-business integrations and EDI dashboarding, and that sort of thing. We also have our own marketplace that provides these solutions and gives our customers the ability to visualize what they can do with our platform to actually solve business problems. Again, it's not just about tooling and technology and how to connect things. >> How's the marketplace relationship going for you? Are you guys seeing success there? >> Yeah, we're seeing a lot of success. I mean, in fact, we're going to be doubling down in the next year. We're going to be, we haven't announced it yet, but we're going to be announcing some new solutions. >> I guess we're announcing it now. >> No, I'm not going to get into specifics. But we're going to be putting more and more solutions on the marketplace, and we're going to be offering more ways to consume and purchase our platform on the marketplace in the next couple of months.
Ed, talk about what's new with Boomi real quick. I know you guys have new connectors in Early Access. What's been announced? What's coming? What are the new things folks should pay attention to from a product standpoint? >> Yeah, so you mentioned the connectors. We have 32 new connectors. And by the way, in our ecosystem, our customers have connected 199,970 unique things. Amazon SQS is one of those in that number. So that's the kind of scale. >> What's the number again? >> 199,970. At least that's the last I checked earlier. >> That's a good recall right there. Exact number. >> It's an exciting number, 'cause we're scaling very, very rapidly. But the other thing that's exciting is we announced our event streaming service that we want to bring to our cloud. We've relied on partners in the past to do that for us, but it's been a very critical need that our customers have asked for, so we're integrating that into our platform. We're also going to be focusing more and more on our data management capabilities, because, as I mentioned a little earlier, if bad data's going in and bad data's going out, bad data's going everywhere. So we have the tools and capability to govern and manage data with high-quality solutions. So we're going to invest more and more in that, 'cause that's what our customers are asking us for. >> Data governance is a challenge for any business in any industry. Too much access is a huge risk; not enough access to the right people means you can't really extract the insights from data to be able to make data-driven decisions. How do you help customers really walk that fine line of data governance? >> Very specifically, as part of our iPaaS platform we have a data catalog and data prep capability within the platform itself that gives citizens in the organization the ability to catalog data in a secure way, based on what they have the capability to access.
But not only that, the integrator can use the data catalog to actually catalog the data and understand what needs to be integrated, and how they can make their business more efficient by automating the movement of data and sharing the data across the organization. >> On the innovation side, I want to get back to that again, because I think this integration innovation angle is something that we talked about with Adam Selipsky. Our stories hitting SiliconANGLE right now are all about the partner ecosystems. We've been highlighting some of the bigger players emerging. You guys are out there. You've got Databricks, Snowflake, MongoDB, where they're partnering with Amazon, but they're not just ISVs, they're platforms. You guys have your own ISVs. You have your own customers. You were doing low-code before no-code was popular. So where are you guys at on that wave? You've got a good customer base, share some names. What's going on with the customers? Are they becoming more developer oriented? 'Cause let's face it, your customers that are working on Boomi, they're developers. >> Yes. >> And so they've got tools. You're enablers, so you're a platform on Amazon. >> We are a platform on Amazon. >> We call that supercloud, but that's where this new shift is happening. What's your reaction to that? >> Yes, so I guess we are a supercloud on Amazon, and our customers and our partners are developers on our platforms themselves. So most of our partners are also customers of ours, and they will be implementing their own integrations in the backend of their platforms, into their backend systems, to do things like billing and monitoring of their own usage of their platforms. But with our customers, they're also Amazon customers who are trying to connect in a multicloud way, or many times just within the Amazon ecosystem. Or even customers like Kenco, and Tim Heger, who did a presentation from HealthBridge.
They're also doing B2B connectivity to bring information from their partners into their ecosystem within their platform. So we handle all of the above. So now we are an independent company and it's nice to be a central part of all of these different ecosystems. And where I find myself in my role a lot of times is literally connecting different platforms and applications and SI partners to solve these problems 'cause nobody can really see it themselves. I had a conversation earlier today where someone would say, "Hey, you're going to talk with that SI partner later today. They're a big SI partner of ours. Why don't they develop solutions that we can go to market together to solve problems for our customers?" >> Lisa, this is something that we've been talking about a lot where it's an and conversation. My big takeaway from Adam's one-on-one and re:Invent so far is they're not mutually exclusive. There's an and. You can be an ISV and this platforms in the ecosystem because you're enabling software developers, ISV as they call it. I think that term is old school, but still independent software vendors. That's not a platform. They can coexist and they are, but they're becoming on your platform. So you're one of the most advanced Amazon partners. So as cloud grows and we mature and what, 13 years old Amazon is now, so okay, you're becoming bigger as a platform. That's the next wave. What happens in that next five years from there? What happens next? Because if your platform continues to grow, what happens next? >> So for us, where we're going is connecting platform providers, cloud providers are getting bigger. A lot of these cloud providers are embracing partnerships with other vendors and things and we're helping connect those. So when I talk about business-to-business and sharing data between those, there are still some folks that have legacy applications that need to connect and bring things in and they're just going to ride them until they go away. 
That is a requirement, but at some point that's all going to fall by the wayside. But where the industry is really going for us is, it's about automation and quickly automating things and again, doing more with less. I think Tim Heger had a quote where he said, "I don't need to use Michelangelo to come paint my living room." And that's the way he thinks about low-code. It's not about, you don't want to just sit there and code things and make an art out of coding. You want to get things done quickly and you want to keep automating your business to keep pushing things forward. So a lot of the things we're looking at is not just about connecting and automating data transformation, and that's all valuable, but how do I get someone more productive? How do I automate the business in an intelligent way, more and more, to push them forward? >> Out of the box solutions versus platforms. You can do both. You can build a platform. >> Yes. >> Or you can just buy out of the box. >> Well, that's what's great about us too, because we don't just provide solutions. We provide solutions many times as a starting point, or the way I look at it, it's the art of the possible, a lot of what we give, 'cause then our customers can take our low-code tooling and say, wow, I like this solution, but I can really take it to the next step, almost in like an open source model, and just quickly iterate and drive innovation that way. And I just love seeing our, a lot of it for me is just our ecosystem and our partners driving the innovation for us. >> And driving that speed for customers. When I had the chance to interview Tim Heger myself last month, he was talking about Boomi Integration and Flow enabling him to do integration 10x faster than before, and HealthBridge built their business on Boomi. They didn't replace the legacy solution, but he had experience with some of your big competitors and chose Boomi and said, "It is 10x faster."
So he's able to deliver to those, and it's a great business helping people pay for health issues if they don't have the funds to do that. So much faster than they could have had they chosen a different technology. >> Yeah, and also what I like about the HealthBridge story is you said they started with Boomi's technology. So I like to think we scale up and scale down. So many times when I talk to prospects or new customers, they think that our technology is too advanced or too expensive or too big for them to go after, and they don't think they can solve these problems like we do with enterprises. We can start with you as a startup going with SaaS applications, trying to be innovative in your organization to automate things and scale. As you scale, the company will be right there along with you to scale into very, very advanced solutions, all in a low-code way. >> And also helping folks to scale up and down during these macroeconomic headwinds we're facing. That's really important for businesses to be able to do for cost optimization. But at the end of the day, that company has to be a data company. They have to be able to make sure that the data matches. It's there. They know what they have. They can actually facilitate communications, conversations, and deliver what the end user customer is demanding, whether it's a retailer, a healthcare organization, a bank, you name it. >> Exactly. And another thing with today's economy, a lot of people forget with integration or automation tooling, once you get things implemented, in many traditional forms you got to manage that long term. You have to have a team to do that. Our technology runs autonomously. I hear from our customers over and over again. I just said it, sometimes I'll walk away for a month and come back and wow, Boomi's still running. I didn't realize it. 'Cause we have technology that continues to patch itself, heal itself, continue running autonomously.
That also saves in a time like now, where you don't have to worry about sending teams out to patch and upgrade things on a continuous basis. We take care of that for our customers. >> I think you guys can see a lot of growth with this recession looming. You guys fit well in the marketplace. As people figure out how to right size, you guys fit right nicely into that equation. I got to ask you, what's ahead for 2023 for Boomi? What can we expect to see? >> Yeah, what's ahead? I briefly mentioned it earlier, but the new service, we're really excited about that, 'cause it's going to help our customers to scale even further and bring more workloads into AWS, and more workloads that we can solve challenges for our customers. We've also got additional solutions we're looking at launching on AWS marketplace. We're going to continue working with SIs and GSIs and our ISV ecosystem to identify more and more enterprise-grade solutions and verticals and industry-based solutions that we can take out of the box and give to our customers. So we're just going to keep growing. >> What are some of those key verticals? Just curious. >> So we're focusing on manufacturing, the financial services industry. I don't know, maybe it's a vertical, but higher ed's another big one for us. So we have over a hundred universities that use our technology in order to automate grant submissions, student management, different aspects, that sort of thing. Boise State is one of them that's modernized on AWS with Boomi technology. So we're going to continue rolling on that front as well. >> Okay. Is it time for the challenge? >> It's time for the challenge. Are you ready for the challenge, Ed? We're springing this on you, but we know you, so we know you can nail this. >> Oh no. >> If you were going to create your own sizzle reel, and we're creating a sizzle reel that's going to go on Instagram reels, and you're going to be a star of it, what would that sizzle reel say?
Like if you had a billboard or a bumper sticker, what would that say about Boomi? Boom, powerful story. >> Well, we joked about this earlier, but I'd have to say, Go Boomi it. This isn't real. >> Go Boomi it, why? >> Go Boomi it, because it's such a succinct way of saying it. That terminology came to us from our customers, because Boomi becomes a verb within an organization. They'll typically start with us and they'll solve an integration challenge or something like that. And then we become viral in a good way within an organization, where our customers, Lisa, you mentioned it earlier before the show, you love talking to our customers 'cause they're so excited and happy and love our technology. They just keep finding more ways to solve challenges and push their business forward. And when a problem comes up, an employee will typically say to another, go Boomi it. >> When you're a verb, that's a good thing. >> Ed: Yes it is. >> Splunk, go Splunk it. That was a verb for log files. Kleenex, tissue. >> Go Boomi it. Ed, thank you so much for coming back on your fourth time. So next time we see you, it will be your fifth time. We'll get you that five-timers club jacket like they have on SNL next time. >> Perfect, can't wait. >> We appreciate your insight, your time. It's great to hear what's going on at Boomi. We appreciate it. >> Ed: Cool. Thank you. >> For Ed Macosky and John Furrier, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage. (upbeat music)
Ajay Patel, VMware | AWS re:Invent 2022
>>Hello everyone. Welcome back to the Cube Live, AWS re:Invent 2022. This is our first day of three and a half days of wall to wall coverage on the Cube. Lisa Martin here with Dave Vellante. Dave, it's getting louder and louder behind us. People are back. They're excited. >>You know what somebody told me today? Hm? They said that less than 15% of the audience is developers. I'm like, no way. I don't believe it. But now maybe there's a redefinition of developers, because it's all about the data and it's all about the developers in my mind. And that'll never change. >>It is. And one of the things we're gonna be talking about is app modernization, as customers really navigate the journey to do that so that they can be competitive and, and meet the demands of customers. We've got an alumni back with us to talk about that. Ajay Patel joins us, the SVP and GM of the Modern Apps and Management business group at VMware. Ajay, welcome back. Thank >>You. It's always great to be here, so thank you David. Good to see >>You. Isn't it great? It's great to be back in person. So the VMware Tanzu team is here back at re:Invent on the show floor. There we go. Talk about some of the things that you guys are doing together, innovating with AWS. >>Yeah, so it's, it's great to be back in person after multiple years, and the energy level continues to amaze me. The partnership with AWS started on the infrastructure side with VMware Cloud on AWS. And now with Tanzu, we're extending it to the application space. And the work here is really about how do you make developers productive? To your earlier point, it's all about developers. It's all about getting applications into production securely, safely, continuously. And Tanzu is all about making that bridge between great applications being built, getting them deployed, and running and operating at scale. And EKS is a dominant Kubernetes platform.
And so the better together story of Tanzu and EKS is a great one for us, and we're excited to announce some innovations in that area. >>Well, Tanzu was so front and center at VMware Explore. I wasn't at, at VMware Explore Europe. Right. But I'm sure it was a similar kind of focus. When are customers choosing Tanzu? Why are they choosing Tanzu? What's, what's, what's the update since last August when >>We, you know, the market settled into three main use cases. One is all about developer productivity. You know, consistently we're all dealing with skill set gap issues. How do we make every developer a productive, modern developer? And so Tanzu is all about enabling that developer productivity. And we can talk quite a bit about it. Second one is security's front and center, and security's being shifted left, right into how you build great software. How do you secure that through the entire supply chain process? And how do you run and operationalize security at runtime? So we're hearing consistently about making the secure software supply chain the heart of what our solution is. And third one is, how do I run and operate the modern application at scale across any Kubernetes, across any cloud? These are the three themes that are continuing to get resonance. And powering all of this, David, is the formation of platform teams. I just finished a study with Bain Consulting doing some research for me. 40% of organizations now have some form of a central team that's responsible for what we call platform engineering, building platforms to make developers productive. That is a big change since about two years ago even. So this is becoming mainstream, and customers are really focusing on delivering value by making developers productive. >>Now, and the other nuance that I see, and you kinda see it here in the ecosystem, but when you talk about your customers with platform engineering, they're actually building their, they're pointing their business. They're taking a page outta AWS, pointing their businesses to their customers, right? Becoming software companies, becoming cloud companies and really generating new forms of revenue. >>You know, the interesting thing is, some of my customers I would never have thought of as leading edge are retailers. Yeah. And not your typical Starbucks. To give you a great example, I have an auto parts company that's completely modernizing how they deliver point of sale all the way to the supply chain. All built on EKS at scale. You'd typically think of a financial services company or a telco leading the pack. But I'm seeing innovation in India. I'm seeing the innovation in EMEA coming out of there, across the board. Every industry is becoming a product company. A digital twin, as we would call it. Yeah. And that means they become software houses. Yeah. They behave more like you and I at this event versus a, a traditional enterprise. >>And they're building their own ecosystems, and that ecosystem's generating data that's generating more value. And it's just this cycle. It's, >>It's amazing, it's a flywheel. So innovation continues to grow. Talk about really unlocking the developer experience and delivering to them what they need to modernize apps, to move as fast and quickly as they want to. >>So, you know, I think AWS coined this term undifferentiated heavy lifting. If you think of a typical developer today, how much effort does he have to put in before he can get a single line of code out in production? If you can take away all the complexity, typically security compliance is a big headache for them, right? Developer doesn't wanna worry about that. Infrastructure provisioning, getting all the configurations right, is a headache for them. Being able to understand what size of infrastructure or resource to use cost effectively. How do you run it operationally? Cuz the application team is responsible for the operational cost of the product or service. So these are the, you know, undifferentiated heavy lifting that developers want to get away from. So they wanna write great code, build great experiences. And we've always talked about frameworks as a way to abstract away the complexity. And so for us, there's a massive opportunity to say, how do I simplify and take away all the heavy lifting to get an idea into production seamlessly, continuously, securely. >>Is that part of your partnership? Because you think about AWS, they're really not about frameworks, they're about primitives. I mean, Werner Vogels even talks about that in his, in his speech, you know, but, but that makes it more challenging for developers. >>No, actually, if you look at some of their initial investments around Proton and the other work they're starting to do, they've recognized, you know, PaaS is a bad, bad word, but the outcome a platform as a service offers is what everybody wants. Just talking to the AWS leader responsible for this area, he actually has a separate build team. He didn't know what to call the third team. He has a Kubernetes team, he has a serverless team, and he has a build team. And that build team is everything above Kubernetes to make the developer productive. Right. And the ecosystem to bring together to make that happen. So I think AWS is recognizing that primitives are great for the elite developers, but if they want to get the mass scale and adoption in the business, if you will, they're gonna have to provide a richer set of building blocks and reduce the complexity, and a partnership like ours makes that a reality. And what I'm excited about is there's a clear gap here, and Tanzu's the best platform to kind of fill that gap. Well, And I, I think that, you know, they're gonna double down triple, I just wrote about this double down, triple down on the primitives. Yes. They have to have the best, you know, servers and storage and database. And I think the way they, they, I call it taping the seams, is with the ecosystem. Correct. You know, and they, nobody has a, a better ecosystem. I mean, you guys are, you know, the, the poster child for the ecosystem and now this even exceeds that. But partnering up, that's how they >>Continue to, and they're looking for someone who's open, right? Yeah. Yeah. And so one of the first questions is, you know, are you proprietary or open? Because one of the things they're fighting against is the lock in. So they can find a friendly partner who is open source led, you know, upstream committing to the code, delivering that innovation, and bringing the ecosystem into orchestrated choreography. It's like making music, right? Running an application delivery team is like running a, a musical orchestra. There's so many moving parts here, right? How do you make them sing together? And so if Tanzu and our platform can help them sing and drive more of their services, it's only more valuable for them. And >>I think the partners would generally say, you know, AWS always talking about customer obsession. It almost becomes this bromide, you go, yeah, yeah. But I actually think in the field, the, the sellers would say, yeah, we're gonna do what the customer wants, if that means we're gonna partner up. Yeah. And I think AWS's comp structure makes it sort >>Of, I learned today how, how incentives with marketplaces work. Yeah. And it is powerful. It's very powerful. Yeah. Right. So you line up the sales incentive, you line up the customer and the benefits, you line up bringing the ecosystem to drive business results, and so everybody wins. And which is what you're seeing here, the excitement and the crowd is really the whole, all boats are rising. Yeah. Yeah. Right, right. And it's driven by the fact that customers are getting true value out of it. >>Oh, absolutely. Tremendous value. Speaking of customers, give us an example of a customer story that you think really articulates the value of what Tanzu is delivering, especially making that developer experience far simpler. What are some of those big business outcomes that that delivers? >>You know, at Explore we had the CIO of CVS, and with their acquisition of Aetna and CVS Health, they're transforming the, the health industry. And they talked about the whole covid and then how they had to deliver the number of, you know, vaccines, to you and I, and how quickly they had to deliver on that. He talked about Tanzu and how they leverage, leverage the Tanzu platform to get those new applications out and start to build that. And Ro was basically talking about his number one priority is how does he get his developers more productive? Number two priority, how does he make sure the apps are secure? Number three priority, how does he do it cost effectively in the world, particularly where we're heading towards where, you know, the budgets are gonna get tighter. So how do I move more dollars to innovation while I continue to drive more efficiency in my platform? And so cloud is the future. How does he make the best use of the cloud, both for his developers and his operations team? Right? >>What's happening in serverless? I, in 2017, Andy Jassy was in the Cube. He said if AWS, or if Amazon, had to build it all over again, they would build it using serverless. And that was a big quote. We've mined that for years. And as you were talking about developer productivity, I started writing down all the things developers have to do. Yep. With it, they gotta, they gotta build a container image. Then they gotta deploy an EC2 instance. They gotta allocate memory, they gotta fence off the apps in a virtual machine. They gotta run the, you know, compute against the app as it goes, they gotta pay for all that. So, okay, what's your story on, what's the market asking for in terms of serverless? Because there's still some people who want control over the runtime. Help us sift through that. >>And it really comes back to the application pattern or the type you're running. If it's a stateless application that you need to spin up and spin down, serverless is awesome. Why would I wanna worry about scaling it up? I wanna set up some SLAs, SLIs, service level objectives or, or, or indicators, and then let the systems bring the resources I need as I need them. That's a perfect example for serverless, right? On the other hand, if you have a, a more of a workflow type application, there's a sequence, there's state. Try building an application using serverless where you had to maintain state between two, two steps in the process. Not so much fun, right? So I don't think serverless is the answer for everything, but for many use cases, the scale to zero is a tremendous benefit. Events happen. You wanna process something, work is done, you quietly go away. I don't wanna shut down the server and start it up, I want that to happen magically. So I think there's a role for serverless. So I believe Kubernetes and serverless are the new runtime platform. It's not one or the other. It's about marrying that around the application patterns. The DevOps team shouldn't care about it. That's an infrastructure concern. Let me just run the application, let the infrastructure manage the operations of it, whether it's serverless, whether it's Kubernetes clusters, whether it's orchestration, those are details, right? I, I shouldn't worry about it. Right. >>So we shouldn't think of those as separate architectures. We should think of it as an architecture, >>The continuum in some ways. Yeah. Of different application workload types. And, and that's a toolkit that the operator has at his disposal to configure, saying, where should that application run? Do I want control? You can run it on a Kubernetes cluster. Can I just run it on a serverless infrastructure and leave it to the cloud provider? Do it all for me. Sure. What, what was PaaS? PaaS was exactly that. Yeah. Yeah. Write the code, you do the rest. Yeah. Okay. Those are just elements of that. >>And then Knative is kinda in the middle, >>Right? Knative is just a technology that's starting to build that capability out in a standards-based way, to make serverless available consistently across all clouds. So I'm not building to a, a Lambda or a particular, you know, technology type. I'm building it in a standard way, in a standard programming model. And infrastructure just >>Works for me on any cloud. >>The whole idea: portability. Consistency. >>Right. Powerful. Yep. >>What are some of the things that, that folks can expect to learn from VMware Tanzu at AWS this week at the >>Show? Yeah, so there's some really great announcements. First of all, we're excited to extend our, our partnership with AWS in the area of EKS. What I mean by that is, traditionally we would manage an EKS cluster, give you visibility of what's running in there, but we weren't able to manage the lifecycle. With this announcement, we can give you full management of the lifecycle of EKS workloads. Our customers have 400 plus EKS clusters, multiple teams sharing those in a multi-tenanted way with common policy. And they wanna manage the full life cycle, including all the upstream open source components that make up Kubernetes. People think EKS is one thing; it's a collection of a lot of open, open source packages. We're making it simple to manage it consistently from a single place. On the security front, we're now making Tanzu Service Mesh available in the marketplace. >>And if you look at what a service mesh is, it's an overlay. It's an abstraction. I can create an idea of a global namespace that cuts across multiple VPCs. I'm, I'm hearing Amazon's gonna make some announcements around VPCs and how they stitch VPCs together. It's all moving towards this idea of abstractions. I can set policy at a logical level. I don't have to worry about data security and the communication between services. These are the things we're now enabling to make EKS even more productive, making it enterprise grade, enterprise ready. And so a lot of excitement from the EKS development teams as well to partner closely with us to make this an end to end solution for our >>Customers. Yeah. So I mean, under Jassy it was really driving those primitives and helping developers, and they're continuing that path, but also recognizing the need for solutions. And that's where the ecosystem comes in, >>Right? And the question is, what is that box? As you said last time, right? For the supercloud, there is a cloud infrastructure, which is becoming the new palette, but how do you make sense of the 300 plus primitives? How do you bring them together? What are the best practices, patterns? How do I manage that when something goes wrong? These are real problems that we're looking to solve. >>And if you're gonna have deeper business integration with the cloud and technology in general, you have to have that >>Abstraction. You know, one of the simple questions I ask is, how do you know you're getting value from your cloud investment? That's a very hard question. What's your trade off between performance and cost? Do you know, where's your security? When a Log4j happens, do you know all the open source packages you need to patch? These are very simple questions, but imagine having to do that when everybody's doing it in a bespoke manner using the set of primitives. You need a platform. The industry has shown, at scale, you have to start standardizing and building a consistent way of delivering and abstracting stuff. And that's where the next stage of the cloud journey >>And, and with the economic environment, I think people are also saying, okay, how do we get more? Exactly. We're in the cloud now. How do we get more? How do we >>Value out of the cloud? >>Exactly. Totally. >>How do we transform the business? Last question, AJ, for you, is, if you had a bumper sticker and you're gonna put it on your fancy car, what would it say about VMware Tanzu on AWS? >>I would say Tanzu accelerates apps. >>Love >>It. Thank you so much. >>Thank you. Thank you so much for joining us. >>Appreciate it. Always great to be here. >>Pleasure. Likewise. For our guest, I'm Dave Vellante. I'm Lisa Martin. You're watching The Cube, the leader in emerging and enterprise tech coverage.
Kate Hall Slade, dentsu & Flo Ye, dentsu | UiPath Forward5
>> The Cube presents UiPath Forward 5, brought to you by UiPath. >> Welcome back to the Cube's coverage of Forward 5, the UiPath customer event. This is the fourth Forward that we've been at. We started in Miami, had some great events. It's all about the customer stories. Dave Vellante with Dave Nicholson. Flo Ye is here; she's the director of engineering and development at dentsu, and Kate Hall is to her right. And Kate is the director of Automation Solutions at dentsu. Ladies, welcome to the Cube. >> Thanks so much. >> Thank you. Great to be here. >> Tell us about dentsu. You guys are a huge company, but give us the focus. >> Yeah, absolutely. Dentsu, it's one of the largest advertising networks out there, one of the largest in the world, with over 66,000 employees, and we're operating in a hundred plus countries. We're really proud to serve 95% of the Fortune 100 companies, household names like Microsoft, Procter and Gamble. If you've seen the Super Bowl ads last year, Larry David's ads for the crypto brand, that's a hilarious one for anyone who hasn't seen it. So we're just really proud to be here, and we really respect the creatives of our company. >> That was the best commercial of the Super Bowl by far, for sure. I said at the top that Dave and I were talking, UiPath's a cool company. You guys kinda look like cool people. You got cool jobs. Tell us about your respective roles. What do you guys do? >> Absolutely, absolutely. Well, I'm the director of engineering and automation, so what I really do is implement the automation operating model, connecting developers across five continents together, making sure that we're delivering and deploying automation projects up to the best standards set by the operating model. So it's a really, really great job, and we get to see all these brilliant minds across the world. >> And Kate, what's your role?
>> Yeah, and in the Automation Solutions vertical that I head up, the focus is really on converting business requirements into technical designs for Flo's developers to deliver. So making sure that we are managing our pipeline, sourcing the right ideas, prioritizing them according to the business's objectives, and making sure that we route them to the right place. So, does it need to be an automation first? Do we need to optimize the process? Does this make sense for citizen developers, or do we need to bring in the professional resources on Flo's team? >> So you're bilingual; you're like the translator. You speak geek and you speak business, right? Is that fair? Okay. So take me back; let's do a little mini case study here. How did you guys get started? I'm always interested, was this top down? Is top down required to be successful? Cuz it does feel like you can have bottoms up with RPA. But how did you guys get started? What was the journey like? >> Yeah, we started back in 2017 with a very traditional top down approach. So we delivered a couple of POCs working directly with UiPath. You know, going back those five years, we delivered those really highly scalable top down solutions that drove hundreds of thousands of hours of ROI for the business. However, as people began to embrace automation, they learned that this is something that could help them; it's not something that they should be afraid of that's going to take away their jobs. You know, dentsu is a young company with a lot of young creatives. They wanna make their lives better. So we were absolutely inundated with all of these use cases of, hey, I need a bot to do this, I need a bot to do that, it's gonna save me, you know, 10 hours a week, it's gonna save my team a hundred hours a month, et cetera, et cetera.
All of these smaller use cases that were gonna be hugely impactful for the individuals, their teams, even an entire department, but didn't have that scalable ROI for us to put professional development resources against. So starting in 2020, we really introduced the citizen development program to put the power into those people's hands so that they could create their own solutions. And that was really just a snowball effect, to tackle it from the bottom up as well as the top down. >> So a lot of young people, Dave, they're not threatened by robots; they're embracing it.
So what that enabled us to do is to unite developers from five continents together organically and we're now able to tap into their talent at a global scale. So we are really using this operating model to grow our automation practice in a scalable and also controlled manner. Okay. What I mean by that is that these developer originally were sitting in 18 plus markets, right? There's not much communication collaboration between them. >>And then we went in and bridged them together. What happened is that originally they were only delivering projects and use cases within their region and sometimes these use cases could be very, very much, you know, small scale and not really maximizing their talent. What we are now able to do is tap into a global automation pipeline. So we connecting these highly skilled people to the pipeline elsewhere, the use cases elsewhere that might not be within their regions because one of our focus, a lot of change I mentioned, right? One thing that will never change with our team, it's used automation to elevate people's potential. Now it's really a win-win situation cuz we are connecting the use cases from different pipelines. So the business is happy cuz we are delivering these high scalable solutions. We also utilizing these developers and they're happy because their skills are being maximized and then at the same time growing our automation program. So then that way the citizen development program so that the lower complexities projects are being delivered at a local level and we are able to innovate at a local level. >>I, I have so many questions flow based on what you just said. It's blowing my mind >>Here. It's a whole cycle. >>So let me start with how do you, you know, one of the, one of the concerns I had initially with RPA, cuz just you're talking about some very narrow use cases and your goal is to expand that to realize the potential of each individual, right? 
But early days I saw a lot of what I call paving the cow path, taking a process that was not a great process and then automating it, right? And that was limiting the potential. So how do you guys prioritize which processes to focus on and maybe which processes should be rethought, >>Right? Exactly. A lot of time when we do automation, right, we talk about innovations and all that stuff, but innovation doesn't happen with the same people sitting in the same room doing the same thing. So what we are doing now, able to connect all these people, different developers from different groups, we really bring the diversity together. That's diversity D diverse diversity in the mindset, diversity in the skill. So what are we really able to do and we see how we tackle this problem is to, and that's a problem for a lot of business out there is the short-termism. So there's something, what we do is that we take two approaches. One, before we, you know, for example, when we used to receive a use case, right? Maybe it's for the China market involving a specific tool and we just go right into development and start coding and all that good stuff, which is great. >>But what we do with this automation framework, which we think it's a really great service for any company out there that want to grow and mature their automation practice, it's to take a step back, think about, okay, so the China market would be beneficial from this automation. Can we also look at the Philippine market? Can we also look at the Thailand market? Because we also know that they have similar processes and similar auto tools that they use. So we are really able to make our automation in a more meaningful way by scaling a project just beyond one market. Now it's impacting the entire region and benefiting people in the entire region. That is what we say, you know, putting automation for good and then that's what we talked about at dsu, Teaming without limits. 
And that's a, so >>By taking, we wanna make sure that we're really like taking a step back, connecting all of the dots, building the one thing the right way, the first time. Exactly. And what's really integral into being able to have that transparency, that visibility is that now we're all working on the same platform. So you know, Brian spoke to you last year about our migration into automation cloud, having everything that single pipeline in the cloud. Anybody at DSU can often join the automation community and get access to automation hub, see what's out there, submit their own ideas, use the launchpad to go and take training. Yeah. And get started on their own automation journey as a citizen developer and you know, see the different paths that are available to them from that one central space. >>So by taking us a breath, stepping back, pausing just a bit, the business impact at the tail end is much, much higher. Now you start in 2017 really before you UI path made it's big enterprise play, it acquired process gold, you know, cloud elements now most recently referenced some others. How much of what you guys are, are, are doing is platform versus kind of the initial sort of robot installation? Yeah, >>I mean platforms power people and that's what we're here to do as the global automation team. Whether it's powering the citizen developers, the professional developers, anybody who's interacting with our automations at dsu, we wanna make sure that we're connecting the docs for them on a platform basis so that developers can develop and they don't need to develop those simple use cases that could be done by a citizen developer. You know, they're super smart technical people, they wanna do the cool shit with the new stuff. They wanna branch into, you know, using AI center and doing document understanding. That's, you know, the nature of human curiosity. 
Citizen developers, they're thrilled that we're making an investment to upscale them, to give them a new capability so that they can automate their own work. And they don't, they, they're the process experts. They don't need to spend a month talking to us when they could spend that time taking the training, learning how to create something themselves. >>How, how much sort of use case runway when you guys step back and look at your business, do you see a limit to the use cases? I mean where are you, if you had on a spectrum of, you know, maturity, how much more opportunity is there for DSU to automate? >>There's so much I think the, you feel >>Like it's limitless? >>No, I absolutely feel like it's limitless because there one thing, it's, there's the use cases and I think it's all about connecting the talent and making sure that something we do really, you know, making sure that we deliver these use cases, invest the time in our people so we make sure our professional developers part of our team spending 10 to 20% of the time to do learning and development because only limitless if our people are getting the latest and the greatest technology and we want to invest the time and we see this as an investment in the people making sure that we deliver the promise of putting people first. And the second thing, it's also investment in our company's growth. And that's a long term goal. And overcoming just focusing on things our short term. So that is something we really focus to do. And not only the use cases we are doing what we are doing as an operating model for automation. That is also something that we really value because then this is a kind of a playbook and a success model for many companies out there to grow their automation practice. So that's another angle that we are also focusing >>On. 
Well that, that's a relief because you guys are both seem really cool and, and I'm sitting here thinking they don't realize they're working themselves out of a job once they get everything automated, what are they gonna do? Right? But, but so, so it sounds like it's a never ending process, but because you guys are, are such a large global organization, it seems like you might have a luxury of being able to benchmark automations from one region and then benchmark them against other regions that aren't using that automation to be able to see very, very quickly not only realize ROI really quickly from the region where it's been implemented, but to be able to compare it to almost a control. Is that, is that part of your process? Yeah, >>Absolutely. Because we are such a global brand and with the automation, automation operating model, what we are able to do, not only focusing on the talent and the people, but also focusing on the infrastructure. So for example, right, maybe there's a first use case developing in Argentina and they have never done these automation before. And when they go to their security team and asking for an Okta bypass service account and the security team Argentina, like we never heard of automation, we don't know what UiPath is, why would I give you a service account for good reason, right? They're doing their job right. But what we able to do with automation model, it's to establish trust between the developers and the security team. So now we have a set up standing infrastructure that we are ready to go whenever an automation's ready to deploy and we're able to get the set up standing infrastructure because we have the governance to make sure the quality would delivered and making sure anything that we deployed, automation that we deploy are developed and governed by the best practice. 
So that's how we able to kind of get this automation expand globally in a very control and scalable manner because the people that we have build a relationship with. What are >>The governors to how fast you can adopt? Is it just expertise or bandwidth of that expertise or what's the bottleneck? >>Yeah, >>If >>You wanna talk more about, >>So in terms of the pipeline, we really wanna make sure that we are taking that step back and instead of just going, let's develop, develop, develop, here are the requirements like get started and go, we've prove the value of automation at Densu. We wanna make sure we are taking that step back and observing the pipeline. And it's, it's up to us to work with the business to really establish their priorities and the priorities. It's a, it's a big global organization. There might be different priorities in APAC than there are in EM for a good reason. APAC may not be adopted on the same, you know, e r P system for example. So they might have those smaller scale ROI use cases, but that's where we wanna work with them to identify, you know, maybe this is a legitimate need, the ROI is not there, let's upscale some citizen developers so that they can start, you know, working for themselves and get those results faster for those simpler use cases. >>Does, does the funding come from the line of business or IT or a combination? I mean there are obviously budget constraints are very concerned about the macro and the recession. You guys have some global brands, you know, as, as things ebb and flow in the economy, you're competing with other budgets. But where are the budgets coming from inside of dsu? Is it the business, is it the tech >>Group? Yeah, we really consider our automation group is the cause of doing business because we are here connecting people with bridging people together and really elevating. 
And the reason why we structure it that way, it's people, we do automation at dsu not to reduce head count, not to, you know, not, not just those matrix number that we measure, but really it's to giving time back to the people, giving time back to our business. So then that way they can focus on their wellbeing and that way they can focus on the work-life balance, right? So that's what we say. We are forced for good and by using automation for good as one really great example. So I think because of this agenda and because DSU do prioritize people, you know, so that's why we're getting the funding, we're getting the budget and we are seeing as a cause of doing business. So then we can get these time back using innovation to make people more fulfilling and applying automation in meaningful ways. >>Kate and Flo, congratulations. Your energy is palpable and really great success, wonderful story. Really appreciate you sharing. Thank you so >>Much for having us today. >>You're very welcome. All keep it right there. Dave Nicholson and Dave Ante. We're live from UI path forward at five from Las Vegas. We're in the Venetian Consent Convention Center. Will be right back, right for the short break.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brian | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Kate | PERSON | 0.99+ |
Miami | LOCATION | 0.99+ |
2017 | DATE | 0.99+ |
Larry | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Argentina | LOCATION | 0.99+ |
95% | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Flo | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
Kate Hall | PERSON | 0.99+ |
Excel | TITLE | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Flo Ye | PERSON | 0.99+ |
last year | DATE | 0.99+ |
10 | QUANTITY | 0.99+ |
Larry Davids | PERSON | 0.99+ |
DSU | ORGANIZATION | 0.99+ |
Kate Hall Slade | PERSON | 0.99+ |
18 plus markets | QUANTITY | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
UiPath | ORGANIZATION | 0.99+ |
Super Bowl | EVENT | 0.99+ |
Thailand | LOCATION | 0.99+ |
10 hours a week | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
APAC | ORGANIZATION | 0.99+ |
two approaches | QUANTITY | 0.99+ |
Venetian Convention Center | LOCATION | 0.99+ |
dentsu | PERSON | 0.98+ |
over 66,000 employees | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
dsu | ORGANIZATION | 0.98+ |
Densu | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
China | LOCATION | 0.98+ |
Super Bowls | EVENT | 0.98+ |
second thing | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
Cubes | ORGANIZATION | 0.98+ |
one market | QUANTITY | 0.98+ |
MITs | ORGANIZATION | 0.97+ |
20% | QUANTITY | 0.97+ |
five years | QUANTITY | 0.96+ |
five continents | QUANTITY | 0.96+ |
one region | QUANTITY | 0.96+ |
first use case | QUANTITY | 0.95+ |
Okta | ORGANIZATION | 0.95+ |
five | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.94+ |
Microsoft Factor | ORGANIZATION | 0.94+ |
a hundred hours a month | QUANTITY | 0.94+ |
single pipeline | QUANTITY | 0.93+ |
Philippine | LOCATION | 0.92+ |
each individual | QUANTITY | 0.91+ |
Cube | ORGANIZATION | 0.91+ |
One thing | QUANTITY | 0.9+ |
Dentsu | ORGANIZATION | 0.89+ |
hundred plus countries | QUANTITY | 0.88+ |
hundreds of thousands of hours | QUANTITY | 0.86+ |
first | QUANTITY | 0.83+ |
fourth forward | QUANTITY | 0.78+ |
one central | QUANTITY | 0.75+ |
UI Path | ORGANIZATION | 0.73+ |
example | QUANTITY | 0.7+ |
Gamble | ORGANIZATION | 0.69+ |
Fortune 100 companies | QUANTITY | 0.67+ |
Uri May, Hunters | CUBE Conversation, August 2022
(upbeat music) >> Hey everyone, and welcome to this CUBE Conversation, which is part of the AWS Startup Showcase, season two, episode four of our ongoing series. The theme of this episode is cybersecurity: detect and protect against threats. I'm your host, Lisa Martin, and I'm pleased to be joined by the founder and CEO of Hunters.AI, Uri May. Uri, welcome to theCUBE. It's great to have you here. >> Thank you, Lisa. It's great to be here. >> Tell me a little bit about your background and the founder's story. This company was only founded in 2018, so you're quite young. But gimme that backstory about what you saw in the market that really determined, this is needed. >> Yeah, absolutely. So, I mean, I think the biggest thing for us was the understanding that significant things had happened in the cybersecurity landscape for customers, and technology stayed the same. I mean, we kept trying to solve a big problem with the same old tools when we actually noticed that the problem had changed significantly. And we saw that change happening in two different dimensions. The first is the types of attacks that we're defending against. A decade ago, we were mostly focused on these highly sophisticated nation state efforts that included unknown techniques and tactics and highly sophisticated kinds of methods. Nowadays, we're talking a lot about cyber crime gangs, groups of people that are financially motivated, that are using off-the-shelf tools, off-the-shelf malware, coordinating on the dark web, attacking for money and ransom, basically, versus sophisticated intelligence kinds of objectives. And at the same time that that was happening, we also saw what we like to refer to as the explosion of the security stack. So some of our customers are using more than 60 or 70 different security tools that are generating sometimes tens of terabytes a day of flows.
That explosion of data, together with a very persistent and consistent threat that is continuously affecting customers, creates a very different environment, where you need to analyze a big variety of data and you need to constantly defend yourself against stuff that is happening all the time. And that was kind of our wake-up moment, when we understood that the tools that are out there now might have been the right tools a decade ago, but they are probably not the right tools to solve the problem now. So yeah, I think that was kind of what led us to Hunters. And at the same time, and I think that that's my personal kind of story behind it, we used to talk a lot about the fact that we wanted to solve a fundamental problem. And as part of the ideation around Hunters, as we zoomed in on exactly the areas that we wanted to focus on in security, we talked with a lot of CISOs, we talked with a lot of industry experts, and everyone directed us to the security operations center. I mean, the notion is that there are a lot of tools, and there are always going to be a lot of tools, but eventually decisions are being made by the people running the security operations center, who are actually acting as the first line of defense. And that's where you feel that the processes are broken. That's where you feel that technology doesn't really meet the rubber, and the rubber doesn't really meet the road. And for us, it was a very clear sign that this is where we needed to focus. And that set us on a journey to explore threat hunting, and then understand that we could solve something bigger than that, and then eventually get to where we are today, which is going to market around a holistic platform that can help SOC analysts do the day-to-day job of defending their organizations. >> So you saw back in 2018, probably even before that, that the SIEM market was primed and ripe for disruption.
And in only a four year time period, there have been some pretty significant milestones and accomplishments that the team at Hunters has made in that short timeframe. Talk to me about some of those big milestones that the company has reached in just four years. >> Yeah, I think that the biggest thing, and I know that it's going to sound like a cliche, but we actually believe that, I think it's the team. I mean, we were able to grow to an organization of around 150 employees, all over the world, across, last time I checked, like 15 countries. That's the most amazing feeling that you can have, that ability to attract people to a single mission from all over the world and to get them to collaborate and do amazing things and achieve unbelievable accomplishments. I think that's the biggest thing. The other thing for us was customers. I mean, think about it: SIEM, it's such a central and critical system. So for us as a young startup from Tel Aviv to go out to enterprise America and convince the biggest enterprises around the world to rip and replace the existing solutions built by the biggest software brands out there and install Hunters instead, that's a huge leap of trust that we are very grateful for, and that we're trying to handle with a lot of care and a lot of responsibility. And obviously, I think that other than that, it's all of the investors that we were able to attract, who basically enabled all of that customer acquisition and team building and product development. And we're very fortunate to work with the biggest names out there, both from a strategic perspective and also tier one VCs, mainly from the U.S., but from all over the world, actually, that are backing us. >> Great customers, solid foundation. Hunters is built for the cloud, is powered by Snowflake, this is AWS built. Talk to me about what's in it for me from an AWS customer perspective. What's that value in it for them?
>> Yeah, so I think that the most important thing, in my opinion at least, is the security value that you're getting from it. Other than the fact that Hunters is a multi-tenant SaaS application running in AWS, it's also a system that is highly tuned and specifically built to be very effective at detecting threats inside AWS environments. So we invested a lot of time in research, in analyzing the way attackers operate inside cloud environments, specifically in AWS. And then we modeled those techniques and tactics and procedures into the system. We're leveraging data sets like AWS CloudTrail and CloudWatch and VPC Flow Logs, and obviously AWS GuardDuty, which is an amazing detection system that AWS offers to its customers, and we're able to leverage it and correlate it with other signals. And at the same time, there's also the commercial aspect and the business aspect. I mean, we're allowing AWS customers to leverage their AWS credits through the marketplace to fund SIEM projects like Hunters, which comes with a lot of efficiencies also, and with a lot of additional capabilities, like I mentioned earlier. >> So let's crack open Hunters.AI. What makes this approach different? You talked about the challenges that you guys saw in the market, the gaps that were there, and why technology needed to come in from a disruption standpoint. But describe the differentiators. When you're talking to prospective customers, what are those key differentiators that Hunters brings to the table?
So you as a customer can opt in into using Hunters on top of your Snowflake. It's not the only way. You can also get Snowflake bundled as part of that, your Hunter subscription, but for some customers that ability to reduce vendor lock risk on data on your own and also level security data for other kind of workflows is something that is really huge. So that's the first thing that is very different. The second thing is what we like to call security engineering as a service. So when you buy Hunters, you don't just buy a data platform. You actually buy a system, a SOC platform that is already populated with use cases. So what we are saying is that in today's world the threats that we're handling as a SOC, as security operations center professionals are actually shared by 80% of the customers out there. So 80% of the customers share around 80% of the threat. And what we're basically saying is let us as a vendor, solve the detection response around that 80%. So you as a customer could focus on the 20% that is unique to your environment. Then in a lot of cases generate 80% of the impact. So that means that you are getting a lot of rebuilt tools and detections, data modeling to your integrations, automatic investigations, scoring correlations. All of these things are being continuously deployed and delivered by us because we're multi tenant SaaS. And also allowing you again to get this effortless tail key kind of solution that is very different from your experience with your current SIEM tools that usually involves a lot of tuning, professional services, configuration, et cetera. And the last aspect of it, is everything that we're doing around automation. 
We're leveraging very unique graph technology, and what we call automatic investigation enrichments, that allow us to take all of these signals that we're extracting from all over the attack surface, AWS included, but also the endpoint and the email and the network and IoT environments and whatnot, automatically investigate them, load them into a graph, and then automatically correlate them into what we call stories, which are basically representations of incidents that are happening across your attack surface. And that's a very unique capability that we bring to the table that demonstrates our focus on the analytical layer. So it's not just a log aggregation and querying and dashboarding kind of system. It's actually a security analytics system that is able to drive real insights on top of the data that you're plugging into it. >> So talk to me, Uri. When you're in customer conversations these days, the market has so many dynamics and so much flux that customers are dealing with. Obviously, the threat landscape continues to expand and really becomes quite amorphous as that perimeter blends. What are some of the specific challenges that security operations center, or SOC, teams come to you with, saying, help us eliminate this? We have so many tools, we've probably got limited resources. What are those challenges, and how does Hunters really wipe those off the plate? >> Yeah, so I think the first and foremost has to do with the second pillar that I mentioned earlier, and that's security engineering. So for most security operations centers and most organizations around the world, the feeling is that they're kind of stuck on this treadmill. They keep on buying tools, and then implementing these tools, and then writing rules, and then generating noise, and then fine-tuning the rules, and then testing the rules, and understanding that the fine-tuning actually generated missed detections. And they're kind of stuck in this vicious cycle.
And no one can really help, because a lot of the stuff that they're building, they're building it in their own environment. And what we're saying is, let us do it for you, with that 80% that we mentioned earlier, and that allows you to really focus on the stuff that you're doing, and even offset your talent. So we're not talking about a talent reduction, because everyone needs more talent in cybersecurity nowadays, but we're talking a lot about offset. I mean, if we had a team of five people investing efforts in building rules, building automation, and now three or four of these people can go and do advanced investigations, incident response, threat hunting in general, that's meaningful. For a lot of SOCs, in a lot of cases, that means either identifying and analyzing a threat in time, or missing it. So, I mean, I think that that's the biggest thing. And the other thing has to do with the first thing that I mentioned earlier, and these are the data challenges. Data challenges in terms of cost, performance, the ability to absorb data sets that today's tools can't really support. For example, one of the biggest data sets that we're loading, and that is tremendously helpful, is raw data from EDR products. Raw data from EDR products in large enterprises can get to 10, 15, 20 terabytes a day. In today's SIEMs and SOC platforms that customers are using, this thing is just cost-prohibitive for the SOC. They can't really analyze it because it's so costly. So what we're seeing is a lot of customers either not analyzing it at all, or retaining it for a very short amount of time, a couple of days, because they can't support the retention around it. So the ability to store huge data sets for longer periods of time is something that a lot of big enterprises need. And to be honest, I think that in the next couple of years they will also be forced to have these kinds of capabilities, even from a compliance perspective.
>> So in terms of outcomes, I'm hearing reduction in costs, really helping security teams utilize their resources, and the ability to analyze growing volumes of data, which is only going to continue to increase, as we know. Is there a customer story, Uri, that you have where the value proposition of Hunters really shines through? >> Yeah, I think one that comes to mind is from the hospitality vertical, and actually it's a reference customer, so we can share the name: it's booking.com. It's also publicly shown on our website. And I think the coolest thing that we were able to do with Booking is give them that capability to stay up to date with the threats that they're facing. So it's not just that we saved a lot of effort for them because we came with a lot of out-of-the-box capabilities that they can use; we also kept them up to date with everything that they were facing. And there were a couple of cases where we were able to detect threats that were very recent from a threat perspective, based on our ability to invest research time and effort in everything that is going on in the ecosystem. And the feedback that we got from the customer, and it's not a single piece of feedback, we're getting it a lot, is that, without you guys we wouldn't be able to do the effective research, and then the threat modeling and the implementation of these things in time. And working with you kind of made the difference between analyzing it and reacting in time, and potentially blocking a very serious breach, versus maybe finding out when it's too late. >> Huge impact there. And I'm kind of thinking Hunters.AI might be one of the reasons that booking.com's tagline is booking.com, booking.yeah. Yeah, we're secure, and we can demonstrate that to everyone that uses our service. Kind of wrapping things up here, Uri.
I noticed that back in, I think it was January of 2022, Hunters raised about 60 million in Series C. You talked about kind of being in the GTM phase. Where are some of those strategic investments? What have you been doing and focusing on this year, and what's to come as we round out '22? >> Yeah, absolutely. So, I mean, there's a lot of building going on, yeah, still, right? I mean, we're getting into that scale mode and scale phase, but we're very much also building our capabilities, building our infrastructure, building our teams, building our business processes. So there's a lot of effort going into that. But at the same time, I mean, we've been able to deepen our relationship with DataBlitz, which is a very important partner of ours, and we've got some big news coming up on that. They were a strategic investor that participated in our Series C. At the same time, we're working in the EMEA market, which is a very interesting market for us, and we get a lot of support from another strategic investor that joined the Series C, Deutsche Telekom. They are a huge provider in IT and security in EMEA, among a lot of other things, including T-Systems and T-Mobile and everything that has to do with that. So we're getting a lot of support from them. And regardless, I think, and this ties back to what we mentioned earlier, the ability for us to come to really big customers with the quality of investors that we have is a very important external validation. It's basically saying, this company is here to stay. We're aiming at disrupting the market. We're building something big. You can count on us when replacing this critical system that we're talking about. And sometimes it makes a difference. For some of the customers, it means that this is something that I can rely on. It's not a startup that is going to be sold two months after I deploy it, and it's not a founder that is going to disappear on me.
And for a lot of customers, these things happen, especially in an ecosystem like cybersecurity that is so big, with such a huge variety of different systems. So, yeah, I think that we're getting ready for that scale mode, and hopefully it'll happen sooner than we think. >> A lot of growth already, as we mentioned in the beginning of the program. Just since 2018, from a founding perspective, you guys are strong, you're rocking away and ready to really take things into 2023 with such force. Uri, thank you so much for joining me on the program, talking about what Hunters.AI is up to, how you're different, and why you're disrupting the SIEM market. We appreciate your insights and your time. >> Absolutely. Lisa, the pleasure was all mine. Thank you for having me. >> Likewise. For Uri May, I'm Lisa Martin. Thank you for watching our CUBE Conversation as part of the AWS Startup Showcase. Keep it right here for more action on theCUBE, your leader in tech coverage. (upbeat music)
Predictions 2022: Top Analysts See the Future of Data
(bright music) >> In the 2010s, organizations became keenly aware that data would become the key ingredient to driving competitive advantage, differentiation, and growth. But to this day, putting data to work remains a difficult challenge for many, if not most, organizations. Now, as the cloud matures, it has become a game changer for data practitioners by making cheap storage and massive processing power readily accessible. We've also seen better tooling in the form of data workflows, streaming, machine intelligence, AI, developer tools, security, observability, automation, new databases, and the like. These innovations accelerate data proficiency, but at the same time they add complexity for practitioners. Data lakes, data hubs, data warehouses, data marts, data fabrics, data meshes, data catalogs, data oceans are forming, evolving, and exploding onto the scene. So in an effort to bring perspective to the sea of optionality, we've brought together the brightest minds in the data analyst community to discuss how data management is morphing and what practitioners should expect in 2022 and beyond. Hello everyone, my name is Dave Vellante with theCUBE, and I'd like to welcome you to a special Cube presentation, Analyst Predictions 2022: The Future of Data Management. We've gathered six of the best analysts in data and data management, who are going to present and discuss their top predictions and trends for 2022 and the first half of this decade. Let me introduce our six power panelists. Sanjeev Mohan is a former Gartner analyst and Principal at SanjMo. Tony Baer is Principal at dbInsight. Carl Olofson is a well-known Research Vice President with IDC. Dave Menninger is Senior Vice President and Research Director at Ventana Research. Brad Shimmin is Chief Analyst, AI Platforms, Analytics and Data Management at Omdia. And Doug Henschen is Vice President and Principal Analyst at Constellation Research.
Gentlemen, welcome to the program, and thanks for coming on theCUBE today. >> Great to be here. >> Thank you. >> All right, here's the format we're going to use. I, as moderator, am going to call on each analyst separately, who then will deliver their prediction or megatrend, and then, in the interest of time management and pace, two analysts will have the opportunity to comment. If we have more time, we'll elongate it, but let's get started right away. Sanjeev Mohan, please kick it off. You want to talk about governance, go ahead, sir. >> Thank you, Dave. I believe that data governance, which we've been talking about for many years, is now not only going to be mainstream, it's going to be table stakes. And with all the things that you mentioned, you know, the data oceans, data lakes, lakehouses, data fabrics, meshes, the common glue is metadata. If we don't understand what data we have and we are not governing it, there is no way we can manage it. So we saw Informatica go public last year after a hiatus of six years. I'm predicting that this year we see some more companies go public. My bet is on Collibra, most likely, and maybe Alation, we'll see, go public this year. I'm also predicting that the scope of data governance is going to expand beyond just data. It's not just data and reports. We are going to see it cover more transformations, like Spark jobs, Python, even Airflow. We're going to see more streaming data, so Kafka Schema Registry, for example. We will see AI models become part of this whole governance suite. So the governance suite is going to be very comprehensive: very detailed lineage, impact analysis, and then even expanding into data quality. We've already seen that happen with some of the tools, where they are buying these smaller companies and bringing in data quality monitoring and integrating it with metadata management, data catalogs, also data access governance.
So what we are going to see is that once the data governance platforms become the key entry point into these modern architectures, I'm predicting that the usage, the number of users, of a data catalog is going to exceed that of a BI tool. That will take time, but we've already seen that trajectory. Right now, if you look at BI tools, I would say there are a hundred users of a BI tool to one of a data catalog. And I see that evening out over a period of time, and at some point data catalogs will really become the main way for us to access data. The data catalog will help us visualize data, but if we want to do more in-depth analysis, it'll be the jumping-off point into the BI tool, the data science tool, and that is the journey I see for the data governance products. >> Excellent, thank you. Some comments? Maybe Doug, a lot of things to weigh in on there, maybe you can comment. >> Yeah, Sanjeev, I think you're spot on with a lot of the trends. The one disagreement: I think it's really still far from mainstream. As you say, we've been talking about this for years; it's like God, motherhood, apple pie, everyone agrees it's important, but too few organizations are really practicing good governance, because it's hard and because the incentives have been lacking. I think one thing that deserves mention in this context is ESG mandates and guidelines; these are environmental, social, and governance regs and guidelines. We've seen the environmental regs and guidelines imposed in industries, particularly the carbon-intensive industries. We've seen the social mandates, particularly diversity, imposed on suppliers by companies that are leading on this topic. We've seen governance guidelines now being imposed by banks on investors. So these ESGs are presenting new carrots and sticks, and it's going to demand more solid data. It's going to demand more detailed reporting and solid reporting, tighter governance. But we're still far from mainstream adoption.
We have a lot of, you know, best-of-breed niche players in the space. I think the signs that it's going to be more mainstream are starting with things like Azure Purview and Google Dataplex; the big cloud platform players seem to be upping the ante and starting to address governance. >> Excellent, thank you, Doug. Brad, I wonder if you could chime in as well. >> Yeah, I would love to be a believer in data catalogs. But to Doug's point, I think that it's going to take some more pressure for that to happen. I recall metadata being something every enterprise thought they were going to get under control when we were working on service-oriented architecture back in the nineties, and that didn't happen quite the way we anticipated. And to Sanjeev's point, it's because it is really complex and really difficult to do. My hope is that, you know, how do I put this? That we won't sort of fade out into this nebula of domain catalogs that are specific to individual use cases, like Purview for getting data quality right, or for data governance and cybersecurity. And that instead we'll have some tooling that can actually be adaptive, to gather metadata, to create something. And I know it's important to you, Sanjeev, and that is this idea of observability. If you can get enough metadata, without moving your data around, but understanding the entirety of a system that's running on this data, you can do a lot to help with the governance that Doug is talking about. >> So I just want to add that data governance, like many other initiatives, did not succeed; even AI went into an AI winter, but that's a different topic. A lot of these things did not succeed because, to your point, the incentives were not there. I remember when Sarbanes-Oxley had come onto the scene: if a bank did not do Sarbanes-Oxley, they were very happy to pay a million-dollar fine. That was, you know, pocket change for them, instead of doing the right thing. But I think the stakes are much higher now.
With GDPR, the floodgates opened. Now, you know, California has CCPA, but even CCPA is being outdated by CPRA, which is much more GDPR-like. So we are very rapidly entering a space where pretty much every major country in the world is coming up with its own compliance and regulatory requirements; data residency is becoming really important. And I think we are going to reach a stage where it won't be optional anymore, whether we like it or not. And I think the reason data catalogs were not successful in the past is because we did not have the right focus on adoption. We were focused on features, and these features were disconnected, very hard for business to adopt. These were built by IT people for IT departments to take a look at technical metadata, not business metadata. Today the tables have turned. CDOs are driving this initiative, regulatory compliance requirements are bearing down hard, so I think the time might be right. >> Yeah, so guys, we have to move on here. But there's some real meat on the bone here, Sanjeev. I like the fact that you called out Collibra and Alation, so we can look back a year from now and say, okay, he made the call, he stuck with it. And then the ratio of BI tools to data catalogs, that's another sort of measurement that we can take, even with some skepticism there; that's something that we can watch. And I wonder if someday we'll have more metadata than data. But I want to move to Tony Baer. You want to talk about data mesh, and speaking, you know, coming off of governance, I mean, wow, the whole concept of data mesh is decentralized data, and then governance becomes, you know, a nightmare there. But take it away, Tony. >> We'll put it this way: data mesh, you know, the idea, at least as proposed by ThoughtWorks, basically came out at least a couple of years ago, and the press has been almost uniformly uncritical.
A good reason for that is all the problems that Sanjeev and Doug and Brad were just speaking about, which is that we have all this data out there and we don't know what to do about it. Now, that's not a new problem. It was a problem we had with enterprise data warehouses, it was a problem when we had Hadoop data clusters, and it's even more of a problem now that data is out in the cloud, where the data is not only in your data lake, not only in S3, it's all over the place. And it also includes streaming, which I know we'll be talking about later. So the data mesh was a response to that, the idea that, you know, who are the folks that really know best about governance? It's the domain experts. So data mesh was basically an architectural pattern and a process. My prediction for this year is that data mesh is going to hit cold hard reality. Because if you do a Google search, basically the published work, the articles on data mesh, have been largely, you know, pretty uncritical so far, basically lauding it as being a very revolutionary new idea. I don't think it's that revolutionary, because we've talked about ideas like this before. Brad, you and I met years ago when we were talking about SOA and decentralization, but that was at the application level. Now we're talking about it at the data level. And now we have microservices. So there's this thought: if we're deconstructing apps in cloud native into microservices, why don't we think of data in the same way? My sense this year is that, you know, this has been a very active search term, if you look at Google search trends, and now companies, enterprises, are going to look at this seriously. And as they look at it seriously, it's going to attract its first real hard scrutiny, it's going to attract its first backlash. That's not necessarily a bad thing. It means that it's being taken seriously.
The reason why I think you'll start to see the cold hard light of day shine on data mesh is that it's still a work in progress. You know, this idea is basically a couple of years old, and there are still some pretty major gaps. The biggest gap is in the area of federated governance. Now, federated governance itself is not a new issue. With federated governance, the decision is, how can we basically strike the balance between, let's say, consistent enterprise policy and consistent enterprise governance on the one hand, and, on the other, the groups that understand the data and know how to work with it? How do we basically balance the two? There's a huge gap there in practice and knowledge. Also, to a lesser extent, there's a technology gap, which is basically in the self-service technologies that will help teams essentially govern data, you know, through the full life cycle: from selecting the data, to building the pipelines, to determining your access control, looking at quality, looking at whether the data is fresh or whether it's trending off course. So my prediction is that it will receive its first harsh scrutiny this year. You are going to see some organizations and enterprises declare premature victory when they build some federated query implementations. You're going to see vendors start to data-mesh-wash their products: anybody in the data management space is going to say, whether it's basically a pipelining tool, whether it's ELT, whether it's a catalog or a federated query tool, they are all going to promote the fact of how they support this. Hopefully nobody's going to call themselves a data mesh tool, because data mesh is not a technology. We're going to see one other thing come out of this.
And this harks back to the metadata that Sanjeev was talking about, and to the catalogs he was talking about, which is that there's going to be a renewed focus on metadata. And I think that's going to spur interest in data fabrics. Now, data fabrics are pretty vaguely defined, but if we just take the most elemental definition, which is a common metadata backplane, I think that if anybody is going to get serious about data mesh, they need to look at the data fabric, because at the end of the day, we all need to read from the same sheet of music. >> So thank you, Tony. Dave Menninger, I mean, one of the things that people like about data mesh is it pretty crisply articulates some of the flaws in today's organizational approaches to data. What are your thoughts on this? >> Well, I think we have to start by defining data mesh, right? The term is already getting corrupted, right? Tony said it's going to see the cold hard light of day. And there's a problem right now, that there are a number of overlapping terms that are similar but not identical. So we've got data virtualization, data fabric, excuse me for a second. (clears throat) Sorry about that. Data virtualization, data fabric, data federation, right? So I think that it's not really clear what each vendor means by these terms. I see data mesh and data fabric becoming quite popular. I've interpreted data mesh as referring primarily to the governance aspects, as originally intended and specified. But that's not the way I see vendors using it. I see vendors using it much more to mean data fabric and data virtualization. So I'm going to comment on the group of those things. I think the group of those things is going to happen. They're going to happen, they're going to become more robust.
Our research suggests that a quarter of organizations are already using virtualized access to their data lakes, and another half, so a total of three quarters, will eventually be accessing their data lakes using some sort of virtualized access. Again, whether you define it as mesh or fabric or virtualization isn't really the point here. But there's this notion that there are different elements of data, metadata, and governance within an organization that all need to be managed collectively. The interesting thing is when you look at the satisfaction rates of those organizations using virtualization versus those that are not: it's almost double. 68% of organizations, I'm sorry, 79% of organizations that were using virtualized access expressed satisfaction with their access to the data lake. Only 39% expressed satisfaction if they weren't using virtualized access. >> Oh, thank you, Dave. Sanjeev, we've just got about a couple of minutes on this topic, but I know you're speaking, or maybe you've already spoken, on a panel with (indistinct), who sort of invented the concept. Governance obviously is a big sticking point, but what are your thoughts on this? You're on mute. (panelists chuckling) >> So my message to (indistinct) and to the community is, as opposed to what they said, let's not keep defining it. We spent a whole year defining it; there are four principles: domain, product, data infrastructure, and governance. Let's take it to the next level. I get a lot of questions on what is the difference between data fabric and data mesh, and I'm like, I can't compare the two, because data mesh is a business concept and data fabric is a data integration pattern. How do you compare the two? You have to bring data mesh a level down. So, to Tony's point, I'm on a warpath in 2022 to take it down to: what does a data product look like? How do we handle shared data across domains, and governance? And I think we are going to see more of that in 2022, which is the "operationalization" of data mesh.
>> I think we could have a whole hour on this topic, couldn't we? Maybe we should do that. But let's move on. Let's go to Carl. So Carl, you're a database guy, you've been around that block for a while now, and you want to talk about graph databases. Bring it on. >> Oh yeah. Okay, thanks. So I regard graph databases as basically the next truly revolutionary database management technology. I'm looking forward for the graph database market, which of course we haven't defined yet, so obviously I have a little wiggle room in what I'm about to say, but this market will grow by about 600% over the next 10 years. Now, 10 years is a long time. But over the next five years, we expect to see gradual growth as people start to learn how to use it. The problem is not that it's not useful, it's that people don't know how to use it. So let me explain, before I go any further, what a graph database is, because some of the folks on the call may not know what it is. A graph database organizes data according to a mathematical structure called a graph. The graph has elements called nodes and edges. So a data element drops into a node, the nodes are connected by edges, and the edges connect one node to another node. Combinations of edges create structures that you can analyze to determine how things are related. In some cases, the nodes and edges can have properties attached to them, which add additional informative material that makes it richer; that's called a property graph. There are two principal use cases for graph databases. There are semantic graphs, which are used to break down human language texts into semantic structures. Then you can search it, organize it, and answer complicated questions. A lot of AI is aimed at semantic graphs. Another kind is the property graph that I just mentioned, which has a dazzling number of use cases. I want to just point out, as I talk about this, people are probably wondering, well, we have relational databases, isn't that good enough?
So a relational database supports what I call definitional relationships. That means you define the relationships in a fixed structure. The data drops into that structure; there's a value, a foreign key value, that relates one table to another, and that value is fixed. You don't change it. If you change it, the database becomes unstable; it's not clear what you're looking at. In a graph database, the system is designed to handle change, so that it can reflect the true state of the things that it's being used to track. So let me just give you some examples of use cases for this. They include entity resolution, data lineage, social media analysis, Customer 360, fraud prevention. There's cybersecurity, there's supply chain, that's a big one actually. There's explainable AI, and this is going to become important too, because a lot of people are adopting AI, but they want a system, after the fact, to say, how did the AI system come to that conclusion? How did it make that recommendation? Right now we don't have really good ways of tracking that. Machine learning in general, social networks, I already mentioned that. And then we've got, oh gosh, we've got data governance, data compliance, risk management. We've got recommendation, we've got personalization, anti-money laundering, that's another big one, identity and access management, and network and IT operations, which is already becoming a key one, where you actually have mapped out your operation, you know, whatever it is, your data center, and you can track what's going on as things happen there, root cause analysis. Fraud detection is a huge one; a number of major credit card companies use graph databases for fraud detection. Risk analysis, tracking and tracing, churn analysis, next best action, what-if analysis, impact analysis, entity resolution, and I would add one other thing, or just a few other things, to this list: metadata management. So Sanjeev, here you go, this is your engine.
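The structure Carl describes, nodes carrying properties and labeled edges between them, along with the recursive traversals he mentions (who reports to whom, parts explosions), can be sketched in a few lines. This is an illustrative toy, not the API of any actual graph database; all names and data are invented.

```python
# Toy property graph: nodes carry a properties dict, and edges are
# labeled links between node ids. Invented example, not a real product.
from collections import defaultdict

class PropertyGraph:
    def __init__(self):
        self.nodes = {}                 # node_id -> {property: value}
        self.edges = defaultdict(list)  # node_id -> [(label, target_id)]

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, source, label, target):
        self.edges[source].append((label, target))

    def neighbors(self, node_id, label):
        """Nodes reachable by following one edge with the given label."""
        return [t for (l, t) in self.edges[node_id] if l == label]

    def chain(self, node_id, label):
        """Recursively follow a relationship, e.g. an HR reporting chain
        or a bill-of-materials explosion -- the kind of traversal that
        needs explicit programming in a relational database."""
        result = []
        for nxt in self.neighbors(node_id, label):
            result.append(nxt)
            result.extend(self.chain(nxt, label))
        return result

g = PropertyGraph()
g.add_node("alice", role="engineer")
g.add_node("bob", role="manager")
g.add_node("carol", role="vp")
g.add_edge("alice", "REPORTS_TO", "bob")
g.add_edge("bob", "REPORTS_TO", "carol")

print(g.chain("alice", "REPORTS_TO"))  # ['bob', 'carol']
```

In a real graph database the traversal is expressed in a query language rather than programmed by hand; in SQL the same question typically needs a recursive common table expression.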
Because I was in metadata management for quite a while in my past life, and one of the things I found was that none of the data management technologies that were available to us could efficiently handle metadata, because of the kinds of structures that result from it, but graphs can, okay? Graphs can do things like say, this term in this context means this, but in that context, it means that, okay? Things like that. And in fact, logistics management, supply chain. And also, because it handles recursive relationships, and by recursive relationships I mean objects that own other objects that are of the same type, you can do things like bills of materials, you know, like a parts explosion. Or you can do an HR analysis: who reports to whom, how many levels up the chain, that kind of thing. You can do that with relational databases, but it takes a lot of programming. In fact, you can do almost any of these things with relational databases, but the problem is, you have to program it. It's not supported in the database. And whenever you have to program something, that means you can't trace it, you can't define it, you can't publish it in terms of its functionality, and it's really, really hard to maintain over time. >> Carl, thank you. I wonder if we could bring Brad in. Brad, I'm sitting here wondering, okay, is this incremental to the market? Is it disruptive and a replacement? What are your thoughts on this space? >> It's already disrupted the market. I mean, like Carl said, go to any bank and ask them, are you using graph databases to get fraud detection under control? And they'll say, absolutely, that's the only way to solve this problem. And it is, frankly. And it's the only way to solve a lot of the problems that Carl mentioned. And that is, I think, its Achilles' heel in some ways. Because, you know, it's like finding the best way to cross the seven bridges of Königsberg.
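(The "who reports to whom, how many levels up the chain" case Carl mentioned a moment ago is the recursive relationship that, in a relational database, forces you into application code or a recursive query; in a graph it is a plain traversal of REPORTS_TO edges. A rough sketch, with a made-up org chart:)

```python
# Hypothetical org chart as a graph: employee -> manager edges.
# In a relational database this is a self-referencing foreign key,
# and walking it takes a recursive query or a loop in application code.
reports_to = {
    "dana": "carla",
    "carla": "bob",
    "bob": "alice",   # alice is at the top; no outgoing edge
}

def chain_of_command(employee):
    """Walk the REPORTS_TO edges up to the top of the chart."""
    chain = []
    while employee in reports_to:
        employee = reports_to[employee]
        chain.append(employee)
    return chain

print(chain_of_command("dana"))  # ['carla', 'bob', 'alice']
```

The traversal itself is trivial; Carl's point is that in SQL the equivalent (a recursive common table expression) is something you program around the database, whereas a graph database supports the traversal natively.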
You know, it's always going to kind of be tied to those use cases, because it's really special and it's really unique, and because it's special and it's unique, it still unfortunately kind of stands apart from the rest of the community that's building, let's say, AI outcomes, as a great example here. Graph databases and AI, as Carl mentioned, are like chocolate and peanut butter. But technologically, they don't know how to talk to one another; they're completely different. And you know, you can't just stand up SQL and query them. You've got to learn, what is it, Carl? Cypher? Yeah, thank you, to actually get to the data in there. And if you're going to scale that data, that graph database, especially a property graph, if you're going to do something really complex, like try to understand, you know, all of the metadata in your organization, you might just end up with, you know, a graph database winter, like we had the AI winter, simply because you run out of performance to make the thing happen. So, I think it's already disrupted, but we need to treat it like a first-class citizen in the data analytics and AI community. We need to bring it into the fold. We need to equip it with the tools it needs to do the magic it does, and to do it not just for specialized use cases, but for everything. 'Cause I'm with Carl. I think it's absolutely revolutionary. >> Brad identified the principal Achilles' heel of the technology, which is scaling. When these things get large and complex enough that they spill over what a single server can handle, you start to have difficulties, because the relationships span things that have to be resolved over a network, and then you get network latency and that slows the system down. So that's still a problem to be solved. >> Sanjeev, any quick thoughts on this? I mean, I think metadata on the word cloud is going to be the largest font, but what are your thoughts here?
>> I want to (indistinct) so people don't associate me with only metadata, so I want to talk about something slightly different. db-engines.com has done an amazing job. I think almost everyone knows that they chronicle all the major databases that are in use today. In January of 2022, there were 381 databases on their ranked list of databases. The largest category is RDBMS. The second largest category is actually divided into two: property graphs and RDF graphs. These two together make up the second largest number of databases. So talking about Achilles' heels, this is the problem. The problem is that there are so many graph databases to choose from. They come in different shapes and forms. To Brad's point, there are so many query languages. In RDBMS it's SQL and we know the story, but here we've got Cypher, we've got Gremlin, we've got GQL, and then there are proprietary languages. So I think there's a lot of disparity in this space. >> Well, excellent. All excellent points, Sanjeev, if I must say. And that is a problem: the languages need to be sorted and standardized. People need to have a roadmap as to what they can do with it. Because, as you say, you can do so many things, and so many of those things are unrelated, that you sort of say, well, what do we use this for? And I'm reminded of a saying I learned a bunch of years ago, where somebody said that the digital computer is the only tool man has ever devised that has no particular purpose. (panelists chuckle) >> All right guys, we've got to move on to Dave Menninger. We've heard about streaming. Your prediction is in that realm, so please take it away. >> Sure. So I like to say that historical databases are going to become a thing of the past. By that I don't mean that they're going to go away; that's not my point. I mean, we need historical databases, but streaming data is going to become the default way in which we operate with data.
So in the next, say, three to five years, I would expect that data platforms, and we're using the term data platforms to represent the evolution of databases and data lakes, that the data platforms will incorporate these streaming capabilities. We're going to process data as it streams into an organization, and then it's going to roll off into a historical database. So historical databases don't go away, but they become a thing of the past. They store the data that occurred previously. And as data is occurring, we're going to be processing it, we're going to be analyzing it, we're going to be acting on it. I mean, we only ever ended up with historical databases because we were limited by the technology that was available to us. Data doesn't occur in batches. But we processed it in batches because that was the best we could do. And it wasn't bad, and we've continued to improve, and we've improved and we've improved. But streaming data today is still the exception. It's not the rule, right? There are projects within organizations that deal with streaming data, but it's not the default way in which we deal with data yet. And so that's my prediction: this is going to change, we're going to have streaming data be the default way in which we deal with data, and however you label it and whatever you call it, you know, maybe these databases and data platforms just evolve to be able to handle it, but we're going to deal with data in a different way. And our research shows that already: about half of the participants in our analytics and data benchmark research are using streaming data, and another third are planning to use streaming technologies. So that gets us to about eight out of 10 organizations that need to use this technology. And that doesn't mean they have to use it throughout the whole organization, but it's pretty widespread in its use today, and it has continued to grow.
If you think about the consumerization of IT, we've all been conditioned to expect immediate access to information, immediate responsiveness. You know, we want to know if an item is on the shelf at our local retail store, and whether we can go in and pick it up right now. You know, that's the world we live in, and that's spilling over into the enterprise IT world. We have to provide those same types of capabilities. So that's my prediction: historical databases become a thing of the past, streaming data becomes the default way in which we operate with data. >> All right, thank you, David. Well, so what say you, Carl, the guy who has followed historical databases for a long time? >> Well, one thing actually: every database is historical, because as soon as you put data in it, it's now history. It no longer reflects the present state of things. Even if that history is only a millisecond old, it's still history. But I would say, I mean, I know you're trying to be a little bit provocative in saying this, Dave, 'cause you know as well as I do that people still need to do their taxes, they still need to do accounting, they still need to run general ledger programs and things like that, and that all involves historical data. That's not going to go away unless you want to go to jail. So you're going to have to deal with that. But as far as the leading-edge functionality, I'm totally with you on that. And I'm just kind of wondering if this requires a change in the way that we perceive applications in order to truly be manifested, a rethinking of the way applications work: saying that an application should respond instantly, as soon as the state of things changes. What do you say about that? >> I think that's true. I think we do have to think about things differently. It's not the way we designed systems in the past. We're seeing more and more systems designed that way. But again, it's not the default.
And I agree 100% with you that we do need historical databases, you know, that's clear. And even some of those historical databases will be used in conjunction with the streaming data, right? >> Absolutely. I mean, you know, let's take the data warehouse example, where you're using the data warehouse as the context and the streaming data as the present, and you're saying, here's the sequence of things that's happening right now. Have we seen that sequence before? And where? What does that pattern look like in past situations? And can we learn from that? >> So Tony Baer, I wonder if you could comment? I mean, when you think about, you know, real-time inferencing at the edge, for instance, which is something that a lot of people talk about, a lot of what we're discussing here in this segment looks like it's got great potential. What are your thoughts? >> Yeah, I mean, I think you nailed it, you hit it right on the head there. What I'm seeing, and I'm going to split this one down the middle, is that I don't see streaming becoming the default. What I see is that streaming and transaction databases and analytic data stores, you know, data warehouses, data lakes, whatever, are converging. And what allows us technically to converge is cloud-native architecture, where you can basically distribute things. So you can have a node here that's doing the real-time processing, and that's also, and this is where it leads in, maybe doing some of that real-time predictive analytics, to take a look at, well look, we're looking at this customer journey, what's happening with what the customer is doing right now, and how this is correlated with what other customers are doing. So the thing is that in the cloud, you can basically partition this, and because of the speed of the infrastructure, you can bring these together and orchestrate them in a sort of loosely coupled manner.
The other part is that the use cases are demanding it, and this goes back to what Dave is saying. You know, when you look at Customer 360, when you look at, let's say, smart utility products, when you look at any type of operational problem, it has a real-time component, it has a historical component, and it has a predictive component. So, you know, my sense here is that technically we can bring this together through the cloud. And I think the use case is that we can apply some real-time, sort of predictive analytics on these streams and feed this into the transactions, so that when we make a decision in terms of what to do as a result of a transaction, we have this real-time input. >> Sanjeev, did you have a comment? >> Yeah, I was just going to say that, to Dave's point, you know, we have to think of streaming very differently, because with historical databases, we used to bring the data in and store the data, and then we used to run rules on top, aggregations and all. But in the case of streaming, the mindset changes, because the rules, the inference, all of that is fixed, but the data is constantly changing. So it's a completely reversed way of thinking and of building applications on top of that. >> So Dave Menninger, there seems to be some disagreement about the default. What kind of timeframe are you thinking about? Is it the end of the decade when it becomes the default? Where would you pin it? >> I think around, you know, between five and 10 years, I think this becomes the reality. >> I think it's... >> It'll be more and more common between now and then, but then it becomes the default. And I also want, Sanjeev, at some point, maybe in one of our subsequent conversations, we need to talk about governing streaming data, 'cause that's a whole other set of challenges.
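(Dave's stream-first idea and Sanjeev's point about fixed rules over constantly changing data can be sketched roughly as follows: a fixed rule is evaluated on each event as it arrives, and the event then rolls off into the historical store. The event fields and the threshold are invented for illustration.)

```python
# Sketch of stream-first processing: act on each event as it arrives,
# then let it roll off into the historical store.
# The event shape and the alert threshold are hypothetical.
historical_store = []   # stand-in for the historical database

def on_event(event, threshold=100.0):
    """Fixed rule applied to constantly changing data."""
    alert = event["amount"] > threshold   # act in real time
    historical_store.append(event)        # then it becomes history
    return alert

stream = [
    {"id": 1, "amount": 25.0},
    {"id": 2, "amount": 250.0},   # should trigger the rule
    {"id": 3, "amount": 60.0},
]

alerts = [e["id"] for e in stream if on_event(e)]
print(alerts)                 # [2]
print(len(historical_store))  # 3
```

Note the inversion Sanjeev describes: in the historical model the data sits still and queries change; here the rule sits still and the data flows past it, while the historical store keeps accumulating for later context.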
>> We've also talked about it rather in two dimensions, historical and streaming, and there's lots of low-latency, micro-batch, sub-second processing that's not quite streaming, but in many cases it's fast enough, and we're seeing a lot of adoption of near real-time, not quite real-time, as good enough for many applications. (indistinct cross talk from panelists) >> Because nobody's really taking the hardware dimension (mumbles). >> That'll just happen, Carl. (panelists laughing) >> So near real-time. But maybe before you lose the customer, however we define that, right? Okay, let's move on to Brad. Brad, you want to talk about automation, AI, the pipeline. People feel like, hey, we can just automate everything. What's your prediction? >> Yeah, I'm an AI aficionado, so apologies in advance for that. But, you know, I think that we've been seeing automation play within AI for some time now. And it's helped us do a lot of things, especially for practitioners that are building AI outcomes in the enterprise. It's helped them to fill skills gaps, it's helped them to speed development, and it's helped them to actually make AI better, 'cause it, you know, in some ways provides some swim lanes, and for example, technologies like AutoML can auto-document and create that sort of transparency that we talked about a little bit earlier. But I think there's an interesting kind of convergence happening with this idea of automation. And that is that the automation that started happening for practitioners is trying to move outside of the traditional bounds of things like, I'm just trying to get my features, I'm just trying to pick the right algorithm, I'm just trying to build the right model, and it's expanding across that full life cycle of building an AI outcome, to start at the very beginning with the data and to then continue on to the end, which is this continuous delivery and continuous automation of that outcome to make sure it's right and it hasn't drifted, and stuff like that.
And because of that, because it's become kind of powerful, we're starting to actually see this weird thing happen where the practitioners are starting to converge with the users. And that is to say that, okay, if I'm in Tableau right now, I can stand up Salesforce Einstein Discovery, and it will automatically create a nice predictive algorithm for me, given the data that I pull in. But what's starting to happen, and we're seeing this from the companies that create business software, so Salesforce, Oracle, SAP, and others, is that they're starting to actually use these same ideals and a lot of deep learning (chuckles) to basically stand up these out-of-the-box, flip-a-switch, and you've got an AI outcome at the ready, offerings for business users. And I am very much, you know, I think that's the way that it's going to go, and what it means is that AI is slowly disappearing. And I don't think that's a bad thing. I think, if anything, what we're going to see in 2022 and maybe into 2023 is this sort of rush to put this idea of disappearing AI into practice and have as many of these solutions in the enterprise as possible. You can see, for example, SAP is going to roll out this quarter this thing called adaptive recommendation services, which basically is a cold-start AI outcome that can work across a whole bunch of different vertical markets and use cases. It's just a recommendation engine for whatever you need to do in the line of business. So basically, you're an SAP user, you turn on your software one day, you're a sales professional, let's say, and suddenly you have a recommendation for customer churn. Boom! It's going, that's great. Well, I don't know, I think that's terrifying.
In some ways I think it is the future, that AI is going to disappear like that, but I'm absolutely terrified of it, because I think that what it really does is call attention to a lot of the issues that we already see around AI, specific to this idea of what we like to call at Omdia responsible AI. Which is, you know, how do you build an AI outcome that is free of bias, that is inclusive, that is fair, that is safe, that is secure, that is auditable, et cetera, et cetera, et cetera, et cetera. It takes a lot of work to do. And so if you imagine a customer that's just a Salesforce customer, let's say, and they're turning on Einstein Discovery within their sales software, you need some guidance to make sure that when you flip that switch, the outcome you're going to get is correct. And that's going to take some work. And so, I think we're going to see this move to roll this out, and suddenly there's going to be a lot of problems, a lot of pushback that we're going to see. And some of that's going to come from GDPR and the other regulations that Sanjeev was mentioning earlier. A lot of it is going to come from internal CSR requirements within companies that are saying, "Hey, hey, whoa, hold up, we can't do this all at once. "Let's take the slow route, "let's make AI automated in a smart way." And that's going to take time. >> Yeah, so a couple of predictions there that I heard. AI will simply disappear, it becomes invisible, maybe if I can restate it that way. And then, if I understand it correctly, Brad, you're saying there's a backlash in the near term. People will say, oh, slow down. Let's automate what we can. Those attributes that you talked about are nontrivial to achieve. Is that why you're a bit of a skeptic? >> Yeah. I think that we don't have any sort of standards that companies can look to and understand.
And certainly, within these companies, especially those that haven't already stood up an internal data science team, they don't have the knowledge to understand, when they flip that switch for an automated AI outcome, whether it's going to do what they think it's going to do. And so we need some sort of standard methodology and practice, best practices, that every company that's going to consume this invisible AI can make use of. And one of the things, you know, that Google kicked off a few years back, that's picking up some momentum, and the companies I just mentioned are starting to use it, is this idea of model cards, where at least you have some transparency about what these things are doing. You know, so for the SAP example, we know, for instance, if it's a convolutional neural network with a long short-term memory model that it's using, and we know that it only works on Roman English, and therefore I as a consumer can say, "Oh, well I know that I need to do this internationally, "so I should not just turn this on today." >> Thank you. Carl, could you add anything, any context here? >> Yeah, we've talked about some of the things Brad mentioned here at IDC in our Future of Intelligence group, regarding in particular the moral and legal implications of having a fully automated, you know, AI-driven system. Because we already know, and we've seen, that AI systems are biased by the data that they get, right? So if they get data that pushes them in a certain direction, I think there was a story last week about an HR system that was recommending promotions for White people over Black people, because in the past, you know, White people were promoted more than Black people, but it had no context as to why, which is, you know, because Black people were being historically discriminated against, but the system doesn't know that. So, you know, you have to be aware of that.
And I think that, at the very least, there should be controls when a decision has either a moral or a legal implication. When you really need a human judgment, it could lay out the options for you, but a person actually needs to authorize that action. And I also think that we will always have to be vigilant regarding the kind of data we use to train our systems, to make sure that it doesn't introduce unintended biases. To some extent, it always will, so we'll always be chasing after them. But that's (indistinct). >> Absolutely, Carl, yeah. I think that what you have to bear in mind as a consumer of AI is that it is a reflection of us, and we are a very flawed species. And so if you look at all of the really fantastic, magical-looking super models we see, like GPT-3 and the 4 that's coming out, they're xenophobic and hateful, because the data that they're built upon and the algorithms and the people that build them are us. So AI is a reflection of us. We need to keep that in mind. >> Yeah, the AI is biased 'cause humans are biased. All right, great. All right, let's move on. Doug, you mentioned, you know, a lot of people said that data lake, that term was not going to live on, but it seems to be, we have some lakes here. You want to talk about lake house, bring it on. >> Yes, I do. My prediction is that lake house, this idea of a combined data warehouse and data lake platform, is going to emerge as the dominant data management offering. I say offering; that doesn't mean it's going to be the dominant thing that organizations have out there, but it's going to be the predominant vendor offering in 2022. Now, heading into 2021, we already had Cloudera, Databricks, Microsoft, and Snowflake as proponents; in 2021, SAP, Oracle, and several of these fabric, virtualization, and mesh vendors joined the bandwagon. The promise is that you have one platform that manages your structured, unstructured, and semi-structured information.
And it addresses both the BI analytics needs and the data science needs. The real promise there is simplicity and lower cost. But I think end users have to answer a few questions. The first is, does your organization really have a center of data gravity, or is the data highly distributed? Multiple data warehouses, multiple data lakes, on premises, cloud. If it's very distributed and you'd have difficulty consolidating, and that's not really a goal for you, then maybe that single platform is unrealistic and not likely to add value for you. You know, the fabric and virtualization vendors, the mesh idea, that's where, if you have this highly distributed situation, that might be a better path forward. The second question, if you are looking at one of these lake house offerings, and you are looking at consolidating, simplifying, bringing together to a single platform: you have to make sure that it meets both the warehouse need and the data lake need. So you have vendors like Databricks and Microsoft with Azure Synapse that are really new to the data warehouse space, and they're having to prove that the data warehouse capabilities on their platforms can meet the scaling requirements, can meet the user and query concurrency requirements, meet those tight SLAs. And then on the other hand, you have Oracle, SAP, Snowflake, the data warehouse folks, coming into the data science world, and they have to prove that they can manage the unstructured information and meet the needs of the data scientists. I'm seeing a lot of the lake house offerings from the warehouse crowd managing that unstructured information in columns and rows. And some of these vendors, Snowflake in particular, are really relying on partners for the data science needs. So you really have to look at a lake house offering and make sure that it meets both the warehouse and the data lake requirement. >> Thank you, Doug.
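(The promise Doug describes, one platform serving both the BI aggregate query and the data scientist's raw access, can be caricatured with Python's standard-library sqlite3 standing in, very loosely, for a converged engine; the table, columns, and data are invented for illustration.)

```python
import sqlite3

# sqlite3 here is only a stand-in for a converged "lake house" engine:
# the point is that one store serves two very different workloads.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user TEXT, amount REAL, payload TEXT)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("alice", 10.0, '{"k": 1}'), ("alice", 5.0, '{"k": 2}'),
     ("bob", 7.5, '{"k": 3}')],
)

# Warehouse-style need: an aggregated BI query over structured columns.
bi = dict(con.execute(
    "SELECT user, SUM(amount) FROM events GROUP BY user ORDER BY user"))
print(bi)  # {'alice': 15.0, 'bob': 7.5}

# Lake-style need: hand the raw, semi-structured payloads to data science.
raw = [row[0] for row in con.execute("SELECT payload FROM events")]
print(len(raw))  # 3
```

Doug's two proof points map onto exactly these two queries: the warehouse crowd has to show the aggregate side scales under concurrency and SLAs, and the lake crowd has to show the raw, semi-structured side is genuinely usable by data scientists.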
Well, Tony, if those two worlds are going to come together, as Doug was saying, the analytics and the data science world, does there need to be some kind of semantic layer in between? I don't know. Where are you on this topic? >> (chuckles) Oh, didn't we talk about data fabrics before? Common metadata layer (chuckles). Actually, I'm almost tempted to say, let's declare victory and go home. This has actually been going on for a while. I actually agree with, you know, much of what Doug is saying there. I mean, I remember as far back as, I think it was like 2014, I was doing a study, I was still at Ovum, (indistinct) Omdia, looking at all these specialized databases that were coming up and seeing that, you know, there was overlap at the edges. But yet, there was still going to be a reason at the time that you would have, let's say, a document database for JSON, you'd have a relational database for transactions and for data warehousing, and you had, basically, something at that time that resembled Hadoop for what we'd consider your data lake. Fast forward, and the thing is, what I was seeing at the time was that they were sort of blending at the edges. I was saying that about five to six years ago. And the lake house is essentially the current manifestation of that idea. There is a dichotomy in terms of, you know, the old argument: do we centralize this all, you know, in a single place, or do we virtualize? And I think it's always going to be a union, and there's never going to be a single silver bullet. I do see that there are also going to be questions, and these are points that Doug raised: you know, what do you need for your performance characteristics? Do you need, for instance, high concurrency?
Do you need the ability to do some very sophisticated joins, or is your requirement more to be able to distribute your processing, you know, as far as possible, to essentially do a kind of brute force approach? All these approaches are valid based on the use case. I just see that essentially the lake house is the culmination of, it's not really new. It's a relatively new term, introduced by Databricks a couple of years ago, but this is the culmination of what's been a long-time trend. And what we see in the cloud is that we're starting to see data warehouses treat this as a checkbox item, to say, "Hey, we can basically source data in cloud storage, in S3, "Azure Blob Store, you know, whatever, "as long as it's in certain formats, "like, you know, Parquet or CSV or something like that." I see that as becoming kind of a checkbox item. So to that extent, I think that the lake house, depending on how you define it, is already reality. And in some cases, it's maybe new terminology, but not a whole heck of a lot new under the sun. >> Yeah. And Dave Menninger, I mean, a lot of these, thank you, Tony, but a lot of this is going to come down to, you know, vendor marketing, right? Some people just kind of co-opt the term; we talked about, you know, data mesh washing. What are your thoughts on this? (laughing) >> Yeah, so I used the term data platform earlier. And part of the reason I use that term is that it's more vendor-neutral. We've tried to sort of stay out of the vendor terminology battles, right? Whether the term lake house is what sticks or not, the concept is certainly going to stick. And we have some data to back it up. About a quarter of organizations that are using data lakes today already incorporate data warehouse functionality into them.
So they consider their data lake and data warehouse one and the same. About a quarter of organizations, a little less, but about a quarter of organizations, feed the data lake from the data warehouse, and about a quarter of organizations feed the data warehouse from the data lake. So it's pretty obvious that three quarters of organizations need to bring this stuff together, right? The need is there, the need is apparent. The technology is going to continue to converge. I like to talk about it, you know, you've got data lakes over here at one end, and I'm not going to talk about why people thought data lakes were a bad idea, because they thought you just throw stuff in a server and you ignore it, right? That's not what a data lake is. So you've got data lake people over here and you've got database people over here, data warehouse people over here. Database vendors are adding data lake capabilities, and data lake vendors are adding data warehouse capabilities. So it's obvious that they're going to meet in the middle. I mean, I think it's like Tony says, I think we should declare victory and go home. >> Excellent. So just a follow-up on that: are you saying the specialized lake and the specialized warehouse go away? I mean, Tony, data mesh practitioners would say, or advocates would say, well, they could all live. It's just a node on the mesh. But based on what Dave just said, are we gonna see those all morph together? >> Well, number one, as I was saying before, there's always going to be this sort of, you know, centrifugal force, or this tug of war, between do we centralize the data, or do we virtualize? And the fact is, I don't think that there's ever going to be any single answer. In terms of data mesh, data mesh has nothing to do with how you physically implement the data. You could have a data mesh basically on a data warehouse.
It's just that, you know, the difference being that even if you use the same physical data store, everybody's logically, you know, basically governing it differently. Data mesh, in essence, is not a technology; it's processes, it's governance process. So essentially, you know, as I was saying before, this is basically the culmination of a long-time trend. We're essentially seeing a lot of blurring, but there are going to be cases where, for instance, if I need, let's say, upserts, or I need high concurrency or something like that, there are certain things that I'm not going to be able to efficiently get out of a data lake, where, you know, I'm doing a system where I'm just doing really brute-force, very fast file scanning and that type of thing. So I think there always will be some delineations, but I would agree with Dave and with Doug that we are seeing basically a confluence of requirements, that the abilities of a data lake and the data warehouse need to come together. >> I think what we're likely to see is organizations look for a converged platform that can handle both sides for their center of data gravity. The mesh and the fabric and virtualization vendors are all on board with the idea of this converged platform, and they're saying, "Hey, we'll handle all the edge cases "of the stuff that isn't in that center of data gravity "but that is distributed in a cloud "or at a remote location." So you can have that single platform for the center of your data, and then bring in virtualization, mesh, what have you, for reaching out to the distributed data. >> As Dave basically said, people are happy when they virtualize data. >> I think we have been at this point, but to Dave Menninger's point, they are converging. Snowflake has introduced support for unstructured data, so obviously the lines are blurring here.
Now, what Databricks is saying is, "Aha, but it's easier to go from data lake to data warehouse than it is from database to data lake." So I think we're getting into semantics, but we're already seeing these two converge. >> So take somebody like AWS, which has got what, 15 data stores? Are they going to converge those 15 data stores? This is going to be interesting to watch. All right, guys, I'm going to go down the list and do a one-word takeaway for each of you, and if you would, each of you analysts, just add a very brief course correction for me. So Sanjeev, governance is going to be... maybe it's the dog that wags the tail now. I mean, it's coming to the fore with all this ransomware stuff; we really didn't talk much about security. But what's the one word in your prediction that you would leave us with on governance? >> It's going to be mainstream. >> Mainstream. Okay. Tony Baer, "mesh washing" is what I wrote down. That's what we're going to see in 2022, a little reality check. You want to add to that? >> Reality check, 'cause I hope that no vendor jumps the shark and claims their offering is a data mesh product. >> Yeah, let's hope that doesn't happen. If they do, we're going to call them out. Carl, graph databases: thank you for sharing some high-growth metrics. I know it's early days, but "magic" is what I took away from that. So, magic database. >> Yeah, and I've said this to people too: I kind of look at it as a Swiss Army knife of data, because you can pretty much do anything you want with it. That doesn't mean you should. I mean, there's definitely the case that if you're managing things in a fixed schematic relationship, a relational database is probably a better choice, and there are times when a document database is a better choice. A graph database can handle those things, but it may not be the best choice for those use cases.
But for a great many use cases, especially the new emerging ones I listed, it's the best choice. >> Thank you. And Dave Menninger, thank you, by the way, for bringing the data in; I like how you supported all your comments with data points. But streaming data becomes the sort of default paradigm, if you will. What would you add? >> Yeah, I would say think fast, right? That's the world we live in; you've got to think fast. >> Think fast, love it. And Brad Shimmin. I mean, on the one hand I was saying, okay, great, I'm going to be able to buy AI instead of building it. But then again, I'm afraid I might get disrupted by one of these internet giants who are AI experts, and there's a potential backlash there. So give us your bumper sticker. >> I would say, going with Dave, think fast, and also think slow, to nod to the book that everyone talks about. Really, this is all about trust: trust in the idea of automation, and a transparent and visible AI across the enterprise. And verify; verify before you do anything. >> And then Doug Henschen. I think the trend is your friend here on this prediction, with lakehouse really becoming dominant. I liked the way you set up that notion of the data warehouse folks coming at it from the analytics perspective and the data science worlds coming together. I still feel as though there's this piece in the middle that we're missing, but your final thoughts will give you the (indistinct). >> I think the idea of consolidation and simplification always prevails. That's why the appeal of a single platform is going to be there. We've already seen that with Hadoop platforms, and in the move toward cloud and object storage, with object storage becoming really the common storage point, whether it's a lake or a warehouse.
And that second point: I think ESG mandates are going to come in alongside GDPR and the like to up the ante for good governance. >> Yeah, thank you for calling that out. Okay, folks, hey, that's all the time we have here. Your experience and depth of understanding on these key issues in data and data management were really on point, and they were on display today. I want to thank you for your contributions. Really appreciate your time. >> Enjoyed it. >> Thank you. >> Thanks for having me. >> In addition to this video, we're going to be making available transcripts of the discussion. We're going to do clips of this as well, and we're going to put them out on social media. I'll write this up and publish the discussion on wikibon.com and siliconangle.com. No doubt several of the analysts on the panel will take the opportunity to publish written content, social commentary, or both. I want to thank the power panelists, and thanks for watching this special CUBE presentation. This is Dave Vellante; be well, and we'll see you next time. (bright music)
DockerCon2021 Keynote
>> Individual creative developers translate ideas to code to create great applications, and great applications touch everyone. At Docker, we know that collaboration is key to your innovation: sharing ideas, working together, launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, doing research to save lives during a pandemic, revolutionizing how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, and run the future. >> Hello, and welcome to DockerCon 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year's DockerCon is 100% virtual and 100% free, so as to enable as many community members as possible to join us. Now, 100% virtual is also an acknowledgement of the continuing global pandemic, in particular the ongoing tragedies in India and Brazil. The Docker community is a global one, and on behalf of all DockerCon attendees, we are donating $10,000 to UNICEF to support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual-first is the new normal. It's been a challenging transition, and this includes our team here at Docker. We know from talking with many of you that you and your developer teams are challenged by this as well. So, to help application development teams better collaborate and ship faster, we've been working on some powerful new features, and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right then, no further delay. I'd like to introduce Youi Cal and Ben Gotch. Over to you, Youi and Ben. >> Morning, Ben, thanks for jumping on real quick. Have you seen the email from Scott?
The one about updates to the docs landing page? Making the Docker mascot more prominent? >> Yeah. I've got something working on my local machine; I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >> Yeah, that's cool. So if you hit the share button, what it should do is take all of your code and the dependencies and the image you're basing it on, and wrap that up as one image for me. I can then just pull it onto my machine with one click and have it running side by side along with the changes I've been looking at, because I was also having a bit of a look, and then I can really see how it differs from what I'm doing. Maybe I can combine them to get the best of both worlds. >> Sounds good. Uh, let me get that over to you. >> Yeah, if you ping me with the image name, I'll get that started up. >> All right, sending it over. >> Okay, great. Let's have a quick look at what Youi was doing, then. So I've been messing around, similar to Youi, with the banner. I've got Moby at the top here, and I think it looks pretty cool. Let's just grab that image from Youi and get that started in a dev environment. What this is doing is just grabbing the image down, which contains all of the code and the dependencies from the branch Youi was working on, and it'll get that opened up in my IDE, ready to use. Here, we can see our environment, with the image just coming down, and I've got my new IDE. >> We'll load this up, and it'll just connect to my dev environment. There we go, it's connected to the container, so we're working all in the container here. Now, give it a moment, and we'll see what changes Youi's been making to the code. It looks like she's been working on the landing page as well, and it looks like she's been changing the banner too. So let's get this running; let's see what she's actually doing and how it looks.
We'll bring up our localhost, and then we'll see how that works. >> Great. So that's now running. So let's just have a look at what Youi was doing, what changes she had made, and compare those to mine. I've jumped back into my dev container UI, and I can see that I've got both of those running side by side, with my changes and Youi's changes. Okay, so she's put "Molly" up there rather than "Moby"; somebody had the same idea. So I think there's a way I can make us both happy. If we just jump back in, what we'll do is add both Molly and Moby in here, and I'll save that. And because I'm just working within the container, rather than having to do a sort of rebuild of everything, I can just reload my content, and that goes straight to the page. So what I can then do is come up to my browser here. Once that's all refreshed, refresh the page once, hopefully, maybe twice, and we should then be able to see that we get Molly and Moby come up. So there we go: got Molly and Moby. So what we'll do now is describe that state, save it as our image, and then we'll just create one of those shares. We'll get a link for that, and I guess we'll send that back over to Youi. >> So I've had a look at what you were doing, and I've actually made a change that I think might work for both of us. I wondered if you could take a look at it if I send it over. >> Sounds good. Let me grab the link. >> Yeah, it's a dev environment link again. So if you just open that back in the Docker dashboard, it should be able to open up the code that I've changed, and then you can just run it in the same way you normally do. And that shouldn't interrupt what you're already working on, because it'll be able to run side by side with the other branch you've already got. >> Got it. Got it. Loading here. Oh, that's great. It's Molly and Moby together. I love it. I think we should ship it. >> Awesome. I guess let's ship it and get on with the rest of DockerCon. >> Wasn't that cool?
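The live-edit loop shown in the demo (save a change, refresh the browser, no rebuild) depends on the code being mounted into the running container rather than baked into the image. Outside the Dev Environments feature, a plain bind mount gets the same effect. This Compose file is a minimal sketch only; the service name, port, and `./site` path are illustrative, not taken from the demo:

```yaml
# docker-compose.yml -- hypothetical static-site setup for live editing
services:
  web:
    image: nginx:alpine                    # any server that reads files from disk works
    ports:
      - "8080:80"                          # browse at http://localhost:8080
    volumes:
      - ./site:/usr/share/nginx/html:ro    # host edits show up on the next refresh
```

With this running via `docker compose up`, anything edited under `./site` on the host is immediately visible inside the container, so a browser refresh picks up the change with no image rebuild.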
Thank you, Youi. Thanks, Ben. Everyone, we'll have more of this later in the keynote, so stay tuned. As I said earlier, we've all been challenged by this past year, whether by the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment, or business bankruptcies. We've all been touched in some way. And yet, even amidst these tragedies, last year we saw multiple sources of hope and inspiration. For example, in response to COVID, we saw global communities, including the tech community, rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes, and visualizing infection rates. In fact, individuals and teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker Hub. As another example, we all witnessed the historic landing and exploration of Mars by the Perseverance rover and its Ingenuity drone. >> Now, what's common in these examples is that these innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. The power of teams is why we've made development teams central to Docker's mission: to build tools and content development teams love, to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow teams down is that the path from code to cloud can be a confusing one, riddled with multiple point products, tools, and images that need to be integrated and maintained in an automated pipeline in order for teams to be productive. That's why, a year and a half ago, we refocused Docker on helping development teams make sense of all this. Specifically, our goal is to provide development teams with the trusted content, the sharing capabilities, and the pipeline integrations with best-of-breed third-party tools to help teams ship faster; in short, to provide a collaborative application development platform: everything a team needs to build,
share, and run great applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet, and it's been similar for us here at Docker. Our team had to adapt to working from home, local lockdowns caused by the pandemic, and other challenges. And despite all this, together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source, together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the Compose plugins. Building on these open source projects, we added powerful new capabilities to the Docker product, both free and subscription: for example, support for WSL 2 and Apple Silicon in Docker Desktop, and vulnerability scanning, audit logs, and image management in Docker Hub. >> And finally, delivering an easy-to-use, well-integrated development experience with best-of-breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercial ISVs join our Docker Verified Publisher program and over 200 open source projects join our Docker Sponsored Open Source program. As a result of these efforts, we've seen some exciting growth in the Docker community. In the 12 months since last year's DockerCon, the number of registered developers grew 80% to over 8 million. These developers created many new images, increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses, totaling 13 billion pulls a month. Now, while the growth is exciting, we at Docker are even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses.
For example, cancer researchers and their bioinformatics development team at the Washington University School of Medicine needed a way to quickly analyze their clinical trial results and then share the models, the data, and the analysis with other researchers. They use Docker because it gives them the ease of use, choice of pipeline tools, and speed of sharing so critical to their research, and most importantly, to the lives of their patients. Stay tuned for another powerful customer story later in the keynote, from Matt Fall, VP of engineering at Oracle Insights. >> So with this last year behind us, what's next for Docker? The challenges of this last year forced changes in how development teams work that will be felt for years to come, and what we've learned in our discussions with you will have a long-lasting impact on our product roadmap. One of the biggest takeaways from those discussions is that you and your development teams want to be quicker to adapt to changes in your environment so you can ship faster. So what is Docker doing to help with this? First, trusted content. Teams that can focus their energies on what is unique to their businesses, and spend as little time as possible on undifferentiated work, are able to adapt more quickly and ship faster. In order to do so, they need to be able to trust the other components that make up their app. >> Together with our partners, Docker is doubling down on providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration. On a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So, as was even hinted in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team.
And we're enabling development teams to quickly adapt to any team configuration, all on-prem, hybrid, or all work-from-home, helping them remain productive and focused on shipping. Third, ecosystem integrations. The development teams that can quickly take advantage of innovations throughout the ecosystem, instead of getting locked into a single monolithic pipeline, will be the ones able to deliver apps that impact their businesses faster. >> So together with our ecosystem partners, we are investing in more integrations with best-of-breed tools and tightly integrated, automated app pipelines. Furthermore, we'll be writing more public APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations later in the keynote. Now I'd like to take a moment to share what Docker and our partners are doing for trusted content. Providing development teams access to content they can trust allows them to focus their coding efforts on what's unique and differentiated. To that end, Docker and our partners are bringing more and more trusted content to Docker Hub. Docker Official Images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming languages, databases, and more. Furthermore, these are updated, patched, scanned, and certified frequently, so that no image is older than 30 days. >> Docker Verified Publisher images are published by more than 100 commercial ISVs. The image repos are explicitly designated as verified, so developers searching for components for their app know that the ISV is actively maintaining the image. Docker Sponsored Open Source projects, announced late last year, feature images for more than 200 open source communities.
Docker sponsors these communities by providing free storage and networking resources and offering their community members unrestricted access. Repos for businesses allow businesses to update and share their apps privately within their organizations, using role-based access control and user authentication. And finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike. >> And for all these different types of content, we provide services for both development teams and ISVs: for example, vulnerability scanning and digital signing for enhanced security, search and filtering for discoverability, packaging and updating services, and analytics about how these products are being used. All this trusted content we make available for development teams to directly discover, pull, and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with JFrog late last year, and today we're very pleased to announce our partnerships with Amazon and Mirantis, providing an integrated, seamless experience for our joint customers. Lastly, the container images themselves and this end-to-end flow are built on open industry standards, which provide all teams with flexibility and choice. Trusted content enables development teams to rapidly build, as it lets them focus on their unique, differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content, as well as remote collaboration and ecosystem integrations, later in the keynote. Now, ecosystem partners are not only integral to the Docker experience for development teams; they're also integral to a great DockerCon experience. So please join me in thanking our DockerCon sponsors and checking out their talks throughout the day. I also want to thank some others. First up, the Docker team: like all of you, this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product. Next, the Docker community of captains, community leaders, and contributors: with your welcoming of newcomers, enthusiasm for Docker, and open exchanges of best practices and ideas, Docker wouldn't be Docker without you. And finally, our development team customers. >> You trust us to help you build the apps your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the 10x developer capable of great individual feats that can transform a project. But I wonder if we as an industry have perhaps gotten this wrong by putting so much emphasis, so much weight, on the individual. As discussed at the beginning, great accomplishments, like the innovative responses to COVID-19 and the landing on Mars, are more often the result of individuals collaborating together as a team, which is why our mission here at Docker is to deliver the tools and content developers love, to help their teams succeed and become 10x teams. Thanks again for joining us. We look forward to having a great DockerCon with you today, as well as a great year ahead. Thanks, and be well. >> Hi, I'm Dana Lawson, VP of engineering here at GitHub, and my job is to enable this rich, interconnected community of builders and makers to build even more, and hopefully have a great time doing it. In order to enable the best platform for developers, which I know is something we are all passionate about, we need to partner across the ecosystem to ensure that developers can have a great experience across GitHub and all the tools they want to use, no matter what they are. My team works to build the tools and relationships to make that possible.
I am so excited to join Scott on this virtual stage to talk about increasing developer velocity. So let's dive in. Now, I know this may be hard for some of you to believe, but I'm a former sysadmin; some 21 years ago I was working on Sun SPARC workstations, and we've come such a long way from the random scripts and disparate systems we stitched together to this whole inclusive developer workflow experience.
It's just so incredible what you can do. And by the way, I'm showing you actions in Docker, which I hope you use because both are great and free for open source. >>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do anybody out there can, and in this demo, I'll show you about the basic components needed to create and use a package, Docker container actions. And like I said, you won't believe how awesome the combination of Docker and actions is because you can enable your workflow to do no matter what you're trying to do in this super baby example. We're so small. You could take like 10 seconds. Like I am here creating an action due to a simple task, like pushing a message to your logs. And the cool thing is you can use it on any the bit on this one. Like I said, we're going to use push. >>You can do, uh, even to order a pizza every time you roll into production, if you wanted, but at get hub, that'd be a lot of pizzas. And the funny thing is somebody out there is actually tried this and written that action. If you haven't used Docker and actions together, check out the docs on either get hub or Docker to get you started. And a huge shout out to all those doc writers out there. I built this demo today using those instructions. And if I can do it, I know you can too, but enough yapping let's get started to save some time. And since a lot of us are Docker and get hub nerds, I've already created a repo with a Docker file. So we're going to skip that step. Next. I'm going to create an action's Yammel file. And if you don't Yammer, you know, actions, the metadata defines my important log stuff to capture and the input and my time out per parameter to pass and puts to the Docker container, get up a build image from your Docker file and run the commands in a new container. >>Using the Sigma image. 
The cool thing is you can use any Docker image, in any language, for your actions; it doesn't matter if it's Go or whatever. Today I'm going to use a shell script and an input variable to print my important log stuff to a file. And like I said, you know me, I love me some logs. So let's see this action in a workflow. When an action is in a private repo, like the one I'm demonstrating today, the action can only be used in workflows in the same repository, but public actions can be used by workflows in any repository. So unfortunately you won't get access to this super awesome action, but don't worry: in the GitHub Marketplace there are over 8,000 actions available, including the most important one, that pizza action. So go try it out. Now, you can do this in a couple of ways, whether in your preferred IDE or, for today's demo, the GUI. I'm going to navigate to my Actions tab, as I've done here, and select "New workflow." It'll probably load some starter workflows to get you started, but I'm using the one I've copied; like I said, lazy developer that I am, I'm going to replace it with my action. >> That's it. So now we're going to commit our new file. Now, if we go over to our Actions tab, we can see the workflow in progress in my repository. I just click the Actions tab, and because we wrote the action to run on push, we can watch the visualization under jobs and click the job to see the important stuff we're logging: the input stamp and the printed log. And we'll just wait for this to run. "Hello, Mona," and boom, just like that, it runs automatically within our action. We told it to go run as soon as the file is updated, because we're doing it on push. That's right, folks: in just a few minutes, I built an action that writes an entry to a log file every time I push, so I don't have to do it manually.
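The workflow file committed in the demo would look something like this sketch. The file name and the message value are assumptions, but the on-push trigger and the `uses: ./` reference to a local action at the repo root are the standard way to wire this up:

```yaml
# .github/workflows/log-on-push.yml -- run the local action on every push
name: log-on-push
on: push
jobs:
  log:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2   # check out the repo so the local action is available
      - uses: ./                    # the Docker container action at the repo root
        with:
          message: 'Hello, Mona'
```

Because the trigger is `on: push`, the job appears under the Actions tab immediately after the commit, which is exactly the visualization walked through above.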
In essence, with automation you can be kind to your future self and save time and effort to focus on what really matters. >> Imagine what I could do with even a little more time; probably order all y'all pizzas. That is the power of the interconnected workflow, and it's amazing, and I hope you all go try it out. But why do we care about all of that? Just like in the demo, I took a manual task, which both takes time and is easy to forget, and automated it, so I don't have to think about it and it's executed every time, consistently. That means less time worrying about my human errors and mistakes, and more time to focus on actually building the cool stuff that people want. Obviously automation drives developer productivity, but what is even more important to me is developer happiness. Tools like VS Code, Actions, Docker, Heroku, and many others reduce manual work, which allows us to focus on building things that are awesome >> and to get into that wonderful state that we call flow. According to research by UC Irvine and Humboldt University in Germany, it takes an average of 23 minutes to enter an optimal creative state, what we call flow, or to re-enter it after a distraction, like your dog at your office door. So staying in flow is critical to developer productivity, and as a developer, it just feels good to be cranking away at something with deep focus. The intuitive collaboration and automation features we've built into GitHub help developers stay in flow, allowing you and your team to do so much more. To bring the benefits of automation into perspective, in our annual Octoverse report, Dr. Nicole Forsgren, one of my buddies here at GitHub, took a look at developer productivity over this past year. You know what we found? >> We found that public GitHub repositories that use automation on pull requests merge those pull requests 1.2 times faster.
And the number of merged pull requests increased by 1.34 times; that is, 34% more pull requests merged. In other words, automation can dramatically increase both the speed and the quantity of work completed. In any role, just like in open source development, you'll work more efficiently and with greater impact when you invest the bulk of your time in the work that adds the most value and eliminate or outsource the rest, because you don't need to do it; let the machines do it. By leveraging automation in their workflows, teams minimize manual work, reclaim that time for innovation, and maintain that state of flow in development and collaboration. More importantly, their work is more enjoyable, because they're not wasting time doing the things that machines or robots can do for them. >>And remember what I said at the beginning: many of us want to be efficient, heck, even lazy. So why would I spend my time doing something I can automate? Now you can read more about the research behind this at octoverse.github.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem, we at GitHub are so honored to be the home of more than 65 million developers who build software together from everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build, and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it. Check out this globe. This globe shows real data: every speck of light you see here represents a contribution to an open source project somewhere on Earth. >>These arcs reach across continents, cultures, and other divides. It's distributed collaboration at its finest. 20 years ago, we had no concept of DevOps, SecOps, or all the new ops that are going to be happening.
But today's development and ops teams are connected like never before. This is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers. Automation helps us focus on what's important and greatly accelerates innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements, I'll say, ever, including critical COVID-19 vaccine trials, as well as the first powered flight on Mars just this past month. These breakthroughs were only possible because of the interconnected, collaborative open source communities on GitHub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating, so we collectively can give developers the experience they deserve, all of the automation and beautiful UIs that we can muster, so they can continue to build the things that truly do change the world. Thank you again for having me today, DockerCon. It has been a pleasure to be here with all you nerds. >>Hello, I'm Justin Cormack. Lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything: security, ops, on-call, data analysis, all being put on the developers. Software's eating the world, of course, and this all makes sense in that view, but they need help. One team I talked to shifted all their .NET apps to run on Linux from Windows, but their developers found the complexity of Dockerfiles based on Linux shell scripts really difficult. Docker has helped make these things easier for your teams. You want to collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers, have asked for a paved road experience. You want things to just work, with simple options that are there. But it's not just the paved road. You also want to be able to go off-road and do interesting and different things.
>>Use different components, experiment, innovate as well. We'll always offer you both those choices at different times. Different developers want different things, and it may shift from one to the other, paved road or off-road. Sometimes you want reliability and dependability, in the zone for day-to-day work, but sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you need those off-road abilities too, so you can really get under the hood and go build something weird and wonderful and amazing that gives you new options. Docker is an independent choice. We don't own the roads, and we're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as the OCI, working in open source with the CNCF. We want to help you get your applications from your laptops to the clouds, and beyond, even into space. >>Let's talk about the key focus areas that frame what Docker is doing going forward. These are simplicity, sharing, flexibility, trusted content, and secure supply chain. Compared to building with the underlying kernel primitives like namespaces and cgroups, the original Docker CLI and Docker Engine were a magical experience for everyone. They really took those innovations and put them in a world where anyone could use them, but that's not enough. We need to continue to innovate. Everyone is trying to get more done, faster, all the time, and there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the areas where we haven't made things magical enough, and that we're really planning around now, is this: Docker images are the key parts of your application, but how do I do something with an image? How do I, where do I, attach volumes to this image? What's the API?
Where's the SDK for this image? How do I find an example or docs? In an API-driven world, every bit of software should have an API and an API description, and our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from, you know, from your code to the cloud, local and remote. You can use containers in this amazing and exciting way. >>One thing I really noticed in the last year is that companies that started off remote-first have constant collaboration. They have Zoom calls open all day, terminals, sharing, always working together. Other teams are really trying to learn how to do this style, because they didn't start like that. We used to walk around to other people's desks or share services on the local office network, and it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal. Let me try your container, or maybe let's collaborate on this together. You know, fast collaboration, fast iteration, fast working together. And you want to share more: you want to share whole development environments, not just an image. We all work by seeing something someone else on our team is doing and saying, how can I do that too? I want to make that sharing really, really easy. Ben's going to talk about this more in a minute. >>We know how excited you are by Apple Silicon and Graviton, not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. M1 support was the most asked-for thing ever on our public roadmap, and we listened. We see really exciting possibilities to ship Arm applications all the way from desktop to production.
We know that you all use different clouds and have different places to deploy to. You know, we work with AWS and Azure and Google and more, and we want to help you ship on-prem as well. And we know that you use a huge number of languages, and containers help you build applications that use different languages for different parts of the application, or for different applications, right? You can choose the best tool. You have JavaScript everywhere, Go and Rust, Python for data and ML, and perhaps you're getting excited about WebAssembly after hearing about it at KubeCon. You know, there's all sorts of things. >>So we need to make that easier. We've been running a whole month of Python on the blog, and we're doing a month of JavaScript, because you've asked for specific support on how to best put each language into production. That detail is important for you. GPUs have been difficult to use. We've added GPU support in Desktop for Windows, but we know there's a lot more to do to make the multi-architecture, multi-hardware, multi-accelerator world work better, and also securely. So there's a lot more work to do to support you in all the things you want to do. >>How do we start building container-based applications? It turns out we're using existing images as components. In a survey earlier this year, almost half of container image usage was public images rather than private images, and this is growing rapidly. Almost all software has open source components, and maybe 85% of the average application is open source code. What you're doing is taking whole container images as modules in your application. This was always the model with Docker Compose, and it's a model that you're already adopting. You trust Docker Official Images; we know that they make up to 25% of pulls on Docker Hub, and Docker Hub provides you the widest choice and the best support for that trusted content. We're talking to people about how to make this more helpful. We know, for example, that Ubuntu 16.04 is just going out of support, but the image doesn't yet tell you that, so we're working with Canonical to improve messaging from specific images about lifecycle and support. >>We know that you need more images, regularly updated, free of vulnerabilities, easy to use and discover, and Donnie and Marina are going to talk about that more. This last year, the SolarWinds attack has been in the news a lot. The software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open source components. We're seeing more attacks targeted at the software supply chain, because it's often an easier place to attack than production software. We need to be able to use this external code safely. Everyone needs to start from trusted sources like Docker Official Images. They need to scan for known vulnerabilities using Docker scan, which we built in partnership with Snyk and launched at DockerCon last year. And we need to keep updating base images and dependencies; we're going to help you have the control and understanding about your images that you need to do this. >>And there's more. We're also working on the Notary v2 project in the CNCF to revamp container signing, so you can tell where your software comes from. We're working on tooling to make updates easier, and to help you understand and manage all the components you're using. Security is a growing concern for all of us; it's really important, and we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products, without deep partnerships with our community and the cloud providers, where most of you ship your applications to production, and simple routes that take your work and deploy it easily.
Reliably and securely are really important too: just getting into production simply, easily, and securely. We've done a bunch of work on that, but we know there's more to do. >>The CNCF and the open source cloud native community are an amazing ecosystem of creators and lovely people, building an amazingly strong community and supporting a huge amount of innovation. It has its roots in the container ecosystem, and its dreams go beyond that. Much of the innovation has focused on the operator experience so far, but developer experience is a growing concern in that community as well, and we're really excited to work on that. We also use Kubernetes ourselves, as we know you do, and we know that you want it to be easier to use in your environment. We just shifted Docker Hub to run fully on Kubernetes, and we're also using many of the other projects, Argo for instance. We're spending a lot of time working with Microsoft and Amazon right now on getting Notary v2 ready to ship in the next few months. That's a really detailed piece of collaboration we've been working on for a long time, and it's really important for our community, as is the security of containers and getting content to you. Working together makes us stronger. Our community is made up of all of you, and it's always amazing to be reminded of that: a huge open source community that we are proud to work with, and an amazing amount of innovation that you're all creating. We're happy to work with you and share with you as well. Thank you very much, and thank you for being here. >>I'm really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster, and turn your application delivery into something that makes you a 10x team. What we're hearing from you, the developers using Docker every day, fits across three common themes that we hear consistently, over and over. We hear that your time is super important.
It's critical, and you want to move faster. You want your tools to get out of your way and instead enable you to accelerate and focus on the things you want to be doing. Part of that is that finding great content, great application components that you can incorporate into your apps to move faster, is really hard. It's hard to discover, and it's hard to find high-quality content that you can trust, that, you know, passes your tests and your configuration needs.
I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need from the development environment, to the container images, to the collaboration services, to the pipelines and integrations that enable you to focus on making your applications amazing and changing the world. If we start zooming on a one of those aspects, collaboration we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates. Look, then they can easily get up and running with the same applications, the same tooling, the same version of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. >>They want to collaborate on things before they're done, rather than feeling like they have to officially published something before they can effectively share it with others to work on it, to solve this. We're thrilled today to announce Docker, dev environments, Docker, dev environments, transform how your team collaborates. They make creating, sharing standardized development environments. As simple as a Docker poll, they make it easy to review your colleagues work without affecting your own work. And they increase the reproducibility of your own work and decreased production issues in doing so because you've got consistent environments all the way through. Now, I'm going to pass it off to our principal product manager, Ben Gotch to walk you through more detail on Docker dev environments. >>Hi, I'm Ben. I work as a principal program manager at DACA. 
One of the areas that doc has been looking at to see what's hard today for developers is sharing changes that you make from the inner loop where the inner loop is a better development, where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and try and actually ship them out to production. Most amount of us build this flow and get there still leaves a lot of challenges. People need to jump between branches to look at each other's work. Independence. Dependencies can be different when you're doing that and doing this in this new hybrid wall of work. Isn't any easier either the ability to just save someone, Hey, come and check this out. It's become much harder. People can't come and sit down at your desk or take your laptop away for 10 minutes to just grab and look at what you're doing. >>A lot of the reason that development is hard when you're remote, is that looking at changes and what's going on requires more than just code requires all the dependencies and everything you've got set up and that complete context of your development environment, to understand what you're doing and solving this in a remote first world is hard. We wanted to look at how we could make this better. Let's do that in a way that let you keep working the way you do today. Didn't want you to have to use a browser. We didn't want you to have to use a new idea. And we wanted to do this in a way that was application centric. We wanted to let you work with all the rest of the application already using C for all the services and all those dependencies you need as part of that. And with that, we're excited to talk more about docket developer environments, dev environments are new part of the Docker experience that makes it easier you to get started with your whole inner leap, working inside a container, then able to share and collaborate more than just the code. 
>>We want to enable you to share your whole modern development environment, your whole setup from Docker, with your team on any operating system. We'll be launching a limited beta of dev environments in the coming month, and at GA, dev environments will be IDE-agnostic and support Compose. This means you'll be able to use and extend your existing Compose files to create your own development environment in whatever IDE you're working in. Dev environments are designed to be local-first. They work with Docker Desktop and your existing IDE, and they let you share that whole inner loop, that whole development context, with all of your teammates in just one click. This means that if you want to get feedback on a work-in-progress change or a PR, it's as simple as opening another IDE instance and looking at what your team is working on. Because we're using Compose, you can just extend the existing Compose file you're already working with, to actually create this whole application and have it all working in the context of the rest of the services. >>So you're actually working with the whole environment, rather than one service that doesn't really make sense on its own. And with that, let's jump into a quick demo. You can see here two dev environments up and running. The first one here is a single-container dev environment. If I want to go into that, with the VS Code button here, I can open it and get straight into my application, to start making changes inside that dev container. I've got all my dependencies in here, so I can just run it straight away. The second application I have here is one that's set up with Compose, and I can see that I've also got my backend, my frontend, and my database, so I've got all my services running here. If I want, I can open one or more of these in a dev environment, meaning that that container, that dev environment, has the context of the whole application. >>So I can get back in and connect to all the other services that I need to test this application properly, all of them as one unit. And then, when I've made my changes and I'm ready to share, I can hit my share button, type in the repo I want to share it to, and then give that image to someone else, who can pick it up and just start working with that code and all my dependencies, as simple as pulling an image. Looking ahead, we're going to be expanding dev environments to cover more of your dependencies, for the whole developer workspace. We want to look at backing up and letting you share your volumes, to make data science and database setups more repeatable, and, going on, to bring all of this under a single workspace for your team, containing your images, your dev environments, your volumes, and more. We really want to allow you to create a fully portable Linux development environment.
And up-to-date on every one of those ties into how do I create more trust? How do I know that I'm building high quality applications to enable you to do this even more effectively than today? We are pleased to announce the DACA verified polisher program. This broadens trusted content by extending beyond Docker official images, to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. And finally it simplifies your discovery of the best building blocks by making it easy for you to find things that you know, you can trust so that you can incorporate them into your applications and move on and on the right. You can see some examples of the publishers that are involved in Docker, official images and our Docker verified publisher program. Now I'm pleased to introduce you to marina. Kubicki our senior product manager who will walk you through more about what we're doing to create a better experience for you around trust. >>Thank you, Dani, >>Mario Andretti, who is a famous Italian sports car driver. One said that if everything feels under control, you're just not driving. You're not driving fast enough. Maya Andretti is not a software developer and a software developers. We know that no matter how fast we need to go in order to drive the innovation that we're working on, we can never allow our applications to spin out of control and a Docker. As we continue talking to our, to the developers, what we're realizing is that in order to reach that speed, the developers are the, the, the development community is looking for the building blocks and the tools that will, they will enable them to drive at the speed that they need to go and have the trust in those building blocks. 
And in those tools that they will be able to maintain control over their applications. So as we think about some of the things that we can do to, to address those concerns, uh, we're realizing that we can pursue them in a number of different venues, including creating reliable content, including creating partnerships that expands the options for the reliable content. >>Um, in order to, in a we're looking at creating integrations, no link security tools, talk about the reliable content. The first thing that comes to mind are the Docker official images, which is a program that we launched several years ago. And this is a set of curated, actively maintained, open source images that, uh, include, uh, operating systems and databases and programming languages. And it would become immensely popular for, for, for creating the base layers of, of the images of, of the different images, images, and applications. And would we realizing that, uh, many developers are, instead of creating something from scratch, basically start with one of the official images for their basis, and then build on top of that. And this program has become so popular that it now makes up a quarter of all of the, uh, Docker poles, which essentially ends up being several billion pulse every single month. >>As we look beyond what we can do for the open source. Uh, we're very ability on the open source, uh, spectrum. We are very excited to announce that we're launching the Docker verified publishers program, which is continuing providing the trust around the content, but now working with, uh, some of the industry leaders, uh, in multiple, in multiple verticals across the entire technology technical spec, it costs entire, uh, high tech in order to provide you with more options of the images that you can use for building your applications. 
And it still comes back to trust: when you are searching for content in Docker Hub and you see the verified publisher badge, you know that this is content that comes from one of our partners, and you're not running the risk of pulling a malicious image from an impostor source. >>As we look beyond what we can do to provide reliable content, we're also looking at some of the tools and infrastructure we can build to create security around the content that you're creating. At last year's DockerCon, we announced our partnership with Snyk, and later in the year we launched our Docker Desktop and Docker Hub vulnerability scans, which give you the option of running scans at multiple points in your dev cycle. In addition to providing you with information on the vulnerabilities in your code, they also provide you with guidance on how to remediate those vulnerabilities. But as we look beyond vulnerability scans, we're also looking at some of the other things we can do to further ensure the integrity and security around your images. With that, later this year, we're looking to launch scoped personal access tokens, and instead of talking about them, I will simply show you what they look like. >>As you can see here, this is my page in Docker Hub, where I've created four tokens: read-write-delete, read-write, read-only, and public-repo read-only. Earlier today, I went in and logged in with my read-only token. When I pull an image, it allows me to pull the image, no problem, success. And then, for the next step, I'm going to try to push an image into the same repo.
What you see is that it gives me an error message saying that access is denied, because additional authentication is required. So these are the things that we're looking to add to our roadmap, as we continue thinking about what we can do to provide additional content building blocks and tools to build trust, so that our Docker developers can ship code faster than Mario Andretti could ever imagine. Thank you. >>Thank you, Marina. It's amazing what we can do to improve trusted content, so that you can accelerate your development, move more quickly and more collaboratively, and build upon the great work of others. Finally, we hear over and over, as developers are working on their applications, that they're looking for environments that are consistent, that are the same as production, and that they want their applications to really run anywhere: any environment, any architecture, any cloud. One great example is the recent announcement of Apple Silicon. We heard, in an uproar from developers, that they needed Docker to be available for that architecture before they could adopt it and be successful. And we listened. Based on that, we are pleased to share with you Docker Desktop on Apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying in Arm-based cloud environments and having a consistent architecture across your development and production, or using multi-architecture support, which enables your whole team to collaborate on an application using private repositories on Docker Hub. I'm thrilled to introduce you to Hughie Cower, senior director for product management, who will walk you through more of what we're doing to create a great developer experience. >>I'm senior director of product management at Docker, and I'd like to jump straight into a demo.
This is the Mac mini with the Apple Silicon processor, and I want to show you how you can now do an end-to-end Arm workflow, from my M1 Mac mini to a Raspberry Pi. As you can see, we have VS Code and Docker Desktop installed on the Mac mini. I have a small example here: a Raspberry Pi 3 with an LED strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here builds the application. We build the image with the docker buildx command to make the image compatible with all Raspberry Pis, targeting arm64. Part of this build runs with the native power of the M1 chip. I also add the push option, to easily share the image with my team so they can give it a try too. >>Docker creates the local image with the application and uploads it to Docker Hub. After we've built and pushed the image, we can go to Docker Hub and see the new image there. You can also explore a variety of images that are compatible with Arm processors. Now let's go to the Raspberry Pi. I have Docker already installed, and it's running 64-bit Ubuntu. With the docker run command, I can run the application, and let's see what happens. You can see Docker is downloading the image automatically from Docker Hub, and when it's running, if it works right, there are some nice colors. And with that, we have an end-to-end workflow for Arm. We're continuing to invest in providing you a great developer experience that's easy to install and easy to get started with, as you saw in the demo. Whether you're interested in the new Mac mini or in developing for Arm platforms in general, we've got you covered, with the same experience you've come to expect from Docker, and with over 95,000 Arm images on Hub, including many Docker Official Images.
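The kind of Dockerfile and build invocation described in the demo might look roughly like this. It is a sketch under assumptions: the base image, file names, and tag are invented, and the demo's actual application code is not shown.

```dockerfile
# Hypothetical sketch of a small arm64-targeted image like the demo's.
# Built and pushed with something like:
#   docker buildx build --platform linux/arm64 -t <your-user>/led-rainbow --push .
FROM python:3.9-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "rainbow.py"]
```

On the Pi, a plain docker run then pulls the variant matching its architecture automatically, which is what makes the end-to-end flow in the demo work.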
We're so delighted to hear when folks say that the new Docker Desktop for Apple Silicon just works for them. But that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is so important. We're introducing Compose V2, which makes Compose a first-class citizen in the Docker CLI; you no longer need to install a separate compose binary in order to use Compose. Deploying to production is simpler than ever with the new Compose integration that enables you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. If you're interested in running slightly different services when you're debugging versus testing, or just in general development, you can manage that all in one place with the new Compose service profiles. To hear more about what's new in Docker Desktop, please join me in the 3:15 breakout session this afternoon. >> And now I'd love to tell you a bit more about Buildx and convince you to try it, if you haven't already. It's our next-gen build command, and it's no longer experimental. As shown in the demo, with Buildx you'll be able to do multi-architecture builds and share those builds with your team and the community on Docker Hub. With Buildx, you can speed up your build processes with remote caches, or build all the targets in your Compose file in parallel with buildx bake. And there's so much more. If you're using Docker Desktop or Docker CE, you can use Buildx. Check out Tonis's talk this afternoon at 3:45 to learn more about Buildx. And with that, I hope everyone has a great DockerCon, and back over to you, Donnie. >> Thank you, Youi. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work.
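That local-to-cloud flow can be sketched as below. The context name is hypothetical, and deploying assumes AWS credentials are configured for the Compose CLI's ECS integration of that era; these commands need a Docker daemon and cloud account, so no output is shown:

```shell
# Compose V2 ships as a CLI plugin: "docker compose" (a space, not a hyphen).
docker compose up -d        # run the application locally

# Create an ECS context, then deploy the same compose file to AWS:
docker context create ecs myecs
docker context use myecs
docker compose up           # provisions and runs the services on ECS
```

The point of the integration is that the compose file itself does not change between the local run and the cloud deployment; only the active context does.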
Finally, I'd like to wrap up by showing you everything that we've announced today and everything that we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker Verified Publisher program. We've announced scoped personal access tokens to make it easier for you to have a secure CI pipeline. We've announced Docker dev environments to improve your collaboration with your team. We shared with you Docker Desktop on Apple Silicon, to make sure that Docker runs everywhere you need it to run. And we've announced Docker Compose V2, finally making Compose a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well, from audit logs to advanced image management to Compose service profiles, to improve where you can run Docker more easily. >> Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain, which only grows more and more important as time goes on. We're going to be optimizing your update experience to make sure that you can easily understand the current state of your application and all its components, and keep them all current without worrying about breaking everything as you do so. We're going to make it easier for you to synchronize your work using cloud sync features. We're going to improve collaboration through dev environments and beyond. And we're going to make it easy for you to run your microservices in your environments without worrying about things like architecture or differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better.
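As a sketch of how a scoped personal access token fits into a CI pipeline (the username, secret variable name, and image are placeholders; this needs a Docker daemon, so no output is shown):

```shell
# Keep the token in a CI secret variable, never in the pipeline file itself.
echo "$DOCKER_PAT" | docker login --username myuser --password-stdin

# A token scoped to read-only pulls limits the blast radius if it leaks:
# the pipeline can pull images but cannot push or delete anything.
docker pull myuser/myapp:latest
```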
And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker. >> I'm Matt Falk, head of engineering at Orbital Insight, and today I want to talk to you a little bit about data from space. So who am I? Like many of you, I'm a software developer; I've been a software developer at about seven companies so far, and now I'm a head of engineering. So I spend most of my time doing meetings, but occasionally I'll still spend time doing design discussions and code reviews. And in my free time, I still like to dabble in things like Project Euler. So who is Orbital Insight? What do we do? Orbital Insight is a large data supplier and analytics provider: we take geospatial data from anywhere on the planet, from any overhead sensor, and translate it into insights for the end customer. Specifically, we have a suite of high-performance artificial intelligence and machine learning analytics that run on this geospatial data.
And we build them specifically to determine natural and human activity levels anywhere on the planet. What that really means is that we take any type of data associated with a latitude and longitude, and we identify patterns so that we can detect anomalies. Everything that we do is about identifying those patterns to detect anomalies. So more specifically, what type of problems do we solve? Supply chain intelligence: this is one of the use cases that we like to talk about a lot. It's one of our main primary verticals right now, and as Scott mentioned earlier, this had a huge impact last year when COVID hit. Supply chain intelligence is all about identifying movement patterns to and from operating facilities in order to identify changes in those supply chains. How do we do this? For example, we can track the movement of trucks.
>> So: identifying trucks moving from one location to another, in aggregate. We can do the same thing with foot traffic: look at aggregate groups of people moving from one location to another and analyze their patterns of life. We can look at two different locations to determine how people move between them, or go back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying the changes to that supply chain. As I said, last year with COVID everything changed; supply chains in particular changed incredibly, and it was hugely important for customers to know where their goods or their products were coming from and where they were going, where there were disruptions in their supply chain, and how those were affecting their overall supply and demand. So using our platform, our suite of tools, you can start to gain a much better picture of where your suppliers or your distributors are coming from or going to.
>> So what does our team look like? My team is currently about 50 engineers, spread across four teams, structured like this. The first team is infrastructure engineering, and this team largely deals with deploying our Docker images using Kubernetes. So this team is all about taking Docker images built by other teams, sometimes building the images themselves, and putting them into our production system, our platform. The platform engineering team produces these microservices: they produce microservice Docker images, they develop and test with them locally, and their entire environments are Dockerized. They produce these images and hand them over to infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing: they develop and test with Docker locally, and they also produce a suite of Docker images that the infrastructure team can then deploy.
And lastly, we have our R&D team, and this team specifically produces machine learning algorithms using NVIDIA Docker. Collectively, we've actually built 381 Docker repositories and have had 14 million
Docker pulls over the lifetime of the company; just a few stats about us. But what I'm really getting at here is that you can see Docker images becoming almost a form of communication between these teams. One of the paradigms in software engineering that you're probably familiar with is encapsulation: for a lot of software engineering problems, it's really helpful to break the problem down, isolate the different pieces of it, and start building interfaces between the code. This allows you to scale different pieces of the platform, or different pieces of your code, in different ways; you can scale up certain pieces and keep others at a smaller level so that you can meet customer demands. And for us, one of the things that we can largely do now is use Docker images as that interface. So instead of having an entire platform where all teams are talking to each other and everything is mishmashed into a monolithic application, we can now say this team is only able to talk to that team by passing over a particular Docker image that defines the interface of what needs to be built before it passes to the next team. That really allows us to scale our development and be much more efficient.
>> Also, I'd like to say we are hiring. We have about 30 open roles in our engineering team that we're looking to fill by the end of this year, so if any of this sounds really interesting to you, please reach out after the presentation.
>> So what does our platform do, really? Our platform allows you to answer any geospatial question, and we do this with three different inputs. First off: where do you want to look? We do this with what we call an AOI, or area of interest. You can think of this as a polygon drawn on the map.
So we have a curated data set of almost 4 million AOIs, which you can search and use for your analysis, but you're also free to build your own. The second question is what you want to look for. We answer this with the more interesting part of our platform: our machine learning and AI capabilities. We have a suite of algorithms that automatically allow you to identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another, and the different locations that people in a particular area are moving to or coming from. All of these different analytics are available at the click of a button, and that's how you determine what you want to look for.
>> Lastly, you determine when you want to find what you're looking for. Do you want to look over the next three hours? At the last week? Every month for the past two? Whatever the time cadence is, you decide. You hit go, and out pops a time series, and that time series tells you, for the place you wanted to look and the thing you wanted to look for, how many instances, or what percentage, of that thing appear in the area. Again, we do all of this to work towards patterns: we use all this data to produce a time series, and from there we can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chain, this is extremely valuable for identifying where things change. So we can answer these questions looking at a particular operating facility: what the level of activity is at that operating facility, where people are coming from, and where they're going to after visiting that facility, and identify when and where that changes. Here you can see a picture of our platform.
It's actually showing all the devices in Manhattan over a period of time, in more of a heat-map view, so you can actually see the hotspots in the area.
>> So really, and this is the heart of the talk: what happened in 2020? For me, like many of you, 2020 was a difficult year. COVID hit, and that changed a lot of what we were doing, not just from an engineering perspective, but from an entire company perspective. For us, the motivation really became making sure that we were lowering our costs and increasing innovation simultaneously. Now, those two things often compete with each other: a lot of the time, increasing innovation is going to increase your costs, and the challenge last year was how to do both simultaneously. So here are a few stats for you from our team. In Q1 of last year, we were spending almost $600,000 per month on compute costs. Prior to COVID happening, that wasn't hugely a concern for us; it was a lot of money, but it wasn't as critical as it became last year, when we really needed to be much more efficient.
>> The second one is flexibility. We were deployed on a single cloud environment, and while we were cloud-ready, and that was great, we wanted to be more flexible. We wanted to be on more cloud environments so that we could reach more customers, and eventually get onto classified networks, extending our customer base as well. From a custom analytics perspective, this is where we get into our traction: last year, over the entire year, we computed 54,000 custom analytics for different users. We wanted to make sure that this number kept steadily increasing despite us trying to lower our costs; we didn't want the lower costs to come at the sacrifice of our user base. Lastly, one particular percentage that I'll say definitely needs to be improved: 75% of our projects never fail. This is where we start to get into the stability of our platform.
>> Now, I'm not saying that 25% of our projects fail. The way we measure this is: if you have a particular project or computation that runs every day, and any one of those runs fails, we count that as a failure, because from an end-user perspective, that's an issue. So this is something that we knew we needed to improve on to make our platform more stable, and it's something that we really focused on last year. So where are we now? Coming out of the COVID valley, we are starting to soar again. Back in April of last year, we actually paused all development across the entire engineering team for about four weeks and had everyone focused on reducing our compute costs in the cloud. We got it down to $200K per month over the period of a few months.
And for the next 12 months, we hit that number every month. This is huge for us; it's extremely important. Like I said, in the COVID time period, costs and operating efficiency were everything, so for us to do that was a huge accomplishment last year and something we'll keep doing going forward. One thing I would actually like to highlight here is what allowed us to do that. First off, being in the cloud and being able to migrate things like that was one part: we were able to use the different cloud services in a more efficient way. We had very detailed tracking of how we were spending, we tightened our data retention policies, and we optimized our processing. However, one additional piece was switching to new technologies: in particular, we migrated to GitLab CI/CD.
And because we use Docker, this was extremely, extremely easy. We didn't have to go build new code, containers, or repositories, or change our code in order to do this. We were simply able to migrate the containers over and start using the new CI/CD.
In fact, we were able to do that migration with three engineers in just two weeks. From a cloud environment and flexibility standpoint, we're now operating in two different clouds: over the last nine months, we were able to stand up and operate in a second cloud environment. And again, this is something that Docker helped with incredibly. We didn't have to go and build all new interfaces to all the different services or tools in the next cloud provider. All we had to do was build a base cloud infrastructure that abstracts away all the different details of the cloud provider.
And then our Docker containers just worked: we could move them to another environment, bring them up and running, and our platform was ready to go. From a traction perspective, we're about a third of the way through the year at this point, and we've already exceeded the number of custom analytics we produced last year. This is thanks to a ton more algorithms, a whole suite of new analytics that we've been able to build over the past 12 months and will continue to build going forward. It's a really great outcome for us, because we were able to show that our costs stay down while our analytics and our customer traction keep growing. From a stability perspective, we improved from 75% to 86%; not quite yet at three nines or four nines, but we are getting there. And this is actually thanks to containerizing and modularizing different pieces of our platform so that we can scale up in different areas. This is what allowed us to increase that stability: this piece of the code works over here and talks over an interface to the rest of the system, we can scale this piece up separately from the rest of the system, and that lets us much more easily identify issues in the system, fix those, and then correct the system overall.
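A migration like that tends to be small precisely because the CI jobs just run the team's existing containers. A minimal sketch of such a pipeline follows; the stages, test script, and image tags are hypothetical, while `CI_REGISTRY_IMAGE` and `CI_COMMIT_SHORT_SHA` are standard predefined GitLab CI variables:

```yaml
# .gitlab-ci.yml (sketch): build and push the same image the team
# already ships, then run tests inside it; no application code changes.
stages:
  - build
  - test

build-image:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

unit-tests:
  stage: test
  image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  script:
    - ./run_tests.sh
```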
So basically, that's a summary of where we were last year, where we are now, and how much more successful we are now because of the issues that we went through last year, largely brought on by COVID.
>> This is just a screenshot of our solution actually working on supply chain. In particular, it is showing traceability for a distribution warehouse in Salt Lake City. It's right in the center of the screen here; you can see the nice orange-red center. That's the distribution warehouse, and all the lines and dots outside of it are showing where people and trucks are moving from that location. This is really helpful for supply chain companies, because they can start to identify where their suppliers are coming from or where their distributors are going to. So with that, I want to say thanks again for following along, and enjoy the rest of DockerCon.
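The pattern that runs through this talk, turning raw geospatial counts into a time series and then flagging the anomalies, can be sketched with a simple rolling z-score. This is an illustrative stand-in, not Orbital Insight's actual algorithm; the window size, threshold, and sample counts are invented:

```python
def rolling_zscore_anomalies(series, window=7, threshold=3.0):
    """Flag indices where a value deviates from the trailing window's mean
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = sum(past) / window
        variance = sum((x - mean) ** 2 for x in past) / window
        std = variance ** 0.5
        if std > 0 and abs(series[i] - mean) > threshold * std:
            anomalies.append(i)
    return anomalies

# Daily truck counts at a facility: steady, then a supply-chain disruption.
counts = [100, 102, 98, 101, 99, 103, 100, 97, 102, 15, 101, 99]
print(rolling_zscore_anomalies(counts))  # the sharp drop at index 9 is flagged
```

A production system would use a smarter seasonal model, but the shape is the same: a baseline per AOI, a deviation measure, and a threshold that separates noise from a disruption.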
A Day in the Life of a Data Scientist
>> Hello, everyone. Welcome to the "A Day in the Life of a Data Scientist" talk. My name is Terry Chang; I'm a data scientist on the HPE Ezmeral Container Platform team. With me in the chat room, moderating the chat, I have Matt Maccaux as well as Doug Tackett, and we're going to dive straight into what we can do with the Ezmeral Container Platform and how we can support the role of a data scientist.
>> So, just a quick agenda: I'm going to do some introductions and set the context of what we're going to talk about, and then we're actually going to dive straight into the Ezmeral Container Platform itself. We're going to walk straight through what a data scientist will do, pretty much a day in the life of a data scientist, and then we'll have some question and answer. Big data has been the talk of the last decade or so, and with big data, there are a lot of ways to derive meaning. A lot of businesses are trying to utilize their applications and optimize every decision with those applications by utilizing data. Previously there was a lot of focus on data analytics, but recently we've seen a lot of data being used for machine learning: taking any data they can and sending it off to the data scientists to start doing some modeling and prediction.
So that's where we're seeing modern businesses rooted in analytics, and data science itself is a team sport. We need more than data scientists to do all this modeling. We need data engineers to take the data, massage the data, and do some data manipulation in order to get it right for the data scientists. We have data analysts who monitor the models, and we have the data scientists themselves, who build and iterate through multiple different models until they find one that is satisfactory to the business needs.
Then, once they're done, they can send it off to the software engineers, who will actually build it out into their application, whether it's a mobile app or a web app. And then we have the operations team assigning the resources and monitoring it as well.
>> So we're really seeing data science as a team sport; it requires a lot of different expertise. And here's the basic machine learning pipeline that we see in the industry now. At the top we have the training environment, and this is an entire loop: we'll have some registration, we'll have some inferencing, and at the center of it all is the data prep, as well as your repositories, whether for your data or for your GitHub repository, things of that sort. So we're seeing the machine learning industry follow this very basic pattern. At a high level, and I'll glance through this very quickly, this is what the machine learning pipeline looks like on the Ezmeral Container Platform. At the top left we have our project repository, which is our persistent storage. We'll have some training clusters, a notebook, an inference deployment engine, and a REST API, all sitting on top of a Kubernetes cluster. And the benefit of the container platform is that all of this is abstracted away from the data scientist. So I will actually go straight into that. Just to preface, before we go into the Ezmeral Container Platform: the example we're going to look at is a machine learning problem of trying to predict how long a specific taxi ride will take. With a Jupyter notebook, the data scientist can take all of this data, do their data manipulation, train a model on a specific set of features, such as the location and duration of past taxi rides, and then use the model to figure out what kind of prediction we can get for a future taxi ride.
>> So that's the example we will talk through today. I'm going to hop out of my slides and jump into my web browser. Let me zoom in on this. Here I have a Jupyter environment, and this is all running on the container platform; all I need is this link, and I can access my environment. As a data scientist, I can grab this link from my IT admin or my system administrator and quickly start iterating and coding. On the left-hand side of Jupyter, we have a file directory structure. This is already synced up to my Git repository, which I will show in a little bit on the container platform, so I can quickly pull any files that are in my GitHub repository; I can even push with a button here. And I can open up this Python notebook.
With all the unique features of the Jupyter environment, I can start coding. Each of these cells can run Python code, and specifically, on the Ezmeral Container Platform team we've built our own in-house line magic commands. These are unique commands that we can use to interact with the underlying infrastructure of the container platform. The first line magic command I want to mention is %attachments. When I run this command, I get the available training clusters that I can send training jobs to. This specific notebook has pretty much been created for me to quickly iterate and develop a model; I don't have to use all the resources, and I don't have to allocate a full set of GPU boxes to my little Jupyter environment. With the training clusters, I can attach these individual data science notebooks to those clusters, and the data scientists can utilize those resources as a shared environment.
So, essentially, the shared large eight-GPU box can actually be shared; it doesn't have to be allocated to a single data scientist. Moving on.
We have another magic command, this time a cell magic, for training; this is how we're going to utilize that training cluster. I prepend the cell with %% and the name of the training cluster, and this tells the notebook to send the entire training cell to be trained on the resources of that training cluster. So the data scientists can quickly iterate on a model, then format that model and all that code into one large cell and send it off to that training cluster. Because the training cluster is actually located somewhere else, it has no context of what has been done locally in this notebook, so we have to copy everything into that one large cell. >>As you see here, I'm going to be importing some libraries and start defining some helper functions. I'm going to read in my dataset, and following the typical data science modeling life cycle, we're going to have to take in the data and do some data pre-processing. Maybe the data scientist will do this, maybe the data engineer will do this, but they have access to that data. So here I'm actually reading in the data from the project repository; I'll talk about this a little bit later, but all of the clusters within the container platform have access to a project repository that has been set up using the underlying data fabric. With this, I have some data preprocessing: I'm going to cleanse some of my data where I noticed that maybe something is missing or some data looks funky. >>Maybe the data types aren't correct. This will all happen here in these cells. Once that is done, I can print out that the data is done cleaning, and I can start training my model. Here we have to split our dataset into a test/train data split, so that we have some data for actually training the model and some data to test the model. So I can split my data there.
I create my XGBoost object to start doing my training; XGBoost is a decision-tree-based machine learning algorithm. I'm going to fit my data with this XGBoost algorithm and then do some prediction. In addition, I'm actually going to be tracking some of the metrics and printing them out. These are common metrics that data scientists want to see when they train an algorithm, >>just to see if the accuracy is being improved, if the loss is being improved, or the mean absolute error, things like that. These are all things data scientists want to see. At the end of this training job, I'm going to be saving the model back into the project repository, which we will have access to, and at the very end I will print out the end time. So I can execute that cell, and I've already executed that cell, so you'll see all of these print statements happening here: importing the libraries, the training was run, reading in data, et cetera. All of this has been printed out from that training job. And in order to access that, to glance through it, we get an output with a unique history URL. >>When we send the training job to the training cluster, the training cluster sends back a unique URL, with which we'll use the last line magic command that I want to talk about, called %logs. %logs will actually parse out that response from the training cluster, and we can track in real time what is happening in that training job. So quickly, we can see that the data scientist has a sandbox environment available to them. They have access to their Git repository, and they have access to a project repository from which they can read in some of their data and save the model. So it's a very quick, interactive environment for the data scientists to do all of their work, and it's all provisioned on the HPE Ezmeral Container Platform.
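The shape of that self-contained training cell can be sketched in plain Python. This is a minimal sketch under stated assumptions: the real demo fits an xgboost.XGBRegressor on taxi data read from the project repository, and the cell's first line would be the platform's %% cell magic naming the training cluster. To keep the sketch dependency-free, a tiny invented dataset and a mean-duration baseline stand in for the real data and for XGBoost.

```python
# Sketch of the self-contained "training cell" from the demo. Everything the
# remote cluster needs (imports, helpers, data access) must live in this one
# cell, because the cluster has no context from the local notebook.
# Assumptions: the inline rides and the mean-duration baseline below are
# illustrative stand-ins for the taxi dataset and the XGBoost regressor.
import statistics

def train_test_split(rows, test_ratio=0.25):
    """Deterministic split: every (1/test_ratio)-th row goes to the test set."""
    step = int(1 / test_ratio)
    train, test = [], []
    for i, row in enumerate(rows):
        (test if i % step == 0 else train).append(row)
    return train, test

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical rides: (distance_km, hour_of_day, duration_seconds).
rides = [(2.0, 9, 600), (5.0, 18, 1500), (1.0, 3, 300), (8.0, 17, 2400),
         (3.0, 12, 900), (6.0, 8, 1800), (2.5, 22, 700), (4.0, 14, 1200)]

train, test = train_test_split(rides)
# "Fit" the baseline: memorize the mean duration of the training rides.
mean_duration = statistics.mean(r[2] for r in train)
predictions = [mean_duration] * len(test)
mae = mean_absolute_error([r[2] for r in test], predictions)
print(f"training done, test MAE: {mae:.1f} seconds")
# prints: training done, test MAE: 566.7 seconds
# The real cell would finish by saving the fitted model back into the
# project repository so the deployment cluster can later serve it.
```

The point of the structure, not the baseline model, is what matters here: because the cell runs remotely, it imports its own libraries, loads its own data, and saves its own output.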
And it's also abstracted away. Here, I want to mention again that this URL is being surfaced through the container platform. >>The data scientist doesn't have to interact with that at all. But let's take a step back: this is the day-to-day in the life of the data scientist. Now let's go backwards into the container platform and walk through how it was all set up for them. Here is my login page to the container platform. I'm going to log in as my user, and this brings me to the view of the MLOps tenant within the container platform. This is where everything has been set up for me; the data scientist doesn't have to see this if they don't need to. What I'll walk through now are the topics I mentioned previously that we would come back to. First is the project repository. This project repository comes with each tenant that is created on the platform. >>It is nothing more than a shared, collaborative workspace environment, in which any data scientist who is allocated to this tenant has a client where they can visually see all of their data and all of their code. This is actually taking a piece of the underlying data fabric and using it for your project repository. You can see here I have some code, I can create and see my scoring script, and I can see the models that have been created within this tenant. So it's a pretty powerful tool in which you can store your code, store any of your data, and have the ability to read and write from any of your Jupyter environments or any of your created clusters within this tenant. A very cool addition, in which you can quickly interact with your data. >>The next thing I want to show is the source control. Here is where you would plug in all of your information for your source control.
If I edit this, you'll actually see all the information that I've passed in to configure the source control. On the back end, the container platform will take these credentials and connect the Jupyter notebooks you create within this tenant to that Git repository. So this is the information that I've passed in; if GitHub is not of interest, we also have support for Bitbucket here as well. Next, I want to show you that we do have these notebook environments. The notebook environment was created here, and you can see that I have a notebook called "Terry notebook," and this is all running on the Kubernetes environment within the container platform. Either the data scientist can come here and create their notebook, or their project admin can create the notebook for them. >>All you'd have to do is come here to these notebook endpoints. The container platform will map the notebook service to a specific port, and you can just give this link to the data scientist. This link will bring them to their own Jupyter environment, and they can start doing all of their modeling, just as I showed in that previous Jupyter environment. Next, I want to show the training cluster. This is the training cluster that was created, to which I can attach my notebook to start utilizing those training resources. And the last thing I want to show is the deployment cluster. Once a model has been saved, we have a model registry in which we can register the model into the platform, and then the last step is to create a deployment cluster. Here on my screen I have a deployment cluster called "taxi deployment." >>All these serving endpoints have been configured for me, and most importantly, this model endpoint. The deployment cluster will actually wrap the trained model with a Flask wrapper and add a REST endpoint to it.
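Scoring a deployed model like this amounts to one HTTP POST against the serving endpoint. The endpoint URL, payload shape, and feature names in the sketch below are hypothetical stand-ins (the real ones come from the deployment cluster's serving endpoint and the model's feature set); the sketch only builds the request body and an equivalent curl command, without making a network call:

```python
# Sketch of building a scoring request for the model's REST endpoint.
# Assumptions: the URL, the {"scoring_args": ...} payload shape, and the
# feature names are all invented for illustration.
import json

# Hypothetical serving endpoint, surfaced by the "taxi deployment" cluster.
ENDPOINT = "https://gateway.example.com:10001/taxi-deployment/model"

def build_scoring_request(features):
    """Return the JSON body and an equivalent curl command for one prediction."""
    body = json.dumps({"scoring_args": features})
    curl = (f"curl -X POST -H 'Content-Type: application/json' "
            f"-d '{body}' {ENDPOINT}")
    return body, curl

body, curl = build_scoring_request(
    {"pickup_location": "midtown", "hour_of_day": 17, "distance_km": 5.0}
)
print(curl)
# Sending this with curl or Postman would return the predicted ride
# duration -- about 2,600 seconds for the ride shown in the demo.
```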
I can operationalize my model by taking this endpoint and creating a curl command, or even a POST request. Here I have my trusty Postman tool, in which I can format a POST request. I've taken that endpoint from the container platform and formatted my body right here; these are some of the features that I want to send to that model. I want to know how long this specific taxi ride, at this location, at this time of day, would take. So I can go ahead and send that request, and quickly I will get an output: the ride >>duration will take about 2,600 seconds. So we've walked through how a data scientist can quickly interact with their notebook and train their model. And then, coming into the platform, we saw the project repository, we saw the source control, we can register the model within the platform, and then quickly we can operationalize that model with our deployment cluster and have our model up and running and available for inference. So that wraps up the demo. I'm going to pass it back to Doug and Matt and see if they want to come off mute and see if there are any questions. Matt, Doug, are you there? Okay. >>Yeah. Hey, hey Terry, sorry. Just had some trouble getting off mute there. No, that was an excellent presentation. And I think there are generally some questions that come up when I talk to customers around how integrated into the Kubernetes ecosystem this capability is, and where does this sort of Ezmeral offering stop and the open source technologies, like Kubeflow as an example, begin? >>Yeah, sure, Matt. So this is one layer up. We have our MLOps tenant, and this is all running on a piece of a Kubernetes cluster. If I log back out and go into the site admin view, this is where you would see all the Kubernetes clusters being created. And it's all abstracted away from the data scientists; they don't have to know Kubernetes.
They just interact with the platform if they want to. But here in the site admin view, I have this Kubernetes dashboard, and on the left-hand side I have all my Kubernetes sections. If I just add some compute hosts, whether they're VMs or cloud compute hosts like EC2 hosts, we can have these resources abstracted away from us and then create a Kubernetes cluster. Moving on down, I have created this Kubernetes cluster utilizing those resources. >>If I go ahead and edit this cluster, you'll actually see that I have these hosts, and with a simple click-and-drag method I can move different hosts around to configure my Kubernetes cluster. Once my Kubernetes cluster is configured, I can then create a Kubernetes tenant, or in this case, a namespace. Once I have this namespace available, I can go into that tenant, and as my user, I don't actually see that it is running on Kubernetes. In addition, with our MLOps tenants, you have the ability to bootstrap Kubeflow. Kubeflow is an open source machine learning framework that runs on Kubernetes, and we have the ability to link that up as well. So, coming back to my MLOps tenant: what I showed is the HPE Ezmeral Container Platform version of MLOps, but you see here we've also integrated Kubeflow, so a real nod to HPE's contribution to utilizing open source. It's actually all configured within our platform. So, hopefully... >>Yeah, actually, Terry, can you hear me? It's Doug. So there were a couple of other questions that came in, actually about Kubeflow. I wonder whether you could just comment on why we've chosen Kubeflow, because I know there was a question about MLflow instead, and what the differences are between MLflow and Kubeflow. >>Yeah, sure. So, just to reiterate, there were some questions about Kubeflow, and... >>Yeah, so obviously one of the people watching saw the Kubeflow dashboard there, I guess,
and couldn't help but get excited about it. But there was another question about MLflow versus Kubeflow, and what the difference is between them. >>Yeah. So Kubeflow is an open source framework that Google developed. It's a very powerful framework that comes with a lot of other unique tools on Kubernetes. With Kubeflow, you have the ability to launch other notebooks, and you can utilize different Kubernetes operators, like the TensorFlow and PyTorch operators. You can use some of the frameworks within Kubeflow to do training, like Kubeflow Pipelines, which visually allow you to see your training jobs within Kubeflow. It also has a plethora of different serving mechanisms, such as Seldon, for deploying your machine learning models; you have KFServing, you have TF Serving. So Kubeflow is a very powerful tool for data scientists to utilize if they want a fully open source, end-to-end stack and know how to use Kubernetes. It's just another way to do your machine learning model development. MLflow, on the other hand, is actually a different piece of the machine learning pipeline: MLflow mainly focuses on model experimentation and comparing different models during training, and it can be used with Kubeflow. >>They're complementary, Terry, I think is what you're saying. Sorry, I know we are dramatically running out of time now. That was a really fantastic demo, thank you very much indeed. >>Exactly. Thank you. So yeah, I think that wraps it up. One last thing I want to mention: there is this slide that I want to show in case you have any other questions. You can visit hpe.com/ezmeral or hpe.com/containerplatform if you have any questions. And that wraps it up. So thank you, guys.
Mik Kersten, Tasktop | BizOps Manifesto Unveiled
>>From around the globe, it's the Cube, with digital coverage of BizOps Manifesto Unveiled, brought to you by the BizOps Coalition. Hey, welcome back, everybody. Jeff here with the Cube. We're coming to you from our Palo Alto studios, and welcome back to this event, the BizOps Manifesto unveiling. So the BizOps Manifesto and the BizOps Coalition have been around for a little while, but today's the big day; that's the big public unveiling. We're excited to have some of the foundational people who put their name on the dotted line, if you will, to support this initiative, to talk about why that initiative is so important. And so our next guest, we're excited to have, is Dr. Mik Kersten. He is the founder and CEO of Tasktop. Mik, great to see you, coming in from Vancouver, Canada, I think, right? >>Yes, great to be here, Jeff. Thank you. >>Absolutely. I hope your air is a little better out there. I know you had some of the worst air of all of us a couple of weeks back, so hopefully things are getting a little better and we get those fires under control. >>Yeah, things have cleared up now, so it's good. It's good to be close to the U.S., and it's going to be nice to have the air clean as well. >>Absolutely. So let's jump into it. You've just been an innovation guy forever, starting way back in the day at Xerox PARC. I was so excited to do an event at Xerox PARC for the first time last year. To me, that represents, along with Bell Labs and some other foundational innovation and technology centers, one of the greatest. So I just wonder if you could share some perspective on getting your start there at Xerox PARC, some of the lessons you learned, and what you've been able to carry forward from those days. >>Yeah, I was fortunate.
I joined Xerox PARC, in the computer science lab there, at a very early point in my career, to work on open source programming languages. Back then, the computer science lab was where some of the inventions around programming and software development came from, names such as object-oriented programming and a lot of what we have around really modern programming language constructs. Those were the teams I had the fortune of working with, and of course, as you know, there's just this DNA of innovation and excitement in the water there. Really, the model was all about changing the way that we work: we were looking at how we could make it ten times easier to write code. This is back in '99. We were looking at new ways of expressing, especially, business concerns, ways of enabling people who want to innovate for their business to express those concerns in code, and to make that ten times easier than what it would otherwise take. So we created a new open source programming language, and we saw some benefits, but not quite what we expected. I then went and joined Charles Simonyi, the former chief architect at Microsoft, who actually got Microsoft Word out of Xerox PARC and into Microsoft and into the hands of Bill Gates, and who was behind the whole Office suite. The vision I was trying to execute, working for him, was to make something like PowerPoint a programming language, to make everything completely visual. And I realized none of this was really working, that there was something else fundamentally wrong, that programming languages, or new ways of building software like we tried to do with Charles around intentional programming, were not enough. >>That was not enough.
So, you know, the Agile movement got started about 20 years ago, and we've seen the rise of DevOps, and really this embracing of sprints, getting away from MRDs and PRDs, those massive definitions of what we're going to build, and long build cycles, toward an iterative process. And that's been going on for a little while. So what was still wrong? What was still missing? Why the BizOps Coalition? Why the BizOps Manifesto? >>Yeah, so I basically think we nailed some of the things at the programming language level; teams can have effective languages and deploy software to the cloud easily now, right? And at the process, collaboration, and planning level, Agile, which formed two decades ago, was being adopted by all the teams I was involved with, and it has really become a solved problem. Agile tools and agile ways of planning are now very mature, and the whole challenge is when organizations try to scale that. And so what I realized is that the way that Agile was scaling across teams, and really scaling from the technology part of the organization to the business, was just completely flawed. The agile teams had one way of doing things, one set of metrics, one set of tools, and the way that the business was working, was planning, was investing in technology, was just completely disconnected, using a whole different set of measures. >>It's pretty interesting, because I think it's pretty clear from the software development teams what they're trying to deliver, because they've got a feature set, right, and they've got bugs, and it's easy to see what they deliver. But it sounds like what you're really honing in on is the disconnect on the business side, in terms of: is it the right investment? Are we getting the right business ROI on this investment? Was that the right feature? Should we be building another feature, or should we be building a completely different product?
So it sounds like a core piece of this is getting the right measurement tools and the right measurement data sets, so that you can make the right decisions in terms of what you're investing in. Nobody has unlimited resources, and ultimately you have to decide what to do, which means you're also deciding what not to do. It sounds like that's a really big piece of this whole effort. >>Yeah, Jeff, that's exactly it. The way that agile teams measure their own way of working is very different from the way that you measure business outcomes. The business outcomes are in terms of how happy your customers are: are you innovating fast enough to keep up with the pace of a rapidly changing economy, a rapidly changing market? Those are all around the customer. And what I learned, on this long journey of supporting many organizations' transformations and having them try to apply those agile and DevOps principles, is that those are not enough. Those measures are technical practices; they measure the technical excellence of bringing code to the market. They don't actually measure business outcomes. And so I realized that where we needed to go was really much more around having these entwined flow metrics that are customer-centric and business-centric and market-centric. >>So I want to shift gears a little bit and talk about your book, because you're also the best-selling author of Project to Product, and you brought up this concept in your book called the Flow Framework. And it's really interesting to me, because flow, on one hand, is workflow and process flow, and that's how things get done: embrace the flow.
On the other hand, everyone now, at a little higher, more existential level, is trying to get into the flow, right, into the workflow, to not be interrupted and get into a state of your highest productivity, your highest comfort. Which flow are you talking about in your book, or is it a little bit of both? >>That's a great question; it's not one I get asked very often, because to me, it's absolutely both. We've learned how to master individual flow; there's a beautiful book by Mihaly Csikszentmihalyi, and a beautiful TED Talk about him as well, about how we can take control of our own flow. So my question, with the book, with Project to Product, was: how can we bring that to entire teams and really entire organizations? How can we have everyone contributing to a customer outcome? And this is really, if you go to the BizOps Manifesto, what it says: focus on outcomes, on using data to drive whether we're delivering those outcomes, rather than on proxy metrics such as how quickly we implemented this feature. It's really: how much value did the customer see from the feature, how quickly did we learn, and how quickly did we use that data to drive to the next outcome? That is what companies like Netflix and Amazon have mastered, and the question is how we get that to every large organization, every IT organization, and make everyone a software innovator. So it's about bringing that concept of flow to these entire value streams. And the fascinating thing is, we've actually seen the data: we've been able to study a lot of value streams, and we see that when flow increases, when organizations deliver value to a customer faster, developers actually become more happy, so things like employee Net Promoter Scores rise. And we've got empirical data for this.
So the beautiful thing to me is that we've actually been able to combine these two things and see the results in the data: when you increase flow to the customer, your developers are more happy. >>I love it, right, because we're all happier when we're in the flow, and we're all more productive when we're in the flow. So that is a great melding of the two concepts. But let's jump into the manifesto itself a little bit. You know, I love that it took this approach of having four key values, and against those, twelve key principles. And I just want to read a couple of these values, because when you read them, they sound pretty obvious, right? Of course you should focus on business outcomes. Of course you should have trust and collaboration. Of course you should have data-based decision-making processes, and not just intuition, or whoever is the loudest person in the room, and of course you should learn and respond and pivot. But what's the value of actually putting them on a piece of paper? Because again, these are all good, positive things, right? When somebody reads these to you, or tells you them, or sticks them on the wall: of course. But unfortunately, "of course" isn't always enough.
And how could quickly can we drive the next business outcome? So, really, the key thing is to move away from those old ways of doing things that funding projects and call centers to actually funding and investing in outcomes and measuring outcomes through these flow metrics, which in the end are your fast feedback for how quickly you're innovating for your customer. So these things do seem, you know, very obvious when you look at them. But the key thing is what you need to stop doing. To focus on these, you need to actually have accurate real time data off how much value your phone to the customer every week, every month, every quarter. And if you don't have that, your decisions are not given on data. If you don't know what your bottle like, it's. And this is something that in the decades of manufacturing car manufacturers, other manufacturers master. They always know where the bottom back in their production processes you ask, uh, random. See, I all want a global 500 company where the bottleneck is, and you won't get it there. Answer. Because there's not that level of understanding. So have to actually follow these principles. You need to know exactly where you follow like is because that's what's making your developers miserable and frustrated on having them context, which on thrash So it. The approach here is important, and we have to stop doing these other things right. >>There's so much. They're a pack. I love it, you know, especially the cloud conversation, because so many people look at it wrong as a cost saving device as opposed to an innovation driver, and they get stuck, they get stuck in the literal. And, you know, I think the same thing always about Moore's law, right? You know, there's a lot of interesting riel tech around Moore's law and the increasing power of microprocessors. 
But the real power, I think in Moore's laws, is the attitudinal change in terms of working in a world where you know that you've got all this power and what will you build and design? E think it's funny to your your comment on the flow in the bottleneck, right? Because because we know manufacturing assumes you fix one bottleneck. You move to your next one, right, You always move to your next point of failure. So if you're not fixing those things, you know you're not. You're not increasing that speed down the line unless you can identify where that bottleneck is, or no matter how Maney improvements you make to the rest of the process, it's still going to get hung up on that one spot. >>That's exactly, and you also make it sound so simple. But again, if you don't have the data driven visibility of where the bottleneck is. And but these bottlenecks are just as you said, if it's just lack, um, all right, so we need to understand is the bottleneck, because our security use air taking too long and stopping us from getting like the customer. If it's that automate that process and then you move on to the next bottleneck, which might actually be that deploy yourself through the clouds is taking too long. But if you don't take that approach of going flow first rather than again the sort of way cost production first you have taken approach of customer centric city, and you only focus on optimizing cost. Your costs will increase and your flow will slow down. And this is just one, these fascinating things. Whereas if you focus on getting back to the customer and reducing your cycles on getting value your flow time from six months to two weeks or 21 week or two event as we see with tech giants, you actually could both lower your costs and get much more value. Of course, get that learning going. So I think I've I've seen all these cloud deployments and modernizations happen that delivered almost no value because there was such a big ball next up front in the process. 
And actually the hosting and the AP testing was not even possible with all of those inefficiencies. So that's why going flow first rather than costs. First, there are projects versus Sochi. >>I love that and and and and it begs, repeating to that right within a subscription economy. You know you're on the hook to deliver value every single month because they're paying you every single month. So if you're not on top of how you delivering value, you're going to get sideways because it's not like, you know, they pay a big down payment and a small maintenance fee every month. But once you're in a subscription relationship, you know you have to constantly be delivering value and upgrading that value because you're constantly taking money from the customers. It's it's such a different kind of relationship, that kind of the classic, you know, Big Bang with the maintenance agreement on the back end really important. >>Yeah, and I think in terms of industry ship, that's it. That's what catalyzed this industry shift is in this SAS that subscription economy. If you're not delivering more and more value to your customers, someone else's and they're winning the business, not you. So one way we know is that divide their customers with great user experiences. Well, that really is based on how many features you delivered or how much. How about how many quality improvements or scaler performance improvements you delivered? So the problem is, and this is what the business manifesto was was the forefront of touch on is, if you can't measure how much value delivered to a customer, what are you measuring? You just back again measuring costs, and that's not a measure of value. So we have to shift quickly away from measuring costs to measuring value to survive in in the subscription economy. Mick, >>we could go for days and days and days. I want to shift gears a little bit into data and and a data driven, um, decision making a data driven organization. 
Because, right, data has been talked about for a long time, the whole big data meme with Hadoop over several years, and data warehouses and data lakes and data oceans and data swamps, and you go on and on. It's not that easy to do, right? And at the same time, the proliferation of data is growing exponentially. We're just around the corner from IoT and 5G. So now the accumulation of data at machine scale, again, is going to overwhelm, and one of the really interesting principles that I wanted to call out and get your take on, right, is: today's organizations generate more data than humans can process, so informed decisions must be augmented by machine learning and artificial intelligence. I wonder if you can, again, you've got some great historical perspective, reflect on how hard it is to get the right data, to get the data in the right context, and then to deliver it to the decision makers, and then trust the decision makers to actually act on the data and push that down, you know, it's kind of this democratization process, into more and more people and more and more frontline jobs, making more and more of these little decisions every day. >> Yeah, and Jeff, I think the front part of what you said is where the promises of big data have completely fallen on their face, into these swamps, as you mentioned. Because if you don't have the data in the right format, if you haven't collected it the right way and modeled it the right way, you can't use human or machine learning on it effectively. And look at the number of data warehouses in a typical enterprise organization; the sheer investment is tremendous, but the amount of intelligence being extracted from those is a very big problem. So the key thing that I've noticed is that if you can model your value streams, so you actually understand how you're innovating, how you're measuring the delivery of value and how long that takes, what is your time to value through these metrics?
Like flow time, you can actually use both, you know, the intelligence that you've got around the table, and push that, as they say, as far as you can out to the organization, but you can actually start using those models to understand, find patterns, and detect bottlenecks that might be surprising, right? You can detect interesting bottlenecks when you shift to work from home. We detected all sorts of interesting bottlenecks in our own organization that were not intuitive to me, that had to do with more senior people being overloaded and creating bottlenecks where they didn't exist before, whereas we thought we were actually an organization that was very good at working from home, because of our open source roots. So the data is highly complex. Software value streams are extremely complicated, and the only way to really get the proper analytics and data is to model it properly, and then to leverage these machine learning and AI techniques that we have. But that front part of what you said is where organizations are just extremely immature, in what I've seen, where they've got data from all the tools, but not modeled in the right way. >> Well, all right, so before I let you go. So you get a business leader, he buys in, he reads the manifesto, he signs on the dotted line, and he says, "Mick, how do I get started? I want to be more aligned with the development teams. You know, I'm in a very competitive space. We need to be putting out new software features and engaging with our customers. I want to be more data-driven. How do I get started?" You know, what's the biggest inhibitor for most people to get started and get some early wins, which we know is always the key to success in any kind of new initiative? >> Right, so I think you can reach out to us through the website, the BizOps Manifesto site, but the key thing is just, it's exactly what you said, Jeff: it's to get started and get the key wins. So take a product value stream.
That's mission-critical. It could be your new mobile web experiences, or part of your cloud modernization platform, or your analytics pipeline. But take that, actually apply these principles to it, and measure the end-to-end flow of value. Make sure you have a flow metric that everyone is on the same page on, right? The people on the development teams, the people in leadership, all the way up to the CEO. And the one where I encourage you to start is actually that end-to-end flow time, right? That is the number one metric. That is how you measure whether you're getting the benefit of your cloud modernization. That is the one metric that even Adrian Cockcroft, one of the people I respect tremendously, put in his cloud metrics for CEOs as the number one way to measure innovation. So basically, take these principles, deploy them on one product value stream, measure end-to-end flow time, and then you'll be well on your path to transforming, and to applying the concepts of Agile and DevOps all the way to the business, to your operating model. >> Well, Mick, really great tips, really fun to catch up. I look forward to a time when we can actually sit across the table and get into this, because I just love the perspective. And, you know, you're very fortunate to have that foundational base coming from Xerox PARC. It's, you know, a very magical place with a magical history. So to incorporate that, and to continue to spread that wealth, you know, good for you, through the book and through your company. So thanks for sharing your insight with us today. >> Thanks so much for having me, Jeff. Absolutely. >> All right. And go to bizopsmanifesto.org. Read it, check it out. If you want to sign it, sign it. They'd love to have you do it. Stay with us for continuing coverage of the unveiling of the BizOps Manifesto on theCUBE. I'm Jeff. Thanks for watching. See you next time.
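The end-to-end flow time metric recommended above can be made concrete with a small sketch. This is a hypothetical illustration, assuming each work item records when work started and when value reached the customer; the field names and dates are invented.

```python
from datetime import datetime

# Each (invented) work item records when it entered the value stream and
# when it was delivered to the customer.
items = [
    {"id": "FEAT-1", "started": datetime(2020, 1, 6),  "delivered": datetime(2020, 1, 20)},
    {"id": "FEAT-2", "started": datetime(2020, 1, 8),  "delivered": datetime(2020, 3, 2)},
    {"id": "FIX-9",  "started": datetime(2020, 2, 10), "delivered": datetime(2020, 2, 14)},
]

def avg_flow_time_days(work_items):
    """Average days from start of work to delivery of value to the customer."""
    durations = [(w["delivered"] - w["started"]).days for w in work_items]
    return sum(durations) / len(durations)

print(avg_flow_time_days(items))   # 24.0
```

Tracking this one number per value stream, and watching it fall from months toward days, is the measurement everyone from the development team to the CEO can agree on.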
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff | PERSON | 0.99+ |
Mick Kirsten | PERSON | 0.99+ |
Jeffrey | PERSON | 0.99+ |
Mik Kersten | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Mick | PERSON | 0.99+ |
12 key principles | QUANTITY | 0.99+ |
10 times | QUANTITY | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Bell Labs | ORGANIZATION | 0.99+ |
Charles | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
six months | QUANTITY | 0.99+ |
Task Top | ORGANIZATION | 0.99+ |
two decades ago | DATE | 0.99+ |
Xerox Park | ORGANIZATION | 0.99+ |
U. S. | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
Bill Gates | PERSON | 0.99+ |
Cockcroft | PERSON | 0.99+ |
Holly | PERSON | 0.99+ |
agile | TITLE | 0.99+ |
two weeks | QUANTITY | 0.99+ |
21 week | QUANTITY | 0.99+ |
one set | QUANTITY | 0.99+ |
one metric | QUANTITY | 0.99+ |
Biz Ops Coalition | ORGANIZATION | 0.99+ |
two concepts | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Xerox Parc | ORGANIZATION | 0.98+ |
Charles Stephanie | PERSON | 0.98+ |
both | QUANTITY | 0.98+ |
two decades decades ago | DATE | 0.98+ |
Vancouver, Canada | LOCATION | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
The Flow Framework | TITLE | 0.97+ |
Ted | PERSON | 0.96+ |
One set | QUANTITY | 0.96+ |
Cube | ORGANIZATION | 0.96+ |
500 company | QUANTITY | 0.96+ |
M. R. D | PERSON | 0.95+ |
Xerox Park | LOCATION | 0.95+ |
first | QUANTITY | 0.93+ |
one spot | QUANTITY | 0.92+ |
P. R. D | PERSON | 0.92+ |
one bottleneck | QUANTITY | 0.92+ |
Sochi | ORGANIZATION | 0.91+ |
Agile | TITLE | 0.91+ |
about 20 years ago | DATE | 0.9+ |
last decade | DATE | 0.9+ |
decades | QUANTITY | 0.88+ |
single month | QUANTITY | 0.88+ |
Moore | PERSON | 0.87+ |
Xerox | ORGANIZATION | 0.87+ |
first time | QUANTITY | 0.87+ |
Arabic | OTHER | 0.86+ |
four key values | QUANTITY | 0.83+ |
Opps Manifesto | EVENT | 0.82+ |
Big Bang | EVENT | 0.8+ |
every | QUANTITY | 0.76+ |
Tasktop | ORGANIZATION | 0.72+ |
couple of weeks | DATE | 0.7+ |
couple | QUANTITY | 0.69+ |
Ramin Sayar, Sumo Logic | AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel along with its ecosystem partners. >> Welcome back to the eighth year of AWS re:Invent. It's 2019. There's over 60,000 in attendance. Seventh year of theCUBE. Wall-to-wall coverage, covering all the angles of this broad and massively-growing ecosystem. I am Stu Miniman. My co-host is Justin Warren, and one of our Cube alumni is back on the program. Ramin Sayar, who is the president and CEO of Sumo Logic. >> Stu: Booth always at the front of the expo hall. I think anybody that's come to this show has one of the Sumo-- >> Squishies. >> Stu: Squish dolls there. I remember a number of years you actually had live sumos-- >> Again this year. >> At the event, so you know, bring us, the sixth year you've been at the show, give us a little bit of the vibe and your experience so far. >> Yeah, I mean, naturally when you've been here so many times, it's interesting to be back, not only as a practitioner who attended this many years ago, but now as a partner of AWS, and seeing not only our own community growth in terms of Sumo Logic, but also the community in general that we're here to see. You know, it's a good mix of practitioners and business folks, from DevOps to security and much, much more, and as we were talking right before the show, the vendors here are so different now than they were three years ago, let alone six years ago. So, it's nice to see. >> All right, a lot of news from Amazon. Anything specific jump out at you from their side? And I know Sumo Logic has had some announcements this week. >> Yeah, I mean, true to Amazon, there's always a lot of announcements, and, you know, what we see is customers need time to understand and digest that. There's a lot of confusion, but, you know, selfishly speaking from the Sumo side, you know, we continue to be a strong AWS partner.
We announced another set of services along with AWS at this event. We've got some new competencies for containers, because that's a big aspect of what customers are doing today with microservices, and obviously we announced some new capabilities around our security intelligence offering, specifically for CloudTrail, because that's becoming a really important aspect of a lot of customers' maturation of cloud and also operating in the cloud in this new world. >> Justin: So walk us through what customers are using CloudTrail to do, and how the Sumo Logic connection to CloudTrail actually helps them with what they're trying to do. >> Well, first and foremost, it's important to understand what Sumo does, and then the context of CloudTrail and other services. You know, we started roughly a decade ago with AWS, and we built an intelligence platform on top of AWS that allows us to deal with the vast amount of unstructured data in specific use cases. So one very common use case, very applicable to the users here, is around the DevOps teams. The DevOps teams are having a much more complicated and difficult time today understanding and ascertaining where trouble, where problems, reside, and how to go troubleshoot those. It's not just about a siloed monitoring tool. That's just not enough. It doesn't have the analytics or intelligence. It's about understanding all the data, from CloudTrail, from EC2, and non-AWS services, so you can appropriately understand these new modern apps that are dependent on these microservices and architectures, and what's really causing the performance issue, the availability issue, and, God forbid, a security or breach issue, and that's a unique thing that Sumo provides, unlike others here. >> Justin: Yeah, now I believe you've actually extended the Sumo support beyond CloudTrail and into some of the Kubernetes services that Amazon offers, like EKS, and you also, I believe it's ECS FireLens support?
>> Ramin: Yeah, so, that's just a continuation of a lot of the stuff we've done with respect to our analytics platform. You know, we introduced some things earlier this year at re:Inforce with AWS as well, around VPC Flow Logs and the like, and this is a continuation now for CloudTrail. And really what it helps our customers and end users do is better and more proactively detect potential issues, respond to those security issues, and, more importantly, automate the resolution process, and that's what's really key for our users, because they're inundated with false positives all the time, whether it's on the ops side, let alone the security side. So Sumo Logic is very unique, back to our value prop, in providing a horizontal platform across all these different use cases: one being ops, two being cybersecurity and threat, and three being line-of-business users who are trying to understand what their own users on their digital apps are doing with their services and how to better deliver value. >> Justin: Now, automation is so important when you've got this scope and scale of cloud and the pace of innovation that's happening with all the technology that's around us here at the show, so the automation side of things, I think, is a little bit underappreciated this year. We're talking about transformation, and we're talking about AI and ML. I think the automation piece is a little bit underestimated from this year's show. What do you think about that? >> Yeah, I mean, our philosophy all along has been, you can't automate without AI and ML, and it's a proven fact that, you know, by next year, machine data growth is going to be 16 zettabytes. By 2025, it's going to be 75 zettabytes of data.
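The detect-and-respond pattern described above for CloudTrail can be illustrated with a toy detector. This is not Sumo Logic's implementation or API, just a hedged sketch over simplified stand-ins for CloudTrail records (real events carry fields like `eventName`, `sourceIPAddress`, and `errorMessage`, among many others).

```python
from collections import Counter

# Simplified stand-ins for CloudTrail events; the values are invented.
events = [
    {"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.7",  "errorMessage": "Failed authentication"},
    {"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.7",  "errorMessage": "Failed authentication"},
    {"eventName": "ConsoleLogin", "sourceIPAddress": "198.51.100.2", "errorMessage": None},
    {"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.7",  "errorMessage": "Failed authentication"},
]

def suspicious_ips(records, threshold=3):
    """Flag source IPs with at least `threshold` failed ConsoleLogin events."""
    fails = Counter(
        r["sourceIPAddress"]
        for r in records
        if r["eventName"] == "ConsoleLogin" and r["errorMessage"]
    )
    return [ip for ip, n in fails.items() if n >= threshold]

print(suspicious_ips(events))   # ['203.0.113.7']
```

In a real pipeline, the flagged IPs would feed an automated response (block, alert, open a ticket) rather than a print statement, and that automation of the resolution step is the point being made above.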
Okay, while that's really impressive in terms of volume of data, the challenge is the tsunami of data that's being generated: how to decipher what's an important aspect and what's not. So you first have to understand, from the streaming data services, how to dynamically, schema-on-read, analyze that data, then be able to put it in context for those use cases I talked about, and then drive automated remediation. So it's a multifaceted problem that we've been solving for nearly a decade. In a given day, we're analyzing several hundred petabytes of data, right? And we're trying to distill it down to the most important aspects for you, for your particular role and your responsibility. >> Stu: Yeah, we've talked a lot about transformation at this show, and one of the big challenges for customers is, they're going through that application modernization journey. I wonder if you could bring us inside some of your customers: you know, where are they having success, and where are some of the bottlenecks slowing them down from moving along on this transformation journey? >> Yeah, so, it's interesting because, whether you're a cloud-native company like Sumo Logic, or you're aspiring to be a cloud-native company, or a cloud-first project going through migration, you have similar problems. It's now become a machine-scale problem, not a human-scale problem, back to the data growth, right? And so, some of our customers, regardless of their maturation, are really trying to understand, you know, as they embark on these digital transformations, how do they solve what we call the intelligence gap? And that is: because there are so many silos across enterprise organizations today, across development, operations, IT, security, and lines of business, getting data in its context and in its completeness is hard, and it's creating more complexity for our customers.
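The schema-on-read approach mentioned above, leaving log lines unstructured on ingest and extracting fields only when a query runs, can be sketched in a toy form. The log format, field names, and query helper here are invented for illustration; this is not Sumo Logic's query language.

```python
import re

# Raw, unstructured log lines; no schema is imposed at ingest time.
raw_logs = [
    "2019-12-03T10:00:01Z level=error service=checkout latency_ms=950",
    "2019-12-03T10:00:02Z level=info service=search latency_ms=12",
    "2019-12-03T10:00:03Z level=error service=checkout latency_ms=870",
]

def query(logs, **where):
    """Parse key=value pairs at read time, then filter on the given fields."""
    results = []
    for line in logs:
        fields = dict(re.findall(r"(\w+)=(\S+)", line))
        if all(fields.get(k) == v for k, v in where.items()):
            results.append(fields)
    return results

errors = query(raw_logs, level="error", service="checkout")
print(len(errors))   # 2
```

Because the schema is applied per query, the same raw stream can serve the ops, security, and line-of-business use cases described earlier without re-ingesting the data.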
So, what Sumo tries to help customers do is solve that intelligence gap in this new intelligence economy, by providing an intelligence platform we call "continuous intelligence." So what do customers do? Some of our customers use Sumo to monitor and troubleshoot their cloud workloads. So whether it's, you know, the Netflix team themselves, right, because they're born and bred in the cloud, or it's Hudl, who's trying to provide, you know, analytics and intelligence for players and coaches, right, to insurance companies that are going through the migration journey to the cloud, Hartford Insurance, New York Life, to sports and media companies, Major League Baseball, with the whole cyber SOC and what they're trying to do there on the backs of Sumo, to even trucking companies like Paccar, who are working on driverless, autonomous trucks. It doesn't matter what industry you're in; everyone is trying to go through digital transformation or be disrupted. Everyone's trying to gain that intelligence, or not just be left behind but be lapped. And so what Sumo really helps them do is provide one single intelligence platform across dev, sec, and ops, bringing these teams together to collaborate much more efficiently and effectively through the true multi-tenant SaaS platform that we've optimized for 10 years on AWS. >> Justin: So we heard from Andy yesterday that one of the important ways to drive that transformational change is to actually have the top-down support for it. So you mentioned that you're able to provide that one layer across multiple different teams who traditionally haven't worked that well together, so what are you seeing with customers when they put in Sumo Logic? Where does that transformational change come from? Are we seeing top-down driven change?
Is that where customers come from? Or is it a little bit more bottom-up, where you have developers and operations and security all trying to work together, and then that bubbles up to the rest of the organization? >> Ramin: Well, it's interesting, it's both for us, because a lot of times it depends on the size of the organization and where the responsibilities reside. So naturally, in a larger enterprise, where there's a lot of force of mass because of the different siloed organizations, you often have to start with the CISO, and we make sure the CISO is a transformation agent. If they are the transformation agent, then we partner with them to really help get a handle on, and control of, their cybersecurity and threat posture, and then he or she typically sponsors us into other parts of the lines of business, the DevOps teams, like, for example, we've seen with Hartford Insurance, right, or what we saw with F5 Networks and many more. But then there's a flip side of that, where we actually start in, let's use another example, you know, with Hearst Media, right. They actually started because they were doing a lift-and-shift to the cloud, and their DevOps team, in one line of business, started with Sumo and expanded the usage and growth. They migrated 32 applications over to AWS, and then suddenly the security teams got wind of it, and then we went top-down. A great example of starting, you know, bottom-up in the case of Hearst, or top-down in the case of other examples. So, the trick here is, as we look at embarking upon these journeys with our customers, we try to figure out which technology partners they are using. It's not only the cloud provider, but also which traditional on-premises tools versus potentially cloud-native services and SaaS applications they're adopting. Second is, which sort of organizational models are they adopting? So, a lot of people talk about DevOps.
They don't practice DevOps, and you can understand that very quickly by asking them, "What tools are you using? Are you using GitHub, Jenkins, Artifactory? Are you using all these other tools, and how are you actually getting visibility into your pipeline, and is that actually speeding the delivery of services and digital applications, yes or no?" It's a very binary answer, and if they can't answer that, you know they're aspiring to be. So therefore, it's a consultative sale for us in that mode. If they're already embarking upon that, however, then we use a different approach, where we're trying to understand how they're challenged, what they're challenged with, and show them other customers, and then it's really more of a partnership. Does that make sense? >> Justin: Yeah, makes perfect sense to me. >> So, one of the debates we had coming into this show is, there's a lot of discussion about multicloud around the industry. Of course, Amazon doesn't talk specifically about multicloud all that much. If you look historically, attempts to manage lots of different environments under a single pane of glass, we always say "pane is spelled P-A-I-N" when you try to do that. There's been great success: if you look at VMware in the data center, VMware didn't cover the entire environment, but vCenter was the center of your, you know, admin's world, and you would have edge cases to manage some of the other environments. It feels that AWS is extending their footprint with things like Outposts into those environments, but there are lots of things that won't be on Amazon, whether it be a second cloud provider, my legacy data center pieces, or anything else. Sounds like you touch many of the pieces, so I'm curious if you'd just weigh in on what you hear from customers: how do they get their arms around the heterogeneous mess that IT traditionally is, and what do we need to do as an industry to make things better?
>> You know, for a long time, many companies have been bi-modal, and now they're tri-modal, right? Meaning that, you know, they have their traditional and their new aspects of IT, and now they have a third leg of complexity in that stool, which is public cloud. And so, it's a reality, regardless of Amazon or GCP or Azure, that customers want flexibility and choice, and in fact, we see that with our own data. Every year, as you guys well know, we put out an intelligence report that actually shows, year over year, the adoption of various technologies, and the adoption of technologies used across one cloud provider versus multiple cloud providers. And earlier this year, in September, when we put the new release of the report out, we saw that year over year there was more than 2x growth in the use of Kubernetes in production, and almost 3x growth year over year in the use of Kubernetes across multiple cloud providers. That tells you something. That tells you that they don't want lock-in. That tells you that they also want choice. That tells you that they're trying to abstract away from the IaaS layer, the infrastructure-as-a-service layer, so they have portability, so to speak, across different types of providers, for the different types of workload needs, as well as the data sovereignty needs they have to constantly manage because of regulatory requirements, compliance requirements, and the like. And so, this actually benefits someone like Sumo, to provide that agnostic platform to customers so they can have the choice, but also, most importantly, the value. And this is something that we announced also at this event, where we introduced editions to our Cloud Flex licensing model that allow you not only to address multiple tiers of data, but also to have choice of where you run those workloads, and choice of different types of data for different types of use cases at different cost models.
So again, delivering on that need for customers to have flexibility and choice, as well as, you know, the promise of options to move workloads from provider to provider without having to worry about the headache of compliance and audit and security requirements, 'cause that's what Sumo uniquely does versus point tools. >> Well, Ramin, I think that's a perfect point to end on. Thank you so much for joining us again. >> Thanks for having me. >> Stu: And looking forward to catching up with Sumo in the future. >> Great to be here. >> All right, we're at the midway point of three days, wall-to-wall coverage here in Las Vegas. AWS re:Invent 2019. He's Justin Warren, I'm Stu Miniman, and you're watching theCUBE. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Justin Warren | PERSON | 0.99+ |
Ramin Sayar | PERSON | 0.99+ |
Justin | PERSON | 0.99+ |
Ramin | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Andy | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Packard | ORGANIZATION | 0.99+ |
Hartford Insurance | ORGANIZATION | 0.99+ |
Hearst Media | ORGANIZATION | 0.99+ |
F5 Networks | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
Sumo Logic | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
16 zettabytes | QUANTITY | 0.99+ |
2025 | DATE | 0.99+ |
New York Life | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
32 applications | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Sumo | ORGANIZATION | 0.99+ |
eighth year | QUANTITY | 0.99+ |
six years ago | DATE | 0.99+ |
Stu | PERSON | 0.98+ |
three | QUANTITY | 0.98+ |
sixth year | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Seventh year | QUANTITY | 0.98+ |
Sumo | PERSON | 0.98+ |
over 60,000 | QUANTITY | 0.97+ |
a decade ago | DATE | 0.97+ |
next year | DATE | 0.97+ |
third leg | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
DevOps | TITLE | 0.97+ |
first | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
more than 2x | QUANTITY | 0.96+ |
second cloud | QUANTITY | 0.96+ |
one layer | QUANTITY | 0.96+ |
Cloud Flex | TITLE | 0.95+ |
AKS | ORGANIZATION | 0.94+ |
one thing | QUANTITY | 0.94+ |
earlier this year | DATE | 0.93+ |
Cube | ORGANIZATION | 0.93+ |
EC2 | TITLE | 0.91+ |
Andy Jassy Keynote Analysis | AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Vinum care along with its ecosystem partners. >> Hello everyone, welcome to theCUBE. We're here live in Las Vegas for AWS re:Invent 2019. I'm John Furrier, your host, with SiliconANGLE's flagship, theCUBE, where we extract the signal from the noise, the leader in event coverage, with Dave Vellante, my co-host, and Justin Warren, tech analyst, Forbes contributor, and Cube guest host. Guys, the keynote from Andy Jassy. First of all, I don't know how he does it. He just continues his march. Loved the live music in there, but a slew of announcements. This is a reinvention of AWS. You can tell that they're essentially trying to go to the next level on what the cloud means and how they're going to bring it to customers. And, you know, they've been criticized for, I won't say falling behind; I would say Microsoft's probably been praised more for catching up, and there's been a lot of discussion around that: the loss of the JEDI contract, a variety of enterprise wins Microsoft has in the field, Salesforce, Google's just kind of retooling. But Amazon is clearly the leader, with a little pressure for the first time in the rearview mirror. They've got someone on their tail, and Microsoft's far back, but this is a statement from Jassy and Amazon of, okay, you want to see the jets? We're going to turn on the jets and blow past everybody. Jassy gets a little cocky himself. Justin, what do you think? >> Yeah, so, a lot of signaling to the enterprise that it's safe to come here, that this is where you can get everything that you need, get everything that you need done, you can get all of it in one place. So there is a real signal there to say: enterprise, if you want to do cloud, there's only one place to do cloud. >> On enterprise customers, they trotted out some big names. Goldman Sachs, not a small enterprise. They had all the classic born-in-the-cloud names, but, you know, we put out this concept in our SiliconANGLE post called reborn in
the cloud, almost born-again enterprise. You start to see the telegraphing of what their core message is, which is: transform. Don't just kick the tires and fall into the Microsoft trap; go with Amazon and transform your business model, transform your business, not just run IT a better way than before. >> Well, yeah, I mean, I'm impressed they got two CEOs, the CEO of Goldman Sachs, David Solomon, and the CEO of Cerner, coming to the show. It's kind of rare that the CEO of your customer comes to the show. I guess the second thing I'd say is, you know, Amazon is not a rinse-and-repeat company at these shows, although they are when it comes to shock and awe. So they ticked the box on shock and awe, but you're right, John, they're talking a lot about transformation. I sort of think of it as disruption. Here's what I would say to that: Amazon has a dual disruption agenda. One is, it's disrupting the horizontal technology stack, and two, it's disrupting industries. It wants to be the platform on which startups in particular, but also incumbents, can disrupt industries, and it's in their DNA because it's in Amazon's DNA. And I think the last thing I'll say is, Amazon retail is the you-can-buy-anything-here store, and now, to your point, Justin, Amazon Web Services is you-can-get-AWS-anywhere: at the edge, in the little mini data centers that they've built with Outposts, and of course in the cloud. >> All right, I want to get you guys' reactions to a couple of things I saw, and I want to just analyze the keynote. One, we saw Jassy come out with the transformation message. That's really more of their posture to the market: you should be transforming; we're going to take Amazon as a center of gravity and push it out to the edge with Outposts. So, kind of a customer and company posture there on the industry. Then you had the announcements, and I thought that SageMaker Studio was pretty robust, a lot of data announcements. So you had the transformation message, a lot of core data, and then they kind of said, hey, we're open, we got
open source databases we got kubernetes and multiple flavors a couple steers from the Twitter crowd on that one and then finally outpost with the edge where they're essentially you know four years ago Dave they said no more data centers in ten years now they're saying we're gonna push Amazon to the your datacenter so you know a posture for the company a lot of data centric data ops almost program and build I'm also DevOps feel to it what's your reaction to that I think the most interesting part for me was the change there was a bit of a shift there I think he made the statement of rather than bringing the data to the computer we want to bring the compute to the data and I think that's that's acknowledging reality that data has gravity and it's very difficult for enterprises particularly if you've already invested a lot in building a data Lake so being able to just pick that up and then move it to any cloud nothing let alone AWS just moving that around is is a big effort so if you're going to transform your business you have to kind of rethink completely how you address some of these issues and one of that would be well what if rather than let's just pick everything up and move it to cloud what if we could actually do something a little bit better than that and we can pick and choose what we want to suit our particular solution and your point Dave I think that's where Amazon strength comes from is it they are the everything store so you can buy whatever you want be at this tiny little piece that only five companies need or the same thing that everyone else on the planet needs you can come and buy everything from us and that's what I think they're trying to signal to an organization that says look if you want to transform and you're concerned that it'll be difficult to do we've got you we've got something here that will suit your needs and we will be able to work with you to transform your business and we're seeing you know Amazon years ago we wouldn't talk about 
hybrid and now they're going really all-in on hybrid and it's not outpost is no longer just this thing they're doing with VMware it's now a fundamental piece of their infrastructure for the edge and I think the key point there is the the edge is going to be one with developers and Amazon is essentially bringing its development platform to the edge without posts as the the underpinning and I like the strategy much much better than I like what I'm seeing from some of the guys like HP and Dell which is they're throwing boxes you know over the fence with really without a strong developer angle your thoughts I mean my my big takeaway was I think this is key knows about a next-generation shift on the business model but that's the transformation he didn't come out and say it I said it in my post but I truly believe if you're not born in the cloud or reborn in the cloud you'll probably be out of business and as a startup were to ask them of the VCS this question how do you go after and target some of those people who aren't gonna be reborn in the cloud to have the scale advantage but the data announcements was really the big story here because we look at DevOps infrastructure as code programming infrastructure we've seen that that that's of now an established practice now you start to see this new concept around data ops some people call it AI ops whatever but Dana now the new programmability it's almost a devops culture - data and I think what got my attention the most was the IDE for stage maker which kind of brings in this cool feature of what everyone was which is I want machine learning but I can't hire anybody and I got to make I got a democratized machine learning I got to make application developers get value out of the data because the apps need to tap the data it's got to be addressable so I think this is a stake in the ground for the next five to ten years of a massive shift from increasing the DevOps mission to add a layer making that manageable multiple 
databases he's totally right on that it's not one database if you want time series for real-time graph for you know network constructs it's pick your database you know that shouldn't be it inhibitor at all I think the data story is real that's the top story in my mind the data future what that's going to enable and then the outpost is just a continuation of Amazon realizing that the center of the cloud is not the end game it's just the center of gravity and I think you gonna start to see edge become really huge I mean I count ten into ten purpose-built databases now and jesse was unequivocal he said you gotta have the right database tool for the right job you're seeing the same thing with their machine learning and AI tools it's been shocking dozens and dozens of services each with their own sort of unique primitives that give you that flexibility and so where you can disagree with the philosophy but their philosophy is very clear we're gonna go very granular and push a lot of stuff out there I think there's two bits at play there that I can see you know I think you're right on the data thing and something that people don't quite realize is that modern data analysis is programming like it's code your data scientists know how to code so there was a lot of talk there about notebooks going in there like they love their notebooks they love using different frameworks to solve different problems and they need to be able to use for this one I need tens of flow for another one I might need MX net yeah so if you couple that that idea that we need to it's all about the data and you couple that with developers and AWS knows developers really really well so you've got modern enterprises lot wanting to do more with the data that they have the age or business problem of I've got all this information I need to process I need to do be out bi I need to do data analysis and you couple that with the Pala that iws has with developers I think it's a pretty strong story then you know in 
my interview with Jesse I asked him the question and I stole the line from Steve Moe Mulaney from aviatrix you take the tea out of cloud native it's cloud naive and I think what I've been seeing is a lot of customers have been naive about what cloud is and it's actually been buying IT and so they really don't are not sensitive to the capabilities message so I asked Jeff see I'm like you got these capabilities that's cool if you want to go to the store and buy everything or look at everything and buy what you want and construct and transform check no problem I buy that however some customers just want a package solution and Amazon has not always been great on having something packaged for customers so he kind of addressed that and this might be an Achilles heel for Amazon as Microsoft has such entrenched sales sales presence that they might be pushing a solution that frankly customers might not care about capabilities we did see one bit where there was a little bit of a nudge towards is fees and and systems integrators and I think that that really for me is there needs to be a lot more work done by Amazon there because that's what Enterprise me enterprise is used to dealing with systems integrators that will help them to use the raw materials that ados provides to solve that promote you said there are two segments of developers and customers one that wants all the low level building blocks and others want simpler faster results with abstractions aka packaging so they're going down the road but again they're not shy don't like hey we're just going to continue to build we're not going to try to move off our trajectory they're gonna stay with adding more power and frankly some digs at snowflake I fought with red shift and I thought the dig to the kubernetes community with we code our own stuff wink wink we don't have to slow down was a nice jab at the CN CF I thought because he's saying hey you know what we're not in committees deciding features which is the customers 
and implementing them so a kind of a jab well sure that's gonna rapid a I would say the snowflake is sort of a copycat separating compute from stores that's what snowflakes has been doing forever but he did take direct jabs at IBM Oracle and obviously Microsoft with with Windows so I like to see that you know usually Jessie doesn't do that it's good take the gloves so much so many announcements out there you got to go to silk and angled comm will have all the stories but one of the top stories coming into the reinvent that we didn't hear anything about but if you squint through and connect the dots on Jessie's keynote it is pretty evident what the strategy is and that's multi-cloud so I'll see multi-cloud is a word that Amazon is not using at all onstage as you can tell they don't really they're in well they're one cloud they don't really care about the other clouds but their customers do so guys multi cloud is a legit conversation how they get multi cloud is debatable acquisition sprawl by the end of the day multiple clouds is reality I think Jessie was kind of predicting and laying down some early narratives around the multi cloud story by saying hey we have more capabilities we're faster we're doing more stuff so I think he's trying to cede the base on the concept of hey if you want to go look at other clouds try to go apples to apples NIT that other than that he didn't really address at all multi-cloud what do you guys think about multi cloud yeah what it's pretty much that if you're gonna have multiple clouds at least one of them's gonna be AWS so they're gonna get some of your money if we came a bi can't get all your money I'll get at least get some of your money that's reasonable but I think part of the multi cloud conversation is that enterprises are actually trying to clarify their existing way of doing things so cloud isn't a destination it's not like a it's not a physical location it's a state of mind it's a way of operating things an enterprise that 
that's that's the transformation part that enterprises are trying to do so transform the way that they operate themselves to be more cloud like so part of the multi cloud piece I think that people are kind of missing is well it's not just Amazon or some of its competitors its existing on-site infrastructure and making that into a cloud which i think is where something like outpost becomes a really strong proposition and I've said a million times multiplied cloud is more of a symptom than it is a strategy that'll start to change they will see an equilibrium there you know right cloud for the right job but today it's a problem that CIOs are asked being asked to clean up the crime scene all right let's wrap up by summarizing the keynote each of you guys give me your take on I'll start I think this was a inflection point for AWS and Jesse in the sense of they now know they have to go the next gen loud it's Amazon enterprise it's data it's outpost it's all these things it's truly next-gen I think this is going to be all about data it's all gonna be about large-scale infrastructure and data scaling and with edge and outpost I think is really an amazing move for them in the sense that's gonna probably put in motion another five to ten years of continuing architectural reshipping and I think that if you're not born in the cloud or reborn in the cloud you're gonna be naive to the fact that you're not gonna have the capabilities to be success when I think that's going to be an opportunity for entrepreneurs and for companies pivoting into enterprises so I think this goes will go might go down as one of the most pax keynotes but I think it'll look back as one of the instrumental transitions for Amazon so I think he did a good job beginning and to rush 30 announcements in three hours marathon but overall I thought he did a great job I think I would agree Jesse always does a good job he's giving a message to you know CEOs as opposed to the CIO and he had two CEOs on stage I 
thought there was quite a gap between you know that message of transformation and then sort of geeking out on all the new services so there's still some work to be done there but I think it's a lot of developers in the audience I'm seeing them tell your boss to get on the train it's a very hard keynote to serve both audiences but so it's a start but there's a lot of work to be done there Justin yeah I agree with that I think this is probably one of the first keynotes maybe last year but certainly this year there's like AWS is very serious about enterprise and is trying to talk to enterprise a lot more than it ever has it still talks to developers but we didn't see anywhere near as much interesting in kind of the startup ecosystem it's like no no cloud is for serious companies doing serious work and I think that we're just going to see Amazon talking about that more and more and more because that's where all the money is yeah next-generation cloud new architectures all about the enterprise guys this is the cube opening day for three days of wall-to-wall coverage keynote analysis from Andy Jessie and Amazon Andy Jessie will be on Thursday at 3 o'clock we got a lot of top Amazon executives will who'll help us open and unpack all these to make mega announcements stay with us for more cube coverage and go to Silicon angle comm cube net for the videos be back back after this short break [Music]
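The "right database for the right job" point in the discussion above — time series for real-time, graph for network constructs — comes down to access patterns. A minimal stdlib sketch (plain data structures, not any AWS service) contrasting the two query shapes each class of purpose-built database is optimized for:

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict, deque

# Time-series pattern: points arrive in time order, queries are range scans.
class TimeSeries:
    def __init__(self):
        self.ts, self.vals = [], []

    def append(self, t, v):          # ordered append is the hot write path
        self.ts.append(t)
        self.vals.append(v)

    def range(self, t0, t1):         # the scan a time-series DB optimizes for
        i, j = bisect_left(self.ts, t0), bisect_right(self.ts, t1)
        return self.vals[i:j]

# Graph pattern: relationships, queried by hopping edge to edge.
class Graph:
    def __init__(self):
        self.adj = defaultdict(set)

    def link(self, a, b):            # undirected edge
        self.adj[a].add(b)
        self.adj[b].add(a)

    def reachable(self, start):      # the traversal a graph DB optimizes for
        seen, q = {start}, deque([start])
        while q:
            for n in self.adj[q.popleft()] - seen:
                seen.add(n)
                q.append(n)
        return seen
```

A general-purpose relational store can emulate either, but it can't make both the ordered range scan and the multi-hop traversal cheap at once — which is the argument for picking the engine per workload.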
Steve Wood, Dell Boomi | VMworld 2019
>> Narrator: From San Francisco, celebrating 10 years of high-tech coverage, it's theCUBE! Covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Hey, welcome back everyone. We're here CUBE live in San Francisco, California, VMworld 2019. We're here in Moscone north lobby. I'm John Furrier with David Vellante, my co-host. Three days of coverage. Our next guest is Steve Wood, chief product officer at Dell Boomi. Steve, thanks for joining us today. Appreciate you coming on. >> Thank you. >> So we got your event coming up in DC. theCUBE will be there covering it. >> Correct, yes. >> We've been following you guys. Interesting opportunity, you're the chief product officer, you got the keys to the kingdom. You're in charge. (laughs) >> Yes sir. Oh yeah, yes. >> Tell us, what products, roadmap, pricing, all the analysis. >> (laughs) >> Take a minute to explain Boomi real quick for the folks that might not fully understand the product idea. >> Sure, yeah, yeah, absolutely. I mean, Boomi is a platform. The goal of the platform is to solve really tough technical challenges that you often meet in order to get to a business outcome of some kind. So if kind of brought that into maybe sharper focus, if you like. So Boomi started its life as an integration vendor. And its main goal is actually making it super easy to integrate your assets across cloud and on-prem. And that was a challenge at the time. A lot of the older integration tools weren't really ready for the cloud. Boomi brought forward this awesome architecture, this distribution architecture of containers that could run anywhere, integrating everything, moving your data around as needed. >> It was visionary. >> It was super visionary. >> I mean, it was early days. I was like, almost pre-cloud. >> Yeah, yeah, yeah. 
And actually, what was the cool thing was that you would have the benefits of cloud computing but you still could run something, like, behind your firewall, which was a really unheard of experience. Which actually starts to sound a lot like today, with Edge. But I'll kibosh that. But then, we sort of expanded into B2B, so you can connect to like, Walmart with all the sort of traditional and sort of modern protocols, kind of stuff that's been around for a while. We launched Hub for data quality, 'cause we felt like, hey, if we're connecting all of your data together, you're probably going to find it's fairly inconsistent. So we have Hub to help you manage your data quality. And then we moved into API management. We've done a huge investment this year to API-enable your integrations, but also API-enable your enterprise. And then possibly my favorite, 'cause it's an acquisition of my company, which I joined Boomi, acquisition of a workflow business. So actually not only provides workflow for people-centric processes, so really the connecting the dots from your devices and things and your infrastructure, on-prem and the cloud, all the way up to your people, driving those end-to-end experiences, but we also use the workflow product to help extend our existing products. >> So you were building a platform in your other company, and now Boomi's also in the same ethos, API-based, DevOps, complete DevOps, kind of no-code, low-code kind of thing. >> Steve: Low-code, yeah, for sure. Absolutely yes. >> What is, so what did you guys jump on, which wave is powering you guys now? Because I look at VMware, for instance, they have all these acquisitions. Their integration's going to be challenging. And just, most enterprises that are not born in the cloud, I mean, their legacy is, they got everything under the sun. And they're not necessarily talking to each other. It's a huge problem. >> No, for sure it is. 
And actually, it's become more of a problem as we move into machine learning and sharing data across the enterprise — giving access to the data, for sure, while ensuring it's controlled. So there's a lot going on. I think also for us, we're seeing obviously data's getting faster, you know. So as I often joke internally, nobody's asking for less data slower. >> (laughs) >> And we don't think that the volumes of data are going down anytime soon. So for us, it continues to be about the data. That for sure is the trend, the fact that it's moving faster, it's needed faster. We're going from batch to streaming, going from, you know, request-response to real-time. >> So what problems do you guys solve? If you had to nail down the problem statement, what is the main problem statement that you guys are addressing today that's most relevant? >> Yeah, the biggest problem is actually, I would say it's just unlocking your data. But in the fastest time possible. So when Boomi kind of, I guess, does well in the market, it's because we bring kind of enterprise creds, we bring you a journey to the cloud, not a cloud-only picture. We're not an on-prem product tryin' to be retrofitted to the cloud. So what customers experience is they get the agility that they expect, so they get the value very, very fast. But they're also kind of ready to kind of make that transition from bein' on-prem, legacy, big vendor type, ERP, massive system to best of breed. And we help them with that change. >> I always say, as Dave and I have been chattin', that really DevOps is about Dev and Ops, right? You want to have a great development environment so you can build those next-gen apps, which by the way, they need data, they need machine learning, all these new things are going on within microservices. It's very compelling, and everyone kind of knows that already. Or they should know it. But the dev scene's lookin' good — CI/CD pipeline, good scene on the dev side. It's the ops side.
(laughs) So I've seen a lot of enterprises really tryin' to catch up their operations, which is why VMware is continuing to do well, because they got operators. So I get that, like, they're not going to shift overnight to the Nirvana. But the role of developing and operating that app is ultimately the core digital transformation. >> Yeah, for sure, for sure. >> John: Your thoughts on that and what you guys are doing? >> Well, part of it also, like, when we looked at, so actually with the acquisition of Flow, I think it was interesting for us because it moved us also to be able to provide apps. So for example, VMware has something called Workspace ONE, which is their onboarding, help the employees onboard within the organization, connecting you to your endpoint applications. We're actually working with them on a similar thing. We have an onboarding solution to help employees onboard faster. But part of, I think, the value that we bring is that apps have traditionally, you know, been something that's heavily coded, they take a long time to do. So from integrations being heavily coded to APIs being heavily coded, and now for us, apps being heavily coded, is we kind of solve those tough types of challenges, everything from like, mobile and offline to APIs that are scalable and robust, through connecting to all of your systems including your things, and having the ability to do that. We kind of solve all of that so you can focus on what, so the true innovation. But like any cloud vendor, even if you leave it alone, it's getting faster, richer, better. So you know, it's unlike, say, coded solutions where they kind of sort of, they're a snapshot of that point in time. And if you leave them alone, they kind of slowly fade away, whereas Boomi is, we're constantly modernizing what you build on our platform. >> So the other piece about digital transformation is the data. And then you're talkin' about your data quality and information quality initiatives. 
That's kind of in the tailwind for you guys. So where does it all fit in terms of digital transformation, data, some of the things you were just talking about, and then the rest of the Dell family, Dell, VMware, how does it all fit together? >> Oh, sure, okay. Yeah, that's a lot. But yeah, I'll see if I can sort of give the gist it. Well so partly actually for us is like, getting data out. It feels like if you're going to transform your business, you kind of need to know what data you have. That feels like a fairly normal thing. But also, and I can't, I'll give you a teaser. We can't say more about it. But one of the things that's been interesting about the data on our platform, our metadata, which is anonymized, we have more customers for the longest time running on our cloud service, which is a multi-tenant service, which means we see how the 9000 plus customers work with other systems. And we have the metadata of how they architect that connectivity across the board, all the way out to people, all the way down to their infrastructure. We can see what's going on. So we've been doing a lot of research. And actually, showing you more about what your business is doing. And we have some really cool announcements coming up at Boomi World. >> So the truth in the data. I'm imagining machine learning. But you get to see the patterns. >> We get to see the patterns. >> Emerging. The signals, there's signals. >> Yes. And we're seeing the patterns not only in what's being built and the structure of what's being built, but how it's operating, how it's being deployed, what's most successful, how those things work. So we have a really interesting sense. So when you're going through a digital transformation, we think we can show you things that you'll not have seen before. >> So what are you showing and to whom are you showing it? >> So it'll be at Boomi World on the first of October >> (laughs) >> In Washington. So I can't say more than that. 
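The pattern-mining idea Steve describes — anonymized metadata from 9,000-plus customers revealing how integrations are commonly architected — can be pictured with a toy sketch. This is not Boomi's actual pipeline, just the shape of the idea: count which connector pairings co-occur across anonymized customer flows and surface the most common ones.

```python
from collections import Counter
from itertools import combinations

def top_patterns(integrations, n=3):
    """integrations: a list of connector-name sets, one per anonymized
    customer flow. Returns the n most common connector pairings seen
    across the whole fleet."""
    pairs = Counter()
    for connectors in integrations:
        # sort so ('a', 'b') and ('b', 'a') count as the same pairing
        for pair in combinations(sorted(connectors), 2):
            pairs[pair] += 1
    return pairs.most_common(n)
```

For example, a fleet where two flows connect an invented "salesforce" connector to a "netsuite" one would surface that pairing first — the kind of signal a platform could feed back to customers as "here's how businesses like yours typically wire this up."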
But we're going to show them some things that our platform can extract for you that we don't think any other vendor's done before. >> And today, how do you visualize that? >> Well, today actually we don't do that much to visualize it, actually. That was actually, so we've been on a real machine learning train for the past couple of years. And as we got really good at understanding the metadata we have, and we've got the data scientists involved, they started showing us more of the art of the possible. So for that I'd say we've been probably remiss in not helping customers more, exposing more of those insights. Obviously, from a transformation perspective, we unlock your data. But we think we can do a lot more. >> So is the Dell relationship largely a go-to-market one? Same question for VMware. >> Well I'd say, like, if you think about Dell, it's like, I guess, I dunno, the sort of unofficial, so the hardware part of the triangle, VMware being the server infrastructure. >> Don't tell them that. >> Yeah, sorry. >> But it's true. (laughs) >> Yeah, sorry Michael. But it's the hardware side. And VMware you've got the kind of infrastructure, DevOps, operational side. And then Boomi brings you the data. And we think that that kind of triangle is what you need to go through a digital transformation, certainly if your title is CIO. >> And Michael Dell's bullish on you guys. He was at your last event we broadcasted. He sees you guys as modern SAAS interface for companies, certainly from a transformational standpoint, as the interface in for integration. >> Yeah, for sure. I mean, it will, I guess some of our performance speaks to that. I mean, we've been a very, very high-performing, I don't want to say we're the number one performing technology in his portfolio, but it's certainly, it's either-- >> Well, you're up and to the right. That quadrant thing. >> Yes, quadrant, yes. >> What's the winning formula? Why are you winning these deals? Why are you winning customers? 
Why are you keeping customers? What's the real value that they're getting out of Boomi? >> So our CMO would want me to say, business outcomes accelerated, which is, hopefully you got that. >> Check, got that down. >> Oh, yeah, yeah. (laughs) >> Gold star for you, go. >> Thank you, thank you. >> Now, the truth. (laughs) >> Now the truth. (laughs) It's actually, but it is time to value. I mean, our customers, that's the, because we've solved the challenges, sure. Other vendors can say, we've solved the challenges too. But we've solved it in a low-code way, and customers see the value very, very quickly. So when we go, you know, head-to-head with a competitor on a deal, you know, like a bake-off if you like, we win pretty much every time. >> Take a minute to explain what low-code is for the folks that are, been debating what low-code is. Been a lot of Twitter wars on this. But explain what low-code is. >> I will give my explanation, sure. So low-code fundamentally is the idea that, you know, I'd say, like, the first phase, almost, of cloud, was like, hey, you're not going to code anything. The new paradigm is it's all point and click. And Salesforce, actually I used to be at Salesforce, I sold my last company to Salesforce. It was all about kind of like, the no-code approach. But I think reality is, it's like, there's different ways in which you can be productive. Sometimes point and click is by far the most productive, but it is not always the most productive way to solve a problem. Sometimes code is by far the most productive way to solve a problem. So when you provide a low-code platform, what you're really thinking about is productivity for everybody, not just the point and click, drag and drop, ease of use, but also productivity for the developers. So when they engage and they're working together to deliver a solution, it's highly productive. >> For instance, wiring up APIs is a great example, or managing containers might be a great use case of low-code. 
No code would be just, you know, more automation behind the simple stuff. But low-code is really more stitching stuff together. >> Yeah. And sometimes people do associate it more with application creation side, but I often think of it as, like, a role thing. If you think about, like, your company, one solution to solve the kind of app gap, or the gap in all the stuff in your backlog that needs to be done, is to hire more IT people. The other way to solve the problem is to empower everybody you have to do more with technology. So I often think about it as like, you know, software eating the world, you know, a lot of people are on the wrong side of that equation. You know, they're-- >> You talk to people who are cloud-native, or born in the cloud, their IT is the developer. I mean, they're the ones managing the configurations, and it's all either scripted away or written code for. What was IT's job? (laughs) >> You say a lot of people on the wrong side of that equation, you mean customers? >> No, I mean, well, people inside the business are often like, you know, they've got a whole bunch of stuff they want to do with technology, but there's a gatekeeper, and that gatekeeper is the developer. And it's not that they want to be a gatekeeper, it's that you need tools to be able to do it. They want to be sure the architecture's right. So low-code platforms are all about kind of bringing more people into the conversation. So I often think about it as like, take the business, and so say, your ideas don't now get translated through a whole bunch of series of weird things, you can now be very engaged in the creation process. >> So it's domain expertise meets coding capability. >> It reminds me of the old 4GL days in the '80s. You know, you had interpreters, scripting languages, kind of higher-level of abstractions. But the underlying language is hardcore, compiler, object code, you know, all that stuff under the covers has to be there, right. 
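One way to picture the low-code split Steve describes — point-and-click productivity for everybody, plus real code where code is the most productive tool — is a flow that exists as declarative data, with individual steps that can drop down to developer-written functions. A minimal hypothetical engine (the step names and API are invented for illustration, not Boomi's product):

```python
# Built-in steps the platform would ship with: the point-and-click palette.
STEPS = {
    "uppercase": lambda rec: {k: v.upper() if isinstance(v, str) else v
                              for k, v in rec.items()},
    "drop_empty": lambda rec: {k: v for k, v in rec.items()
                               if v not in ("", None)},
}

def run_flow(flow, record, custom_steps=None):
    """flow: an ordered list of step names -- the 'low-code' part, which
    could come straight from a drag-and-drop UI. custom_steps: optional
    developer-written functions -- the 'code escape hatch'."""
    registry = {**STEPS, **(custom_steps or {})}
    for name in flow:
        record = registry[name](record)
    return record
```

The flow itself is just data, so a business user can reorder or extend it, while a developer registers a custom step when the palette runs out — which is the productivity-for-everybody point above.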
That's, you're putting that abstraction on top, making it easy to code. >> Yeah, absolutely. 'Cause like, I mean, what you deploy has to be credible. So what the low-code vendors are after is something where an architect would go, love that, that thing is great, I love the way it's put together, it's well-architect, well put together, and I can code around it to finish those last small issues, and kind of, you know, add my shine to it. >> 'Cause they know what they're dealing with. >> Yeah. >> Under the covers, at least. >> Yeah. But a lot of like, you know, the no-code vendors kind of went for architecturally slightly curious routes and didn't necessarily think about the whole picture. >> So you guys are all about dealing with all this complexity, helping people manage that, at least a part. How about some of these new innovations that are comin' out. I mean, the world's crazy about ML, AI, blockchain, you know, all kinds of new automations. Where do you guys fit into that? Is that an opportunity for you? >> Yeah. I mean, well, so machine learning, we're all, oop. Sorry, I tried to spill my water. We're all crazy about machine learning as well. So we're using it a lot, as I mentioned, on our metadata. But also, we see a lot of our customers using our technology to get the data out in order to surface new insights. So for example we've got, like, actually Jack in the Box would be an interesting example of kind of emerging technology. One is that they're using our technology to get data out at the point of sale. So they have to use, our technology is running at the point of sale. They have 2200 plus locations, which means we have to be able to run out there on the edge and process it right at the point of sale. But they're trying to do things like, you know, when you drive up and your license plate is scanned, they know who you are, they go, hey do you want those, that same meal again. 
You know, so they can predict what you want, they can help make suggestions for you. So that's a fantastic example. So, yeah. >> Great edge use cases. I mean, that's awesome. >> And then, which is one of them, but there's also, machine learning for us, we're tied with machine learning. And we are exploring the idea of actually providing machine learning as a service to our customers. That's something we're just, we're sort of eyeing that up as we've been doing more and more internally. But blockchain's the same. And we see customers playing with blockchain all the time. And actually, I guess, our pitch to customers who are looking at emerging technology is we have a group that is looking specifically at emerging technology. And because of our time to value, and because often, emerging technology is like, so what does blockchain mean to, I dunno, well, you guys, theCUBE. >> John: Supply chain. >> Steve: You know, like, how would you use it? You might want to experiment with it. >> We have a CUBEcoin. >> You have a CUBEcoin. >> And we have a reputation protocol, and we have a community software layer. >> It's actually working. >> I would track the supply chain. >> You're going to do it? >> I already built it. (laughing) It's in tech preview right now. >> Okay, well good, good. Hopefully you did it on Boomi, that'd be nice. (laughing) >> No, but I mean like, the success or maybe failure of CUBEcoin, I don't want to call it, but you know. >> It's not a utility token. Well maybe, nah. >> Right. (laughs) But like, a lot of customers want to build to experiment, so time to value's really important. We're solving those problems in those emerging technologies. >> Yeah, rapid application development and DevOps, using containers, APIs, very friendly. >> Try it out and then see, like, does this make sense? >> All right, so you got the event coming up October first to the third in Washington DC. You get a plug for that. >> I might've mentioned it. >> theCUBE will be there. 
You're holdin' back on some of the good stuff. The good items. We'll wait till then. >> Yeah, otherwise, yeah. Wait for the keynote, then you'll see, yes. >> (laughs) They all want to know now. Come on. (laughing) They're all like, no, don't say anything. All right. We'll leak it on Twitter later if I find out. No, no. Steve, thanks for coming on and sharing the insight. We're looking forward to chatting more at Boomi World in Washington DC. I'm John Furrier with Dave Vellante. More live coverage here in San Francisco for VMworld 2019 after this short break. (electronic music)
Sanjay Uppal & Steve Woo, VMware | VMworld 2019
>> Announcer: Live from San Francisco, celebrating 10 years of hi-tech coverage, it's theCUBE, covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Welcome back everyone. It's theCUBE's live coverage at VMworld 2019. I'm John Furrier, Dave Vellante, Dave, 10 years doing theCUBE at VMworld, what a transformation, lot of technologies coming back into the center of all the action. SD-WAN's one of them, we got two great guests, two entrepreneurs, the co-founders of VeloCloud. Sanjay Uppal who's the VP and GM of VeloCloud Business Unit part of VMware, which VMware bought in December 2017, Steve Woo, Senior Director of VeloCloud Business Unit. Also co-founder, you guys both strong in networking, entrepreneurs, congratulations on. >> Thank you. >> That was two years ago. Okay, so, we were reminiscing about 10 years, 2010, when we first started doing theCUBE to now, but more than ever SD-WAN, just over the past 24 months, 36 months, a lot's changing as cloud has become more obvious. Certainly public cloud, no debate, but we start talking about cloud 2.0. Enterprise requirements are much more unique and different than just, you know, being born in the cloud at least like the startups are. So, whole different challenges. This is a kind of difficult, it's a networking challenge. Networking and security are the two biggest, hottest areas right now in tech as clouds scale, the enterprise comes in. What's the vision, Sanjay? >> So what's going on here as you were rightly pointing out, cloud is changing. It's no longer people just want to get from private to public, it's a multi-cloud world and it's a hybrid cloud world. Now, that's talking at it from the compute standpoint. 
But, other services are also moving to the cloud, security services are moving to the cloud, so when you look at it from that standpoint, our customers want to get from the clients, which could be a user, it could be a thing, it could be a machine, all the way to the container which has the application. So we're looking at SD-WAN as being that fabric that connects from the client to the cloud to the container. And as you're rightly pointing out, networking and security is the hot area right now. So how does security and networking impact this client to cloud to container world is where SD-WAN is headed today. >> And Pat Gelsinger who just came fresh off the keynote, he'll be on tomorrow, I'm going to ask him this question directly but, we've always been saying public cloud is such a great resource, I mean, who doesn't want all that massive compute, massive storage, if you can use it? But when you start getting into hybrid, right? I said the data center's an edge. And he's talking about a thin edge and a big edge and a thick edge, so when you're a networking packet, when you're in networking you move stuff around, you're an edge and you're a center, you're a core. These are networking concepts, this is not new, I mean, this is not new. 
Which are actually not easy to get right because the underlying transport may not actually help in any great way. >> So, John, you said it's not really new for you networking guys, it's really not. At the same time, Pat talked about choice versus complexity so it's a much more complex world. So you've had to change the way in which, you approach from a technology standpoint I presume? The roadmap has probably shifted, maybe you could talk about that a little bit. >> So, absolutely. So the discussion about moving to the cloud has been about the compute, but then you have to also actually look at the network, right? They forecast that 30 to 50% of the enterprise traffic is going to go to the cloud, right? But the network in the past was built for applications going to the on premise data center. So what we've had is inequality where you've had a full enterprise grade network going to the enterprise data center, but actually your cloud access was a second grade citizen. As Sanjay was saying, I still want performance, I still want security, and then in fact, as people actually expand to the cloud but actually put more and more workloads in the cloud, they start to realize, gee, where's my automation? Where's my scaling? So that still has to be done at the branch that the remote sites that need access to the cloud, and they need this automated, secure, high performing access to all the cloud workloads. Especially even that it's now moved to multi-cloud, right? So you went from on premise, a little bit in the hybrid, private cloud, now many more instances and now multi-cloud, becomes more and more complex and that's where cloud delivered SD-WAN really addresses that problem. >> So Steve, lay out the architecture, so let's just all roleplay for a second here. I'm a CCO, CIO, I'm progressive, got my hands in all the top things, certainly security's number one concern I have. 
And I'm building my own stack, I love the cloud, I don't want to make it a second class citizen, I really want to re-architect this. What's the playbook, what do I do, what's your recommendation? >> Alright, so the playbook is, and this is advice from the cloud compute centers as well, right? Go direct to the cloud, don't back haul it through the enterprise data center and introduce latency so you now need Internet Breakout at more locations, not just the central data center. But I still need the security, so how do I have cloud security for traffic going straight to the cloud versus going back to the east west, to the data center? So really, the advantage that the SD-WAN solution has is it's actually a hybrid that has a footprint on premise but also has a cloud footprint. So Sanjay and I and VeloCloud, we have this big network of cloud gateways so you have the footprint on prem and in the cloud to have distributed security. >> So, Sanjay, talk about, back to your original bumper sticker, client, cloud, containers. So, I see that security piece. How important has the container piece become? And what is that role of the container in the future? Is it going to be a wrapper for legacy apps, is it going to be primary for new apps? Because Kubernetes clearly is orchestrating a bunch of containers and other services so the role of the container's certainly super valuable. How does that impact some of the efficiencies that's needed for networking and to ensure security? >> Yeah, great question. You know, the networking folks, and networking was always relegated to being the underlay or the plumbing. Now what's becoming important is that the applications are making their intent known to the network. And as the intent becomes known, we networking people know what to do in the SD-WAN layer, which then shields all the intricacies of what needs to get done in the underlay. 
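The intent idea Sanjay describes, where the application declares what it needs and the SD-WAN layer translates that into network behavior without exposing the underlay, can be sketched roughly like this. This is a minimal illustration only: the field names, knobs, and thresholds are invented for the example, not VeloCloud's API.

```python
# Hypothetical sketch: translate an application's declared intent into
# overlay policy knobs, so the app never has to know about the underlay.

def overlay_policy(intent):
    """intent: what the application/container asks for.
    Returns the knobs the SD-WAN layer would program (names invented)."""
    policy = {"queue": "bulk", "path_preference": "any", "fec": False}
    if intent.get("real_time"):
        policy["queue"] = "expedited"
        policy["path_preference"] = "lowest_latency"
        policy["fec"] = True  # protect the flow on lossy best-effort links
    if intent.get("min_bandwidth_mbps", 0) > 100:
        policy["path_preference"] = "highest_throughput"
    return policy

print(overlay_policy({"real_time": True}))
# → {'queue': 'expedited', 'path_preference': 'lowest_latency', 'fec': True}
```

The point of the abstraction, as with server virtualization, is that the same declared intent works unchanged whether the underlay happens to be broadband, LTE, or MPLS.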
So to put it in very simple terms, the container's what really drives the need and what we're doing is we're building the outcome to satisfy that need. Now containers are critical because as Pat was saying, all of the new digital applications are going to be built with containers in mind. So the reason we call it client to cloud to container is because the containers can literally be anywhere. You know, we're talking about them being in the private cloud and then the public cloud, they could be right next to where the client is because of the edge cloud. They could be in the telco network which is the telco cloud. So between these four clouds, you literally have a network of these containers and the underlying infrastructure that we are doing is to provide that SD-WAN layer that'll get the containers to talk to one another as well as to talk to the clients that are getting access to those applications. >> You know, sometimes it takes a history lesson to figure out the future. I was talking with Steve Herrod and I want to get your reaction to a comment he made to me when we were talking about the impact of VMware back in the old days, you know, virtualization. Virtualization kind of came out as an application and then it became what it did in the server world, just changed the game. But one key thing that we talked about and he mentioned was, the key was that virtualization allowed for massive efficiencies. Not just on price and consolidation of service and efficiency on price, but it enabled more efficiencies in performance without any code changes to the application. So the question is, is that, okay, containers I buy 100%, we agree, since Docker and early days to now with the Kubernetes, containers are going to be a game changer. What's that dynamic that's going to come next? Is there a view from your perspective on that step up function of value without a lot of application rewrites or network changes? 
I mean, I'm just trying to figure out how that fits together, what's your view on that? >> Yeah, let me take this first and then maybe Steve can comment as well, so. The first thing is that SD-WAN, just like server virtualization did, we're doing what server virtualization was for the network. So you don't require any changes to your underlay, meaning that you don't require changes to your broadband, you don't require changes to your LTE and even 5G, as well as the MPLS network so you don't have to twiddle with those bits, we manage it all in the overlay, this is exactly similar to what VMs did when it came to server virtualization. Now, when containers come in, because we get the visibility of what the container wants, we can both in real time, as well as a priori, figure out how the network should be configured. And that is a game changer because a container could be right next to you, it could be in the cloud, far edge, thin edge, it's not just a destination, it's literally everywhere. And that underlying fabric, if the underlying fabric of the network doesn't work, your digital transformation project for containers is not going to work either. So there's a key building block over there. >> So if I get this right, you're saying is that because you have that underlay visibility without any changes, by making efficiencies there, you then can understand what the container wants so you're bringing intelligence to the container and vice versa? >> Yes, so the containers tell us what they need to run, I mean the application tells us, which is built with containers. And what we do is we dynamically measure how the network is performing, and we adapt to what the container wants. We call this outcome driven. We know what the outcome is and we adapt the networking to deliver that outcome. >> So I want to ask you guys, so Pat talked today about 8% better improvement relative to bare metal, but it's really about the entire system, the entire network. 
And I'm curious as to how you guys are evolving. You know, John and I talk about cloud 2.0, how you're evolving to support that. Because it's really about application performance in total, what the user sees, not what I can measure in some on prem data center, I'm not saying Pat was doing that, but my guess is, to deduce the numbers for the keynote, they probably did do that. So, how is your infrastructure and architecture evolving to support application performance across the network? >> Right, right. So, to add to what Sanjay was saying in terms of just being aware of the requirements of the containers and optimizing and having visibility but actually, leverage the container and virtual machine technology in the SD-WAN platform itself. So in terms of solving the network problem, it's not just about us virtualizing the network resources and then choosing the best path across the network to the applications, but actually hosting some applications that deserve to be moved out to the edge to help solve the performance problem as well. A good example is IOT, where you just have a lot of data, a lot of real time data that needs real time control response instead of necessarily going over the most efficient path to an existing cloud data center on premise, perhaps do some of the analytics actually in the SD-WAN network edge, and we can do that with containers. >> So what about the real time aspect? Because I think that's a key point, you mentioned that, Sanjay, earlier. Because, I remember, not to date myself, but I remember back in the days when policy was a revolution, oh my God, we can do policy based stuff! And provisional stuff, that was an, oh my God, static network, though, I mean everything was provisioned, buttoned up nicely, you're not dealing with a static network when you're dealing with services. So you're moving up the stack, we're talking containers now, at the application level, assuming you have the fabric down here. 
There's going to be a lot of stuff being turned on, turned off, things provisioning, unprovisioning, so a lot of dynamic nature going on. So, if I see this right, policy is key and enables some intelligence, it's got to have an impact on the real time so talk about what real time means, some of the challenges, is it just a transactional issue? Is it latency? And is that where the container magic happens? Just unpack that a little bit. >> So there's really four classes of real time applications that we see. Voice, video, VDI and IOT. Now, there's of course, other applications that are built from these building blocks or these types of application, sub-applications. Now, each of these has a latency requirement, but it also has a requirement in terms of dynamism, so as you know, video can change dramatically from one moment to the other, variable bit-rate video, right? Voice doesn't change as dramatically but has very stringent requirements in terms of when that packet should show up. So when we look at these, and you put them on a best effort network that only says that they're going to get the packet from point A to point B, these real time applications may not work. So what we have constructed is an overlay that supports real time applications even on best effort networks. And this is actually a fairly significant shift in the industry, like if you look at running, you know, all of us have done a voice call, on a broadband and you hear these artifacts and rubberbanding and you can't hear the other person, right? But with VeloCloud, we're able to provide guarantees running on best effort networks. And I think that is a game changer. That is going to be a game changer also as the applications get much more dynamic. I mean, you bring in containers, one of the issues is where should that application run? That can be decided in real time. VMware invented this whole vMotion idea, well how about vMotioning the container? 
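The four real-time classes Sanjay lists each carry a different delay tolerance. A toy sketch of that classification follows, with illustrative latency budgets; the numbers are assumptions made for the example, not VeloCloud figures.

```python
# Assumed one-way latency budgets per real-time class (illustrative only).
BUDGET_MS = {"voice": 150, "video": 250, "vdi": 200, "iot": 100}

def path_meets_budget(app_class, measured_latency_ms):
    """Can a best-effort path with this measured delay carry the class?"""
    return measured_latency_ms <= BUDGET_MS[app_class]

# A broadband path currently measuring 180 ms one-way:
verdicts = {cls: path_meets_budget(cls, 180) for cls in BUDGET_MS}
print(verdicts)
# → {'voice': False, 'video': True, 'vdi': True, 'iot': False}
```

A real overlay would also weigh jitter and loss, which is exactly why voice, with its strict packet-timing needs, is the hardest of the four to carry over best-effort transport.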
And how are you going to vMotion it and how are you going to decide where that container should be? So all of this is really what a networking infrastructure can provide for you in real time. >> And you've got this overlay, and without performance degradation or dramatic performance degradation, right? So what's the secret sauce behind that? >> So, the secret sauce in our solution is something we call dynamic multi-path optimization. So just like virtualization was done for the data center, first continuously monitor the resource's performance, capacity of the different underlay resources and then in real time, recognizing the business priority of the different applications, instantly put the workload, or in this case, the network WAN traffic on the right resource and actually have the flexibility to move it as conditions change, as capacity changes. And further than that, if you can't steer around the problems that we may see in the network, we can actually remediate the actual traffic streams and since we're on both ends we can have a lot of optimization tricks and actually make sure that real time data applications work perfectly. >> So it's a data analysis and a math problem to solve? >> Yeah, so we use that for real time optimization, and then the other benefit is we have this huge, in the cloud, of course, huge data lake of information that we continue to share more and more with the users so they can see the overlay, so that the entire underlay environment of the WAN, where it's going in the different hybrid cloud, and also the overlay performance. There's going to be huge value in that in terms of solving network problems. >> Are the telcos a bottleneck to the future or is 5G going to solve all that, or? >> Telcos are a partner, and more than 50% of our business is done with the telco. So it's us working with the telco and then going eventually to the enterprise. >> And they're moving at the speed that you want em to move? 
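The dynamic multi-path optimization loop Steve outlines can be sketched roughly as: continuously probe each underlay link, keep a smoothed estimate of its health, steer traffic to the best link, and flag remediation (packet duplication or forward error correction) when even the best link is lossy. Everything below, including the scoring formula, is an invented simplification for illustration.

```python
class Link:
    """One underlay path (broadband, LTE, MPLS, ...) with smoothed metrics."""
    def __init__(self, name):
        self.name, self.latency_ms, self.loss_pct = name, 0.0, 0.0

    def observe(self, latency_ms, loss_pct, alpha=0.3):
        # exponentially weighted moving average of continuous probes
        self.latency_ms += alpha * (latency_ms - self.latency_ms)
        self.loss_pct += alpha * (loss_pct - self.loss_pct)

def steer(links, loss_budget_pct=1.0):
    """Pick the best-scoring link; say whether remediation is still needed."""
    best = min(links, key=lambda l: l.latency_ms + 10.0 * l.loss_pct)
    return best.name, best.loss_pct > loss_budget_pct

mpls, broadband = Link("mpls"), Link("broadband")
for lat, loss in [(40, 0.0), (42, 0.0)]:
    mpls.observe(lat, loss)
for lat, loss in [(20, 3.0), (25, 4.0)]:
    broadband.observe(lat, loss)
print(steer([mpls, broadband]))  # → ('mpls', False)
```

Because the overlay sits on both ends of the tunnel, the remediation flag could drive tricks like duplicating voice packets across two links, the sort of thing that lets real-time traffic survive best-effort transport.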
They're saddled with pressures on costs and network function virtualization, and it's a complicated problem. >> Right, as you heard Pat say in the morning, the telcos are going through a dramatic change. Because they're shifting away from this custom proprietary hardware infrastructure into a completely software driven world, right? And so the telco is a critical partner. They are virtualizing their own network, they are virtualizing the core of the network using VMware and other technologies, and as they're doing that, they're virtualizing what goes out to the enterprise customer. And the network virtualization piece, of course, is built on SD-WAN. One thing I wanted to add to what Steve said, is that we collect almost 10 billion flow records a day. From across all of our 150,000 sites, and this is a treasure trove of information. It is this information that allows us to develop the next generation algorithms. We're the only ones who have that much information that is collected, it's rich information, it's about how the network performs, how the applications are, where it is going, how the application workloads are. And using this we generate the next generation algorithms that'll optimize the networks and make them more secure. >> And that is the benefit of SaaS, the beautiful thing about having a SaaS platform, easy to stand up, the data becomes a really critical aspect for making the network smarter, to your point, this is all those data points. It's an operating, sounds like an operating system to me. >> It's a highly distributed network operating system. >> Guys, thanks for coming on, great insight. Final question to end the segment, as two co-founders and entrepreneurs, when you started VeloCloud, knowing what's going on today, explain in your entrepreneurial mind, where this is going, because this isn't your, as they say, grandfather's SD-WAN market anymore. 
It's really turning into, quite frankly, next generation networking, next generation software, you mentioned it's network operating system, it's one big distributed network. And all these new things are happening, what's the vision? Is this what you thought it would be when you guys started? >> Well, you know, the amazing thing is many startups usually go through a pivot, right? They start off as one thing and maybe more than one pivot, in fact, I think it was a couple of years ago that we just for grins, looked at the first few slides that Steve had made when we got started. For our seed investor, where we actually had absolutely nothing! And it was, actually it's very true, the graphics were very very poor, other than that the idea of moving to the cloud and using the cloud as the network, even at that time we said the cloud is the network. That has not changed. And so, the enduring vision here is that regardless of where you are, you're on laptops right now, clients could be sensors, actuators, all of this is going to go through a network cloud. And that network cloud is going to be responsible for getting you to any final destination. Whether it's your nearby container or whether it's running in some public cloud. And so the vision is trust the network, it's going to make sure that it'll figure out whether you should be on Wi-Fi or Bluetooth or LTE or 5G or whatever have you. You just say this application's important to me. The network is going to take care of the rest of it. >> Well you guys are certainly music to our ears, we love network effects, we think network effects is not just the way media is today but also technology, the network is all interconnected it's all instrumented, you can get the data. There's no blindspots, if you can instrument it, you can automate it. You guys are pioneers, thanks for coming on theCUBE, appreciate it. >> Good to have ya. >> Thank you. >> CUBE coverage here, 10 years covering VMworld, I'm John Furrier, Dave Vellante. 
Back with more live coverage after this short break. (electronic music)
Dominique Jodoin, NoviFlow | Fortinet Accelerate 2019
>> Live from Orlando, Florida, it's theCUBE, covering Accelerate 19. Brought to you by Fortinet. >> Welcome back to theCUBE. Live from Orlando, Florida at Fortinet Accelerate 2019. Lisa Martin joining and welcoming to theCUBE for the first time the CEO and president of NoviFlow, Dominique Jodoin. Dominique, great to have you joining on theCUBE at Accelerate. So here we are in Orlando, talking about all things cybersecurity. I just came from the keynote session where Fortinet was talking about how much they're innovating, how they're leading from a competitive perspective, what customers are saying, why their security fabric is so differentiated. NoviFlow is one of their security fabric ready partners. But before we talk about that, why don't you take a minute or two to describe to our audience who NoviFlow is and what you guys are doing in cybersecurity? >> Yeah, well, we came into cybersecurity a little bit by accident. We were founded seven years ago, and the idea was to create very programmable networks. It's very much in line with what we heard today in the keynote, and we became a technology leader in that field of software-defined networking. And three, four years ago, customers started to use our product for cybersecurity applications. We didn't even know about that, they don't necessarily tell us, and we put a bit more focus into it. And over time we started to work with Fortinet, for example. And now we have a developing relationship and great solutions for the customers. >> So one of the things that we understand from Fortinet and from all of the conversations that theCUBE has globally is that digital transformation is fundamental for every business to compete. But so is secure transformation, and security transformation is very challenging to do for businesses. And you think of any industry: financial services, retail, consumer packaged goods. 
As they expand digitally, so does the attack surface. So one of the things that Fortinet talks about is that it's not enough anymore to have these point solutions pointed at different places, on-prem, cloud, edge. The entire infrastructure, as it's changing and the attack surface is expanding, has got to be protected from a more integrated perspective, this notion of the security fabric. Talk to us about the fabric-ready partnership. I know that's only in the last six months or so. So walk us through what you did to become a fabric-ready partner, and what it is that you, in forging that, are seeing in the market as challenges that you're helping to resolve. >> Yeah. What we see, actually, I like to describe it as a battlefield where the attacks are being waged, really, and that battlefield is the networks of those carriers, government agencies, large enterprises, etcetera. And those companies are not really taking advantage of their position, because in fact, with the right network fabric and the right tools to be able to react, they could actually be very much more powerful. So this is where we are working with Fortinet, to equip those customers with solutions that are much more agile, more programmable, because the network is also evolving. It's not only that the attacks are broader; the nature of them is changing too. And we come from a background of working mostly at the edge of the networks. As I mentioned before, we're deployed typically at the large tier-one carriers all around the world, the Telstra Group, the Hutchison Group, etcetera, and also at two of the Five Eyes, so government agencies that are engaged in fighting these attacks. So we come with a background of working in a decentralized approach anyway, so the work done with Fortinet so far was a very natural evolution.
So what have we built so far together? We've built some integrated solutions. So far, we have two solutions that we are demonstrating to customers. The first one is to help the larger customers of Fortinet, the ones making the transition from an existing appliance to virtualized solutions. That's an area where we are very effective at helping them to scale, and those would be customers that have, say, a hundred gig of traffic or more. So with Fortinet we built an on-demand solution. It's an integrated solution that enables those carriers, or other kinds of customers, to gradually grow the number of VMs that are used in real time for doing whatever cybersecurity job they have to do. And if the demand comes down, those VMs are released into the customer's data centers to do some other jobs. So this is one of the products that we built together and are demonstrating. A feature of it is that it is able to scale all the way up to six point five terabits per second. I'll repeat that: six point five terabits per second. This is unheard of, and this is, I think, one of the interests for Fortinet in working with NoviFlow. We have already developed not only the metering system but all the OA&M features that you demand as a customer for deployment in the real world. So that's the base. The second solution we developed is a carrier-grade NAT. Again, same idea: we can scale the carrier-grade NAT up to one point six terabits per second. Very powerful. These are powerful solutions to meet this rising demand you talked about; there's literally a wave of attacks coming, more and more. >> So you mentioned some customers by name, Telstra, for example. CEO-to-CEO conversations. Telstra has been around for a long time, and the organization is expanding digitally.
And we talked a minute ago about how the attack surface is expanding. What are some of the conversations that you're having with the C-suite about security? It's not just talking to, you know, network security admins, right? What are those conversations you're having with the CEO and the C-suite, where they're saying, these are my business problems, Dominique, help us solve these problems? >> Well, it comes down to two words, basically: scale and flexibility. It's as simple as that. They are struggling to see how they can cope, especially the ones that are virtualizing, because, imagine the model: you go from a very powerful appliance, and once you virtualize that appliance, you might end up with thirty different servers running in parallel, and you have to have load balancers in front of them. That makes for a very complex and very expensive solution. So that's what they're searching for: how can we reduce the complexity? For example, one of the advantages of our product working side by side with Fortinet: since we work at six point five terabits per second, we do some of the pre-processing of the traffic before it hits the virtualized solution, a FortiGate, for example. We have built in blacklists and whitelists, and we can also do the load balancing, so there's no need to install additional load balancers. It's a kind of black box that does all the required features to increase the scaling of the combined solution. And the second part is flexibility. You've got to be able to evolve your solution over time as these attacks are evolving. Our product is built from the bottom up, and it's built on infrastructure, typically white boxes, running chips that are programmable by us. So the software, the NOS as it's called, is complemented by some very easy-to-use porting layers, if you like.
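The pre-filtering Dominique describes, drop blacklisted sources, let vetted ones bypass inspection, and spread the rest across virtualized firewall instances, can be sketched as a simple match-action step. This is an illustrative toy, not NoviFlow's actual API; the table contents, VM names, and hashing scheme are all invented for the example.

```python
# Toy sketch of a pre-filter in front of a virtualized firewall tier:
# blacklist -> drop, whitelist -> bypass inspection, everything else is
# load-balanced across firewall VMs with a stable hash so a given source
# always lands on the same instance. All names and IPs are hypothetical.
import zlib

BLACKLIST = {"203.0.113.9"}      # known-bad sources: drop immediately
WHITELIST = {"198.51.100.7"}     # recently vetted sources: skip inspection
FIREWALL_VMS = ["fw-vm-1", "fw-vm-2", "fw-vm-3"]

def classify(src_ip: str) -> str:
    """Return the action for a packet from src_ip."""
    if src_ip in BLACKLIST:
        return "drop"
    if src_ip in WHITELIST:
        return "forward"         # bypass the inspection tier entirely
    # CRC32 gives a cheap, deterministic spread across the VM pool.
    idx = zlib.crc32(src_ip.encode()) % len(FIREWALL_VMS)
    return f"inspect:{FIREWALL_VMS[idx]}"

print(classify("203.0.113.9"))   # drop
print(classify("198.51.100.7"))  # forward
print(classify("192.0.2.55"))    # inspect:<one of the firewall VMs>
```

The point of the stable hash is the flexibility Dominique mentions: the pool of inspection VMs can grow or shrink while the pre-filter stays a thin, reprogrammable layer in front of it.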
So the Fortinet solution can be easily adapted to this platform, and that's how we can achieve this kind of throughput. And in fact, I'll tell your viewers that we have already built live demos of those solutions in the Sophia Antipolis lab in France, the labs of Fortinet, where we're doing demos of those solutions for customers. >> So say I'm a Telstra, and you said speed and flexibility, scale rather, scale and flexibility. How does my business measure that? What am I looking to achieve? Am I looking to scale to X number of users, X number of regions? How is that measured from, say, a Telstra's perspective, as a big business impact that NoviFlow and Fortinet are helping them to achieve? >> Yeah, it's really all dimensions. We have some challenges just handling the raw volume of traffic; sometimes customers are pumping terabits of traffic between one country and another, so that's one. But it's also geography, because an attack can come in anywhere in your network, at the periphery or inside your network. So you have to be able, in a centralized way, once you detect there's an attack, to respond to it in time. And that's what we can do with our programmable infrastructure: we can actually reprogram those routing tables, and you can take some mitigation actions, for example, putting some of the bad traffic on the blacklist. Or if you've already looked at some traffic, perhaps you put it on a whitelist for a period of time, so you don't look at it over and over. Those kinds of measures alleviate the load. So, in fact, it's about working more intelligently with the raw volume of traffic that comes to you. So this is one of the real advantages of SDN.
Software-defined networking, applied to a cybersecurity problem. >> What are some of the other industries that you're seeing that have the potential to dramatically benefit from software-defined networking in cybersecurity, knowing that the threat landscape is exponentially growing? Yes, we've got tools like AI and machine learning, which we'll talk about later in the program today with respect to FortiGuard Labs, for example. But of course, the attackers also have access to artificial intelligence to create even smarter attacks. From your perspective, what are some of the other industries that are really ripe to take advantage of SDN in their cybersecurity practices? >> You know, I think all industries are moving to data; there's no exception. I was talking to an entrepreneur in Montreal yesterday who's doing farming, but it's high-tech farming, and it's all based on AI, it's all based on data. Even industries like farming that you might not expect. So every industry will rely on data, and that means it will rely on a network, and it all comes down to the network. You've got to be able to build a cybersecurity-ready network fabric from the bottom up, so that one of the key features of your network is actually stopping the attacks, and that's true no matter which industry you're in. I think the industries where you have vast volumes of data will most likely be the first ones to benefit. You know, we talked about carriers before, and that's one such industry. Certainly where you process vast amounts of traffic, they're taking advantage of our technology, for example. But I think probably most industries will be affected by that, sooner or later. >> And hopefully sooner rather than later, considering how fast all of these opportunities, good and bad, are growing.
One of the things Fortinet talked a lot about this morning, during the session and in some of the press releases, is the growth that they've experienced: growing twenty percent year on year from last year, one point eight billion in revenue, over three hundred eighty-five thousand customers. You're one of the fabric-ready partners, of which there are fifty-seven. So a lot of growth, a lot of potential. What excites you, as the head of NoviFlow, about your recent and developing partnership with Fortinet for twenty nineteen and beyond, to latch onto that growth trajectory? >> Absolutely. Well, you know, when you mention high volumes of traffic, that plays to our cards. So the market is actually coming to where we are. We have our product running at six point five terabits per second, and that's today, because we have plans to move to twelve terabits and so forth. So for us it's exciting, because we feel we have the right scaling platform and the right programmability. So our customers, Fortinet customers together with us, can start with the existing, powerful platform, but should that evolve, they'll be able to move to a new level of software and new capacity gradually over time. So that's very exciting for us. >> What about some of the announcements that came out this morning? Over three hundred new features added, for example. That's a tremendous amount of innovation since last year's Accelerate. >> Yeah, well, these features also need the right, I would say filtered, level of data to be able to work more efficiently. And that's where we come in. We're not a cybersecurity company per se; we are really complementing the Fortinet product by playing upstream and doing pre-filtering, controlled by the policy management of the Fortinet equipment, but nevertheless taking some of the load off it so that the equipment can be more efficient.
But just as an example, I read in a magazine a couple of days ago that Google is building a two hundred fifty terabit-per-second cable between North America and Europe. Think about that. It's mind-boggling; it's three times the Library of Congress per second. And those are the kinds of data volumes you see coming. So suddenly six point five terabits per second doesn't sound so big, does it? But in fact, that's the world we're in today, and we're lucky at NoviFlow that we invested early on in the software layer that runs on top of these extremely powerful white boxes, and we're taking advantage of it with Fortinet. >> You've got to deliver that scale and that flexibility, and it sounds more and more like speed, too. Dominique, thank you so much for stopping by theCUBE and joining me on the program today, talking about NoviFlow, what you're doing with Fortinet, and what excites you about the year ahead. >> It was a pleasure, Lisa. Thank you. >> Mine as well. I want to thank you for watching theCUBE. Lisa Martin, live on theCUBE from Fortinet Accelerate 2019 in Orlando. Thanks for watching.
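A quick back-of-the-envelope check of the "three Libraries of Congress per second" line above. This assumes the commonly cited, unofficial estimate of roughly 10 terabytes for the Library of Congress print collection; the figure is folklore rather than an official measurement.

```python
# Sanity-check the claim: a 250 Tbps cable moves about three "Libraries of
# Congress" per second, assuming the popular (unofficial) ~10 TB estimate
# for the Library of Congress print collection.
cable_tbps = 250                    # cable capacity, terabits per second
loc_terabytes = 10                  # assumed size of one "Library of Congress"

cable_tb_per_sec = cable_tbps / 8   # convert terabits to terabytes: 31.25 TB/s
loc_per_second = cable_tb_per_sec / loc_terabytes

print(loc_per_second)               # 3.125, i.e. roughly three per second
```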
Jim Frey, Kentik Technologies | Cisco Live EU 2019
(techno music) >> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live! Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back to theCUBE's exclusive coverage here at Barcelona, Spain of Cisco Live! Europe 2019. I'm John Furrier. Stu Miniman and Dave Vellante are here this week, covering all the action in cloud, data center, and multi-cloud. Our next guest is Jim Frey, who's the Vice President of Alliances at Kentik Technologies. There's a groundbreaking report that came out of the Amazon re:Invent conference, a lot of customers, part of the multi-cloud discussion. Jim, great to see you, welcome to theCUBE. >> Thanks. It's Frye, by the way. >> Frye. I'm sorry. >> Okay. No worries. No worries. >> Multi-cloud. Your report has some interesting data. Talk about the survey, the results. What is it telling us? >> Yeah, we've been working hard at Kentik on extending our solution to start covering cloud, multi-cloud, and hybrid environments. And so we were at the AWS re:Invent show, and we decided to take the opportunity to talk to some of the attendees and just sort of get their view of what some of the challenges are. So we talked to a little over 300 of them, and we asked them a few questions. Not a rigorous thing, you're doing it on the show floor, right? But we found some really interesting things out of that. So the first thing is that it really is a multi-cloud world already, more so even than hybrid. We had nearly 60 percent, 58 percent, of the people we talked to with more than just one cloud in play. They almost all had AWS, of course, because it was an AWS event, but not all, which is really interesting. But, you know, they either had AWS plus Google, or plus Azure, or plus some other cloud. More so than even hybrid. And so we also asked, are you using AWS in conjunction with your own private data center, or a third-party hosted colo center? Only 33 percent were doing that. So, we were surprised.
And the reason that that is really significant is that monitoring and management of these environments is much more complex. Well, it's complex in a hybrid environment; it's even more complex in a multi-cloud environment. So it sounds like there's some real need for help there. >> What are the challenges, and what are some of the complexities? What are the challenges in the monitoring? >> Well, so that was the next question: what's the key challenge, you know? And usually, whenever you ask someone about the challenges, the number one answer is always, oh, security is my biggest concern. That did not turn out to be the case here. The biggest overriding concern, across all the different levels of people we talked to, was actually cost management. And that was a bit surprising. You know, usually you hear security, security, security, and then something else. Here it was cost management, either number one or number two, and number one for most of the constituencies. And in some of the subgroups, like the VP level, SVP level, and architect level, it was overwhelmingly the first choice: 40 to 50 percent of them saying, yeah, cost control is their biggest issue. Even ahead of other things like performance, like visibility, like actual control of the environment. Cost was really the biggest concern. That's the big issue. >> Jim, something we've been tracking, especially at shows like this, at the Cisco show, is this challenge: I used to understand the stuff I had in my data center. I could get my arms around it. I might not love the management tools that I have, I might complain about some of the cost, but it's all very well understood; it's bought, most of it CapEx. When you get to the public cloud, I totally understand what you're saying, multi-cloud. Now I've got all these different pieces, and how are they defined? There's different skill sets between them.
>> Right. >> And when it comes to cost, right, the big unknown is, oh wait, am I getting surprised by what happens in that environment, and across all of them? I mean, I've talked to plenty of companies that will dedicate an engineering resource just to manage cloud. >> Right. >> I have many friends in the industry that are helping with cost optimization. There's huge business in that, in software and consulting, because we're still early in getting to the steady state. Help us connect the dots. Where does Kentik play into this, then? You talked to all these customers. >> Thank you. Our viewpoint is the network, and we're trying to give a view of what's happening in this environment by watching the network. And that's always super valuable, because it helps you localize where things are and what activity's happening, and it helps you see which workloads are talking to which workloads. And that sometimes reveals things you don't expect. And this is where the cost control comes in, because in the cloud environment, you have to pay for certain network traffic, especially between availability zones, or when you're shipping it out of the cloud back to your home environment. And we have talked to a lot of customers who have said, hey, the end of the month comes around, I get my bill, and there's this big number there for data transfer. I don't know what drove that. And why am I being surprised time and time again by this? Well, the network viewpoint is really awesome for seeing that. And if you can do it with a monitoring system that's watching for that all the time, the good news is that you can catch it, figure out if it's real or not, needed or not, and fix it before 20 days later you get a big, fat bill. >> What does fixing it mean? Does it mean keeping it contained in the cloud, or on-prem, or managing what's moving around?
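The billing surprise Frey describes, inter-AZ and egress traffic quietly running up the meter, can be sketched by attributing flow volumes to transfer classes. The flow records and per-gigabyte rates below are invented for illustration; real cloud pricing varies by provider, region, and service.

```python
# Illustrative sketch: classify flows as intra-AZ, inter-AZ, or egress and
# total the cost per class, the way a network-viewpoint monitor would explain
# a data-transfer bill. Rates and flows are made up, not real pricing.
RATE_PER_GB = {"intra_az": 0.00, "inter_az": 0.01, "egress": 0.09}  # $/GB, hypothetical

flows = [  # (source zone, destination zone, gigabytes transferred)
    ("us-east-1a", "us-east-1a", 500.0),    # same AZ: typically free
    ("us-east-1a", "us-east-1b", 2000.0),   # cross-AZ: metered
    ("us-east-1a", "on-prem",    300.0),    # back to the home environment
]

def flow_class(src: str, dst: str) -> str:
    """Label a flow by the kind of transfer it represents."""
    if dst == "on-prem":
        return "egress"
    return "intra_az" if src == dst else "inter_az"

bill = {}
for src, dst, gb in flows:
    kind = flow_class(src, dst)
    bill[kind] = bill.get(kind, 0.0) + gb * RATE_PER_GB[kind]

print(bill)  # the cross-AZ and egress lines are the "surprise" items
```

Watching these classes continuously, instead of discovering them on the invoice, is the difference between catching a misplaced workload in hours versus 20 days later.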
It could be a combination of things. One of the things that we've seen in some of our earlier deployments: someone moved a workload into a different availability zone. Well, there was an application dependency they didn't recognize, and that workload was talking to the home data center, or to another availability zone, creating traffic across there and just running up the meter on the network costs. If you can see that, and it becomes very obvious when you watch the traffic patterns, you can at least have someone go, hmm, okay, that's a surprise. I had a big rise in my zone-to-zone traffic, or my cloud-to-home traffic. Let's just take a look at who's driving that, and whether it's something that should be happening or shouldn't be. >> One of the interesting trends we've been watching and seeing with cloud and hybrid cloud is the consumption and deployment of cloud. And hybrid's interesting, because hybrid is a cloud operation on premise, which has been the slowest to deploy. Wikibon's done a lot of research on private cloud and why that's happening. But it seems that cloud sprawl on the public side has been there. So yeah, I've got some Amazon, easy to stand up. I've got some Azure, and now Google. So it's probably easier to get stuff into the clouds, and now they've got to repurpose on-premise to have this seamless cloud-native environment, with Cisco's announcements, et cetera, et cetera. >> Yeah. >> So as that's happened, what have you guys learned and seen in terms of customer behavior? They wake up, obviously, the bills are higher, so it makes sense that cloud is higher than hybrid and cost containment is a concern. How did they get there? What are you seeing? What's the psychology of the customer? Just share some insight into the customer behavior. >> Well... >> Oh shoot, I've got to unwind this, do I double down? What's going on?
I think it really depends a lot on what the projects are, what the objectives are, and what the skill set is. But one of the things that we found in this survey is that the network viewpoint, which helps you understand what's really happening in the production environment, is often underutilized or underappreciated in cloud deployments and cloud infrastructure. So one of the things we asked about was, how many of you folks at this event are actually taking advantage of, for instance, VPC Flow Logs, which can tell you exactly what's happening within AWS and between the availability zones? And it was surprising. VPC Flow Logs have been around for years, as a technology and as an additional service available, but only about a third of the respondents were actually using them. So they weren't taking advantage of this important insight and viewpoint telemetry set. About a third kind of knew about them but weren't using them yet. And then another third didn't even know what they were. >> Yeah. >> So I think there's still some maturity happening, some maturation happening, in terms of understanding: what can I do about this? How can I get ahead of this? What's at my disposal? And part of the challenge, of course, is that then I have that piece covered, but as you said, now how do I cover my home front? And where do I find some sort of tools that can put these things together so I can see it all as one? >> That's where you guys fit in. >> That's where we fit in, yeah. >> So let me get some anecdotes from you. One, it's clear that there's a pain point. Take the aspirin: understand what's going on, contain the bills. Give a scenario of what they're doing to contain that, you mentioned a few of them, but also give an example of where they're using the data to be proactive, so there's the vitamin side of it. The vitamin, aspirin, whatever metaphor.
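The VPC Flow Logs Frey mentions are plain text records; in the version 2 default format the fields are: version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, log-status. A minimal sketch of using them to see which workloads are talking to which (the sample records below are fabricated):

```python
# Minimal parse of AWS VPC Flow Log records (version 2, default format) to
# total bytes per talker pair. The records are invented sample data; the
# field order follows the documented default format.
from collections import Counter

records = [
    "2 123456789012 eni-abc123 10.0.1.5 10.0.2.9 443 49152 6 10 8400 1550000000 1550000060 ACCEPT OK",
    "2 123456789012 eni-abc123 10.0.1.5 10.0.2.9 443 49153 6 12 9000 1550000000 1550000060 ACCEPT OK",
    "2 123456789012 eni-def456 10.0.3.7 10.0.1.5 80 49200 6 3 1200 1550000000 1550000060 REJECT OK",
]

bytes_by_pair = Counter()
for rec in records:
    fields = rec.split()
    src, dst = fields[3], fields[4]          # srcaddr, dstaddr
    nbytes, action = int(fields[9]), fields[12]
    if action == "ACCEPT":                   # count only delivered traffic
        bytes_by_pair[(src, dst)] += nbytes

print(bytes_by_pair.most_common(1))  # [(('10.0.1.5', '10.0.2.9'), 17400)]
```

In practice these logs are delivered via CloudWatch Logs or S3 rather than read as inline strings, but the attribution step, mapping bytes onto src/dst pairs, is the part that reveals the unexpected conversations.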
So, you know, I've got to contain my cost, I get that. How are people using the data to be more proactive, in either architecting or deploying? >> So I don't know that anyone's being proactive yet. That is certainly the promise and the opportunity. Most organizations simply want to be more aware of what happened, or more effectively reactive, and you start there. And once you start to realize, hey, I can do this, then you can start turning toward being more proactive. So, for instance, our solution was built to allow you to trigger corrective actions back into the environment. We don't take the actions, but we can trigger the systems that would change configurations or change policy, and then inform those systems of what's happening and what sort of parameters we can recognize that indicate an issue. So we believe especially in watching the change in patterns of activity, noticing the anomalies. Anomaly detection is oftentimes used around security use cases, and we do that, but it should also be applied to operational use cases. When does a new workload pop up, or a new volume of traffic show up, that they didn't expect? And if it's something that I recognize happens on a regular basis and I know the answer, let's automate the corrective response. So that's our theory: provide you the understanding of what's happening, then give you the tools to trigger an automatic corrective action. >> Alright, so Jim, we're talking a lot about multi-cloud this week with Cisco. Of course, Cisco's dominant in the networking space, really feeling out where they live in multi-cloud and how networking plays across all of them. What's the relationship between Kentik and Cisco? How does that work? >> Thanks, so we're a member of the CSPP program. We are a partner. We joined because we manage a lot of Cisco gear. (laughs) A lot of our customers have Cisco.
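The detect-then-trigger loop Frey outlines, baseline the traffic, flag deviations, hand the event to a system that applies the fix, can be reduced to a toy example. The threshold rule and the sample numbers below are arbitrary stand-ins; Kentik's actual detection is not described in this detail here.

```python
# Toy anomaly detector for operational traffic: flag a sample that sits far
# above the historical baseline. A real system would feed such an event to
# an automation tool that changes config or policy; here we just return a flag.
from statistics import mean, stdev

def detect_anomaly(history, sample, k=3.0):
    """True if sample exceeds the baseline mean by more than k std devs."""
    mu, sigma = mean(history), stdev(history)
    return sample > mu + k * sigma

baseline = [100, 110, 95, 105, 102, 98, 101, 99]   # GB/hour in a normal week

print(detect_anomaly(baseline, 104))  # within normal variation: no action
print(detect_anomaly(baseline, 400))  # unexpected surge: trigger the workflow
```

The design choice matches what Frey says about automation: the monitor only recognizes the condition; the corrective response lives in a separate system that the detection event triggers.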
A lot of our use cases, historically, have been at the edge of the network, in particular with the service providers, those that are delivering internet services or using the internet to reach their customers in some way. What's really different about us is that we do a really deep and detailed job of integrating path data, BGP path data and BGP route data, and correlating that with the traffic, along with other enhancements and augmentations of the data that give business and service context to the network traffic. It makes it more actionable. >> Yes, and what are you doing in the container space? You mentioned edge computing, got some interesting use cases, maybe explain a little bit where you play there? >> So when I say edge, I'm saying internet edge, not edge computing, although we're fascinated by what edge computing represents and the new challenges it's going to bring. Now, when it comes to containers, actually, we're very interested in working in that area too, because, John, as you mentioned, the implementation of new cloud workloads is cloud native, using Kubernetes, using things like Istio, and that changes the environment once again. So we've actually built a connector into Kubernetes so that we can use it to pull service information, in terms of what workloads and what containers are out there. What are they doing? What's their purpose? So when we show you an activity map of, say, site-to-site communications, we can say, here are the actual services participating in this activity. Istio is another place where we're really interested, looking at the service mesh that's being set up to run and operate communication between containers. Because that's a new sort of virtual cloud network; it's the way these containers are communicating.
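What a Kubernetes "connector" buys, conceptually, is a mapping from raw flow endpoints (pod IPs) to workload metadata, so an activity map can name services instead of addresses. The sketch below hard-codes that mapping; in practice it would be pulled from the Kubernetes API. All names and IPs are invented, and this is not a description of Kentik's actual connector.

```python
# Conceptual sketch: use pod-IP -> workload metadata (as a connector would
# fetch from the Kubernetes API) to label the endpoints of a raw flow.
# Namespaces, service names, and IPs are hypothetical.
POD_INDEX = {
    "10.0.2.9": {"namespace": "shop", "service": "checkout"},
    "10.0.1.5": {"namespace": "shop", "service": "frontend"},
}

def label_flow(src_ip: str, dst_ip: str) -> str:
    """Render a flow as service names where known, raw IPs otherwise."""
    def name(ip):
        meta = POD_INDEX.get(ip)
        return f'{meta["namespace"]}/{meta["service"]}' if meta else ip
    return f"{name(src_ip)} -> {name(dst_ip)}"

print(label_flow("10.0.1.5", "10.0.2.9"))     # shop/frontend -> shop/checkout
print(label_flow("203.0.113.7", "10.0.2.9"))  # external IP stays unresolved
```

This is the "business and service context" idea from the BGP discussion applied one layer down: the flow data is the same, the enrichment is what makes it actionable.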
And again, the more you understand about the communication patterns, the better you can recognize problems, the better you can balance and plan, and the better you really get a handle on what's actually happening. >> Jim, I want to get your thoughts: since you brought up the edge of the internet, multi-cloud and hybrid cloud, data moves around, and that certainly brings up the question of which routes the packets are moving over. There have always been debates about SLAs around, you know, direct connection versus going through the internet. Is China looking at it? So there's a security kind of concern. >> Yep. >> What's the trend that you're seeing with respect to, say, direct connects? Because if I'm a company with multiple clouds, I have the connections in there. I'm concerned about latency, and certainly cost, whether it's cat videos or whatever, or applications too. It still costs money. >> Yep. >> So latency's important, and each cloud has its own kind of latency issues. What have you seen? >> Well, getting to the cloud, and then within the cloud. >> Yeah, exactly. So it's complicated. So this is a new dynamic, but it's a similar concept. Are there standard latencies? Is it getting better? What does the trend look like? >> That's a great question, and I honestly don't have a good answer for you. But I recognize and agree that those are common concerns that we hear. And the best thing, at least for what Kentik is doing, is to provide the means to measure and understand it. So you can compare what's working. You can document a baseline across your different options and your different paths, and recognize when there's a real problem occurring, when you start to see latencies spike to any particular cloud service or location or zone, so that you can try to get on top of it and figure it out. >> That's a classic case of evolution. Get it instrumented. >> Yeah. >> Get the providers to get better with their services. That part's really out of your hands. >> Yeah.
>> That's not, really. Okay, so, getting back to the survey to kind of wrap things up. Interestingly, at Amazon, the biggest cloud show, Azure pops up on the list as pretty high. >> It sure does. >> Makes sense. Microsoft's got great performance. I mean, with Azure, they move a lot of preexisting Microsoft stuff into Azure, plus they're investing. What's the bottom-line summary as you, you know, take in the aroma of the report? What's coming out of the report? What are the key insights that you can glean from this? >> So I think it indicates a normal pattern of adoption, and that we're sort of growing into this marketplace. It's evolving as we go, you know. We saw big early adoption happening, like lift-and-shift approaches to just move stuff into the cloud. Throw it in the cloud, it's going to be cheaper. It doesn't always turn out to be cheaper; it can be. Then you've got another set of organizations that are born in the cloud, right? They've started out there from the beginning. So those two early approaches are merging into: how do we really use this as a true, strategic approach to IT? What are the real-world complexities we're going to deal with, and how are we going to deal with them? It's really no different from the way that technology has evolved within traditional data centers, and the way virtualization came in and changed the way we build and architect data centers. It's awesome. It's great. It saves you money in one area, but then it created huge blind spots, because you couldn't tell what was going on in those virtualization layers, so we had to adapt our operational monitoring and operational practices to accommodate the new technology. I think we're going through the same thing now with cloud. People recognize that they don't necessarily want to be beholden to a single cloud provider. They want alternatives. They want cost competitiveness. They want redundancy.
And so multi-cloud, I think, is becoming more and more real in part because people don't want to put all of their eggs in that one basket. >> And cost certainly looks good on paper at the beginning. >> Yeah. >> But then as you said, there's side effects. It's a system, so there's consequences to the system. >> Yes, absolutely. >> When you start growing or whatever. And that's just where people have to work it better. Right? >> Yep. >> That's pretty much the operational. >> I mean, let's apply the same rigor that we used to apply to traditional data center environments. And let's start embracing the cloud, right? >> So, Jim, you've talked about the multi-cloud bit. Why don't you put a fine point on it. There's a reason why you jumped from being an analyst into the vendor world. Some people on the outside will be like, well, you know, cloud's been going on for ten years, it seems we understand where this is going. But, tell us why, you know, now is so important for this multi-cloud environment and the opportunity that you see again. >> Sure. >> In this ecosystem. >> At Kentik in particular, what we're starting to hear, very loud and clear, is amongst our traditional and initial base of customers, which was facilities-based service providers and digital enterprises that managed big routed networks and needed to understand and better control their relationship with the internet and delivery across the internet. They're coming to us and saying, hey look. We're splitting. We're adding cloud workloads. So, we're moving the content that we're serving up into the cloud, you know, more and more of our systems are moving into the cloud, and we rely on you for this visibility in our production environment. We need you to add this. So, we saw a demand from our customers to, you know, accommodate this, and in parallel we're just really inspired by this next generation of cloud-native application development. 
It seems to be starting to reach that point where it's becoming reality and it's becoming mature, and it's becoming a reliable approach to I.T. So now's the time to really get serious about bringing these best practices from the traditional world and applying them there. >> And the survey data has proved multi-cloud and hybrid are all here, costs can run out of control. You've got to work. You've got to operationalize cloud. And same rigor. I love that. Great insights, Jim. Thanks for coming on theCUBE. Appreciate it. >> Sure. >> Live CUBE coverage here in Barcelona for Cisco Live! Europe 2019. It's theCUBE. Day three, of three days of coverage. We'll be back with more, after this short break. (techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Jim Frey | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Jon | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
40 | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
58 percent | QUANTITY | 0.99+ |
ten years | QUANTITY | 0.99+ |
Barcelona | LOCATION | 0.99+ |
50 percent | QUANTITY | 0.99+ |
Kentik Technologies | ORGANIZATION | 0.99+ |
three days | QUANTITY | 0.99+ |
Kentik | ORGANIZATION | 0.99+ |
WikiBound | ORGANIZATION | 0.99+ |
Barcelona, Spain | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
Barcelona, Spain | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
theCUBE | ORGANIZATION | 0.99+ |
one area | QUANTITY | 0.98+ |
first choice | QUANTITY | 0.98+ |
Amazon Reinvent Conference | EVENT | 0.98+ |
first thing | QUANTITY | 0.98+ |
Day three | QUANTITY | 0.98+ |
each cloud | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
33 percent | QUANTITY | 0.97+ |
one basket | QUANTITY | 0.97+ |
Frye | PERSON | 0.96+ |
two early approaches | QUANTITY | 0.96+ |
third | QUANTITY | 0.95+ |
Azure | TITLE | 0.94+ |
Kubernetes | TITLE | 0.92+ |
nearly 60 percent | QUANTITY | 0.91+ |
One | QUANTITY | 0.9+ |
Kentik | PERSON | 0.9+ |
single cloud provider | QUANTITY | 0.84+ |
About a third | QUANTITY | 0.82+ |
about a third | QUANTITY | 0.81+ |
20 days later | DATE | 0.78+ |
Cisco Live! Europe 2019 | EVENT | 0.78+ |
Cisco LIVE! Europe 2019 | EVENT | 0.76+ |
Cisco Live EU 2019 | EVENT | 0.75+ |
re:Invent show | EVENT | 0.75+ |
over 300 of em | QUANTITY | 0.72+ |
plus | TITLE | 0.72+ |
Cisco Live! | EVENT | 0.71+ |
Azure | ORGANIZATION | 0.67+ |
CSPP Program | OTHER | 0.66+ |
China | ORGANIZATION | 0.65+ |
Europe | LOCATION | 0.62+ |
Gist T O | ORGANIZATION | 0.57+ |
two | QUANTITY | 0.54+ |
VPC Flow | ORGANIZATION | 0.45+ |
VPC Flow | TITLE | 0.44+ |
one | OTHER | 0.43+ |
Zongjie Diao, Cisco and Mike Bundy, Pure Storage | Cisco Live EU 2019
(bouncy music) >> Live, from Barcelona, Spain, it's theCUBE, covering Cisco Live Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back everyone. Live here in Barcelona, it's theCUBE's exclusive coverage of Cisco Live 2019. I'm John Furrier. Dave Vellante, my co-host for the week, and Stu Miniman, who's also here doing interviews. Our next two guests are Mike Bundy, Senior Director of Global Cisco Alliance with Pure Storage, and Z, who's in charge of product strategy for Cisco. Welcome to theCUBE. Thanks for joining us. >> Thank you for having us here. >> You're welcome. >> Thank you. >> We're in the DevNet zone. It's packed with people learning real use cases, rolling up their sleeves. Talk about the Cisco Pure relationship. How do you guys fit into all this? What's the alliance? >> You want to start? >> Sure. So, we have a partnership with Cisco, primarily around a solution called FlashStack in the converged infrastructure space. And most recently, we've evolved a new use case and application together for artificial intelligence, and Z's business unit has just released a new platform that works with Cisco and NVIDIA to accomplish customer application needs, mainly in machine learning but all aspects of artificial intelligence, so. >> So AI is obviously a hot trend, and machine learning, but today at Cisco, the big story was not about the data center as much anymore as it's the data at the center of the value proposition, which spans the on-premises, IoT edge, and multiple clouds, so data now is everywhere. You've got to store it. It's going to be stored in the cloud, it's on-premise. So data at the center means a lot of things. You can program with it. It's got to be addressable. It has to be smart and aware and take advantage of the networking. So with all of that as the background, backdrop, what is the AI approach? How should people think about AI in the context of storing data, using data? 
Not just moving packets from point A to point B, but you're storing it, you're pulling it out, you're integrating it into applications. A lot of moving parts there. What's the-- >> Yeah, you got a really good point here. When people think about machine learning, traditionally they just think about training. But we look at it as more than just training. It's the whole data pipeline that starts with collecting the data, storing the data, analyzing the data, training on the data, and then deploying it. And then you put the data back. So it's really a cycle there. It's one where you need to consider how you actually collect the data from the edge, how you store it at the speed that you can, and feed the data to the training side. So I believe when we work with Pure, we try to create this as a whole data pipeline and think about the entire data movement and the storage need that we look at here. >> So we're in the DevNet zone and I'm looking at machine learning with Python, ML libraries, (mumbles) Flow, Apache Spark, a lot of this data science type stuff. >> Yup. >> But increasingly, AI is a workload that's going mainstream. But what are the trends that you guys are seeing in terms of traditional IT's involvement? Is it still sort of AI off on an island? What are you seeing there? >> So I'll take a stab at it. So really, every major company and industry that we work with has AI initiatives. It's the core of the future for their business. What we're trying to do is partner with IT to get ahead of the large infrastructure demands that will come from those smaller, innovative projects that are in pilot mode, so that they are a partner to the business and the data scientists, rather than a laggard, which is sometimes the reputation that IT gets. 
We want the infrastructure to be solid, like a cloud-like experience for the data scientists, so they can worry more about the applications, the data, what it means to the business, and less about the infrastructure. >> Okay. And so you guys are trying to simplify that infrastructure, whether it's converged infrastructure, and other unifying approaches. Are you seeing the shift of that heavy lifting, of people now shifting resources to new workloads like AI? Maybe you could discuss what the trends are there? >> Yeah, absolutely. So I think AI started with more like a data science experiment. You see a couple of data scientists experimenting. Now it's really getting into the mainstream. More and more people are into that. And as, I apologize. >> Mike. >> Mike. >> Mike, can we restart that question? (all laughing) My deep apology. I need a GPU or something in my brain. I need to store that data better. >> You're on Fortnite. Go ahead. >> Yes, so as Mike has said earlier on, it's not just the data scientists. It's actually an IT challenge as well, and I think with Cisco, what we're trying to do with Pure here is, you know that Cisco thing, we're saying, "We're a bridge." We want to bridge the gap between the data scientists and IT, and make it not just AI as an experiment but AI at scale, at production level, and be ready to actually create real impact with the technology infrastructure that we can enable. >> Mike, talk about Pure's position. You guys have announced Pure in the cloud? >> Yes. >> You're seeing that software focus. Software is the key here. >> Absolutely. >> You're getting into a software model. AI and machine learning, all this we're talking about is software. Data is now available to be addressed and managed in that software life cycle. What is the role of software for you guys, with converged infrastructure at the center of all the Cisco announcements? You were out on stage today with converged infrastructure to the edge. 
>> Yes, so, if you look at the platform that we built, it's referred to as the Data Hub. The Data Hub has a very tight synergy with all the applications you're referring to: Spark, TensorFlow, et cetera, Caffe. So, we look at it as the next-generation analytics platform, and it has a layer on top of all those applications, because that's going to really make the integration possible for the data scientists so they can go quicker and faster. What we're trying to do underneath that is use the Data Hub so that no matter what the size, whether it's small data, large data, transaction-based or more bulk data warehouse type applications, the Data Hub and the FlashBlade solution underneath handle all of that very, very differently, and probably more optimized and easier than traditional legacy infrastructures. Even traditional, even Flash, from some of our competitors, because we built this purpose-built platform for that. Not trying to go backwards in terms of technology. >> So I want to put both you guys on the spot for a question. We've heard infrastructure as code going on many, many years since theCUBE started nine years ago. Infrastructure as code, now it's here. The network is programmable, the infrastructure is programmable, storage is programmable. When a customer or someone asks you, how is infrastructure, networks, and storage programmable and what do I do? I used to provision storage, I've got servers. I'm going to the cloud. What do I do? How do I become AI-enabled so that I can program the infrastructure? How do you guys answer that question? >> So a lot of that comes down to the infrastructure management layer. How do you actually use policy and the right infrastructure management to make the right configuration you want? And I think one thing for programmability is also flexibility. 
Instead of having just a fixed configuration, what we're doing with Pure here is really having that flexibility where you can pair Pure storage, different kinds of storage, with the different kinds of compute that we have. Whether we're talking about 2RU or 4RU, that kind of compute power is different and can mix with different storage, depending on what the customer use case is. So that flexibility drives the programmability, which is managed by the infrastructure management layer. And we're extending that. So Pure and Cisco's infrastructure management are actually tying together. It's really a single pane of glass within which we can actually manage both Pure and Cisco. That's the programmability that we're talking about. >> Your customers get Pure storage, end-to-end manageability? >> With the Cisco compute, it's a single pane of glass. >> Okay. >> So where do I buy? I want to get started. What do you got for me? (laughing) >> It's pretty simple. It's three basic components: Cisco compute as a platform for machine learning that's powered by NVIDIA GPUs; FlashBlade, which is the Data Hub and storage component; and then network connectivity from the number one network provider in the world, from Cisco. It's very simple. >> And it's a SKU, it's a solution? >> Yup, it's very simple. It's data-driven. It's not tied to a specific SKU. It's more flexible than that, so you have better optimization of the network. You don't buy a 1000 series X and then only use 50% of it. It's very customizable. >> Okay, so I can customize it for my, whatever, data science team or my IT workloads? >> Yes, and provision it for multi-purpose, the same way a service provider would if you're a large IT organization. >> Trend around breaking silos has been discussed heavily. Can you talk about multiple clouds, on-premise in cloud, and edge all coming together? 
How should companies think about their data architecture? Because silos are good for certain things, but to make multi-cloud work, and all this end-to-end and intent-based networking, and all the power of AI around the corner, you've got to have the data out there and it's got to be horizontally scalable, if you will. How do you break down those silos? What's your advice, is there a use case for an architecture? >> I think it's a classic example of how IT has evolved to not think in just silos and be multi-cloud. So what we advocate is to have a data platform that spans the entire community, whether it's development, test, engineering, production applications, and that runs holistically across the entire organization. That would include on-prem, it would include integration with the cloud, because most companies now require that. So you can have different levels of high availability, or lower cost if your data needs to be archived. So it's really building and thinking about the data as a platform across the company and not just silos for various applications. >> So replication never goes away. >> Never goes away. (laughing) >> It's going to be around for a long, long time. >> Dev Test never goes away either. >> Your thoughts on this? >> Yeah, so adding on top of that, we believe your infrastructure should go where the data goes. You want to follow where the data is, and that's exactly why we want to partner with Pure here, because we see a lot of the data is sitting today in the very important infrastructure which is built by Pure Storage, and we want to make sure that we're not just building a silo box sitting there where you have to pour the data in all the time, but actually connect our servers with Pure Storage in the most manageable way. And for IT, it's the same kind of management layer. You're not thinking about, oh, I have to manage all these silo boxes, or the shadow IT that some data scientists would have under their desk. 
That's the last thing you want. >> And the other thing that came up in the keynote today, which we've been saying on theCUBE, and all the experts reaffirm, is that moving data costs money. You've got latency costs and also just the cost to move traffic around. So moving compute to the edge, or moving compute to the data, has been a big, hot trend. How has the compute equation changed? Because I've got storage. I'm not just moving packets around. I'm storing it, I'm moving it around. How does that change the compute? Does that put more emphasis on the compute? >> It's definitely putting a lot more emphasis on compute. I think it's where you want compute to happen. You can pull all the data and have it happen in a central place. That's fine if that's the way you want to manage it. If you have already simplified the data, you want to put it in, that's the way. If you want to do it at the edge, near where the data source is, you can also do the cleaning there. So we want to make sure that, no matter how you want to manage it, we have the portfolio that can actually help you to manage that. >> And it's alternative processors. You mentioned NVIDIA. >> Exactly. >> You guys are the first to do a deal with them. >> And other ways, too. You've got to take advantage of technology like Kubernetes, as an example. So you can move the containers where they need to be and have policy managers for the compute requirements and also storage, so that you don't have contention or data integrity issues. So embracing those technologies in a multi-cloud world is very, very essential. >> Mike, I want to ask you a question around customer trends. What are you seeing as a pattern from a customer standpoint, as they prepare for AI, and start re-factoring some of their IT and/or resources? Is there a certain use case that they set up with Pure in terms of how they set up their storage? Is it different by customer? Is there a common trend that you see? >> Yeah, there are some commonalities. 
Take financial services, quant trading as an example. We have a number of customers that leverage our platform for that because it's very time-sensitive, high-availability data. So really, I think that the trend overall would be: step back, take a look at your data, and focus on, how can I correlate and organize that? And really get it ready so that whatever platform you use from a storage standpoint, you're thinking about all aspects of data and getting it in a format, in a form, where you can manage and catalog it, because that's kind of essential to the entire thing. >> It really highlights the key things that we've been saying in storage for a long time. High availability, integrity of the data, and now you've got application developers programming with data. With APIs, you're slinging APIs around like it's-- >> The way it should be. >> That's the way it should be. This is like Nirvana finally got here. How far along are we in the progress? How far? Are we early? Are we moving the needle? Where are the customers? >> You mean in terms of a partnership? >> Partnership, customer AI, in general. You guys, you've got storage, you've got networking and compute all working together. It has to be flexible, elastic, like the cloud. >> My feeling, Mike can correct me, or you can disagree with me. (laughing) I think right now, if we look at what all the analysts are saying, and what we're saying, I think most of the companies, more than 50% of companies, either have deployed AI/ML or are considering a plan of deploying that. But having said that, we do see that we're still at a relatively early stage, because of the challenges of making AI deployment at scale, where data scientists and IT are really working together. You need that level of security and that level of skill in infrastructure and software, and evolving DevOps. So my feeling is we're still at a relatively early stage. >> Yeah, I think we are in the early adopter phase. 
We've had customers for the last two years that have really been driving this. We work with about seven of the autonomous-driving car companies. But if you look at the data from Morgan Stanley and other analysts, there's about a $13 billion infrastructure spend that's required for AI over the next three years, from 2019-2021, so that is probably 6X, 7X what it is today, so we haven't quite hit that bell curve yet. >> So people are doing their homework right now, setting up their architecture? >> It's the leaders. It's leaders in the industry, not the mainstream. >> Got it. >> And everybody else is going to close that gap, and that's where you guys come in, is helping them do that. >> That's scale. (talking over one another) >> That's what we built this platform with Cisco on. Really, the FlashStack for AI is around scale, for tens and twenties of petabytes of data that will be required for these applications. >> And it's a targeted solution for AI with all the integration pieces with Cisco built in? >> Yes. >> Great, awesome. We'll keep track of it. It's exciting. >> Awesome. >> It's cliche to say future-proof, but in this case, it literally is preparing for the future. The bridge to the future, as the new saying at Cisco goes. >> Yes, absolutely. >> This is theCUBE coverage live in Barcelona. We'll be back with more live coverage after this short break. Thanks for watching. I'm John Furrier with Dave Vellante. Stay with us. (upbeat electronic music)
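As a footnote to this segment: the data pipeline Z described earlier, collect, store, analyze, train, deploy, and feed the results back, is easy to picture as a loop of stages. The sketch below is a toy illustration of that cycle only; the stage functions are stand-ins invented for the example, not anything from the Cisco, Pure, or NVIDIA stacks.

```python
def collect(sources):
    """Stand-in for edge ingest: gather raw samples from each source."""
    return [x for src in sources for x in src]

def store(samples, datastore):
    """Stand-in for the storage tier: persist samples for later stages."""
    datastore.extend(samples)
    return datastore

def analyze(datastore):
    """Stand-in for cleaning/analysis: drop records flagged as invalid."""
    return [x for x in datastore if x is not None]

def train(clean):
    """Stand-in for training: here just a mean 'model' of the data."""
    return sum(clean) / len(clean)

def deploy_and_feed_back(model, datastore):
    """Deploy the model and feed its output back in -- the loop part."""
    datastore.append(model)  # new data produced by the deployed model
    return model

datastore = []
raw = collect([[1.0, 2.0, None], [3.0, 4.0]])   # None = a bad reading
clean = analyze(store(raw, datastore))
model = train(clean)
deploy_and_feed_back(model, datastore)
print(model)  # -> 2.5
```

The point of the toy loop is the shape, not the math: each stage has its own storage and compute profile, which is why the interview keeps coming back to pairing the right compute with the right storage per stage.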
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mike | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dave Vallente | PERSON | 0.99+ |
Mike Bundy | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Barcelona | LOCATION | 0.99+ |
four hour | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
Zongjie Diao | PERSON | 0.99+ |
Morgan Stanley | ORGANIZATION | 0.99+ |
more than 50% | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
1000 series X | COMMERCIAL_ITEM | 0.99+ |
today | DATE | 0.99+ |
Pure | ORGANIZATION | 0.98+ |
7X | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Barcelona, Spain | LOCATION | 0.98+ |
both | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
one thing | QUANTITY | 0.98+ |
6X | QUANTITY | 0.98+ |
nine years ago | DATE | 0.98+ |
NVIDEA | ORGANIZATION | 0.97+ |
Global Cisco Alliance | ORGANIZATION | 0.97+ |
Flash | TITLE | 0.97+ |
two guests | QUANTITY | 0.96+ |
Appache Spark | TITLE | 0.96+ |
2019-2021 | DATE | 0.96+ |
Nirvana | ORGANIZATION | 0.96+ |
Flow | TITLE | 0.93+ |
$13 billion | QUANTITY | 0.93+ |
FlashBlade | COMMERCIAL_ITEM | 0.91+ |
Fortnite | TITLE | 0.91+ |
Z | PERSON | 0.9+ |
Data Hub | TITLE | 0.9+ |
Europe | LOCATION | 0.9+ |
Spark | TITLE | 0.89+ |
three basic components | QUANTITY | 0.88+ |
ML Library | TITLE | 0.88+ |
tens and twenties of petabytes of data | QUANTITY | 0.88+ |
about seven of the automated car-driving companies | QUANTITY | 0.84+ |
last two years | DATE | 0.83+ |
Cisco Live 2019 | EVENT | 0.82+ |
two hour | QUANTITY | 0.81+ |
Cisco | EVENT | 0.8+ |
Flashstack | TITLE | 0.79+ |
single pane of | QUANTITY | 0.78+ |
single pane of glass | QUANTITY | 0.77+ |
Dev Test | TITLE | 0.77+ |
about | QUANTITY | 0.74+ |
Cisco Pure | ORGANIZATION | 0.73+ |
next three years | DATE | 0.72+ |
Kubernetes | TITLE | 0.69+ |
FlashBlade | TITLE | 0.65+ |
DevNet | TITLE | 0.65+ |
Sunil Potti, Nutanix | Nutanix .NEXT EU 2018
>> Live from London, England, it's theCUBE covering the .NEXT Conference Europe 2018. Brought to you by Nutanix. >> Welcome back to London, England. This is theCUBE's coverage of Nutanix .NEXT 2018. 3,500 people gathered to listen to Sunil Potti for the keynote this morning. >> Thanks, Stu. >> Sunil's the chief product and development officer with Nutanix. Glad we moved things around, Sunil, 'cause we know events, lots of things move, keynotes sometimes go long, but happy to have you back on the program. >> No, likewise, anytime. >> All right, so, I've been to a few of these and one of the things I hope you walk us through a little bit. So Nutanix, simplicity is always at its core. I have to say, it's taken me two or three times hearing the new, the broad portfolio, the spectrum, and then I've got the core, I've got essentials, I've got enterprise. I think it's starting to sink in for me, but it'll probably take people a little bit of time, so maybe let's start there. >> I mean, I think one of the biggest things that happened with Nutanix is that we went from a few products just twelve months ago to over ten products within the span of a year. And both internally as well as externally, while the product value is obvious, it's more that the consumption within our own sales teams, channel teams, as well as our customer base, needed to be codified into something that could be a journey of adoption. So we looked at it customer-inwards, as a journey that a customer goes through in adopting services in a world of multi-cloud, and before that, before you get to multi-cloud, you have to build a private cloud that is genuine, as we know. 
And before you do that, you have to re-platform your data center using HCI. So really, if you work backwards from that, you start with core, which is your HCI platform for modernizing your data center, and then you expand to a cloud platform for every workload, and then you can be in a position to actually leverage your multi-cloud services. >> Yeah, and I like that. I mean, start with the customer first. And I mean, the challenge is, you know, every customer is a little bit different. You know, one of the biggest critiques is, you know, you say, okay, what is a private cloud? Because they tend to be snowflakes. Every one's a little bit different, and we have a little bit of trouble understanding where it is, or did it melt all over the floor. So give us a little bit of insight into that and help us through those stages, the dirty, the crawl-walk-run. >> Yeah, I think the biggest thing everyone has to understand here is that these are not discrete moving parts. Core is obviously your starting point of leveraging compute and storage in a software-defined way. The way that Amazon launched with EC2 and S3, right. But then, every service that you consume on top of public cloud still leverages compute and storage. So in that sense, essentials is a bunch of additional services such as self-service, files, and so forth, but you still need the core to build on essentials, to build a private cloud. And then from there onwards, you can choose other services, but you're still leveraging the core constructs. So in that sense, I think, both architecturally as well as from a product perspective, as well as from a packaging perspective, that's why they're synergistic in the way that things have rolled out. >> Okay, so looking at that portfolio. A lot of the customers I work with now, they don't start out in a data center, they've already moved past that, right? 
So they are leveraging a partner, the public cloud, they might not even be running virtual machines at all anymore. How does that fit into your portfolio? >> Yeah, I mean, increasingly what we are realizing, and you know, we've done this over the last couple of years, is, for example, with Calm, you can use Calm to manage just your public clouds, without even managing your private cloud on Nutanix. Increasingly, with every new service that we're building out, we're doing it so that people don't have to pay the strategy tax of the stack. It needs to be adopted out of a desire of I want to do it, versus I need to do it. So, with Frame, you can get going on AWS or Azure in any region in an instant. You don't need to use any Nutanix software. Same thing with Epoch, with Beam. So I think as a company, what we're essentially all about is saying, let us give you a cloud-service-like experience, maybe workload-centric, if it is desktops and so forth. Or if you are going to be at some point reaching a stage where you have to re-platform your data center to look like a public cloud, then we have the core, the cloud platform itself, that'll help you get there as well. >> So, looking at re-platforming that data center. If I were to do that now for a customer, I wouldn't be looking at virtual machines, storage, networking, I'd be looking at containers or serverless or, you know, the new stuff. Again, what is Nutanix's answer to that? >> Yeah, I mean, I think what we've found is that there's quite a bit of adoption, obviously, of cloud-native apps, but when it comes to mainstream budget allocation, it's still a relative silo in terms of mainstream enterprise consumption. So what we're finding is that if you could leverage your well-known cloud platform to not create another silo for Kubernetes, not create another silo for edge or whatever the new use cases are, but treat them as an extension of your core platform. 
At least from a manageability perspective and an operations perspective, then the chances of you or your enterprise adopting these new technologies become higher. So, for example, in Calm, we have this pseudonym called Kalm with a K, right, which essentially allows Kubernetes containers to run natively inside a Calm blueprint, but coexist with your databases inside a VM, because that's how we see the next-generation enterprise apps morphing, right. Nobody's going to rewrite my whole app. They're going to maybe start with the web tier and the app tier as containers, but my database tier, my message queue tier, is going to be VMs. So, how Calm helps you abstract the combination of containers and VMs into a common blueprint is what we believe is the first step towards what we call a hybrid app. And when you get to hybrid apps, that's when you can actually then get, eventually, to cloud-native apps. >> You know, one of the questions I was hearing from customers is, they were looking for some clarity as to the hybrid environments. You know, the last couple of shows, there was a big presence of Google at the show, and while I didn't see Google here on the show floor, I know there was an update from, kind of, GCP and AHV. Is Google less strategic now, or is it just taking a while to, you know, incubate? How do you feel about that? >> So the way that you'll see us evolve as we navigate the cloud partnerships is to actually find the sweet spot of product-market fit, with respect to where the product is ready and where the market really wants that. And some of it is going to be us doing, you know, a partnership by intent first, and then as we execute, we try to land it with honest products. 
So, where we started off with Google, as you guys know, is to actually leverage the cloud platform side, co-located with Google data centers, and then what we've evolved to is the fact that our data centers can quote-unquote integrate with their data centers to have a common management interface, a common security interface and all, but we can still run as co-located ones. Where the real integration that has taken some time for us to get to is the fact that, look, in addition to Calm, in addition to GKE kind of things, rather than run as some kind of power-sucking alien on top of some Google hardware, true integration comes with us actually innovating on a stack that lands AHV natively inside GCP, and that's where nested virtualization comes in, and we had to take that crawl-walk-run approach there because we didn't want to expose to public customers what we didn't consume internally. So what we have with the new offering that now is called Test Drive is essentially that. We've proven that AHV can run in nested virtualization mode on GCP natively, you can co-locate with the rest of GCP services, and we use it currently in our R&D environment, running thousands of nodes for pretty much everyday testing, right. And so, we now expose that as an environment for our end customers to actually test-drive Nutanix as a fully compatible stack, on purpose, so you have Prism Central, the full CDP stack and so forth, and then as that gets hardened over a period of time, we expose that into production and so forth. >> So there's one category of cloud I haven't heard yet, and that's the service providers. So Nutanix used to be a really good partner for service providers, you know, enabling them to deliver services locally to local geography, stuff like that, so what's the sense of Nutanix regarding these service providers currently? >> Yeah, I think that frankly, that's probably a 2019 material change to our roadmap.
The analogy that I have is that when we first launched our operating system, we first had to do it with an opinionated stack using Supermicro. Most importantly, from an end-customer perspective, they got a single throat to choke, but also equally importantly, it kept the engineering team honest, because we knew what it means to own the pager for the full stack. Similarly, when we launched Xi, we needed to make sure we knew what SREs do, right. That scale, and so that's why we started with our version of SMC on, you know, as you guys know, with Digital Realty as well as partners like Xterra. But very soon what you're going to see is, once we have cleared that opinionated stack, software-wise we're able to leverage it, just like we went from Supermicro to Dell and Lenovo and seven other partners, you're going to see us create a Xi partner network. Which essentially allows us to federate Xi as an OS into the service providers. And that's more a 2019-plus timeframe. >> Yeah, speaking along those lines, the keynote this morning, Karbon with a K, talked about Kubernetes. Talk about that, that's the substrate for Nutanix's push toward cloud native, so-- >> Yeah, I mean, I think you're going to hear that in the day two keynote as well, is basically, customers want, as I said, an operating system for containers that is based on well-known APIs like kubectl from Kubernetes and all that, but at the same time, it is curated to support all of the enterprise services such as volumes, storage, security policies from Flow, and you know, the operational policies of containers shouldn't be any different from VMs. So think about it as, the developers still see a Kubernetes-like interface, they can still port their containers from Nutanix to any other environment, but from an IT ops side, it looks like Kubernetes, containers, and VMs are co-residing as a first-class option.
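The pattern described above, containerized web and app tiers coexisting with VM-based database and queue tiers under one blueprint, can be sketched in a few lines. This is an illustrative toy model only; the class names and the deploy logic are invented for the sketch and are not Calm's or Karbon's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    # One tier of a hybrid app: runs either as a container or as a VM.
    name: str
    runtime: str  # "container" or "vm"

@dataclass
class Blueprint:
    # A single blueprint mixing container and VM tiers,
    # so one deploy call covers the whole hybrid app.
    services: list = field(default_factory=list)

    def add(self, name, runtime):
        self.services.append(Service(name, runtime))
        return self

    def deploy(self):
        # Dispatch each tier to the matching scheduler.
        return {s.name: ("kubernetes" if s.runtime == "container" else "hypervisor")
                for s in self.services}

bp = (Blueprint()
      .add("web", "container").add("app", "container")
      .add("db", "vm").add("queue", "vm"))
placement = bp.deploy()
# web and app land on the container scheduler; db and queue land on VMs
```

The point of the sketch is the single abstraction: from the operator's side there is one blueprint and one deploy operation, even though two very different runtimes sit underneath.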
>> Yeah, I feel like there had been a misperception about what Kubernetes is and how it fits, you know. My take has been, it's part of the platform, so there's not going to be a battle for a distribution of Kubernetes, because I'm going to choose a platform and it should have Kubernetes and it should be compatible with other Kubernetes out there. >> Yeah, I mean, it's going to be like a feature of Linux. See, in that sense, there's lots of Linux distros, but the core capabilities of Linux are the same, right. So in that sense, Kubernetes is going to become a feature of Linux, or the cloud operating system, so that those least-common-denominator features are going to be there in every cloud OS. >> Alright, so Kubernetes, not differentiating, just expanding the platform. >> Enabling. >> Enabling piece. So, tell us, what is differentiating today? You know, what are the areas where Nutanix stands alone as different from some of the other platform providers of today? >> I think that, I mean obviously, whatever we do, we are trying to do it thoughtfully, with operational, you know, simplicity as a first-class citizen. Like, how many new screens do we add when we use new features? A simple example of that is when we did micro-segmentation. The point was to make sure you could go from choosing ten VMs to grouping them and putting a policy on them as quickly as possible, with as little friction as possible in adopting a new product. So, we didn't have to "virtualize" the network, you didn't need to have VXLANs to actually micro-segment, just like in public cloud, right. So I think we're taking the same thing into services up the stack. A good one to talk about is Era. Which is essentially looking at databases as the next beast of operational complexity. Especially Oracle RAC. And it's easier to manage Postgres and so forth, but what if you could simplify not just the open source management, but also the database side of it?
So I would say that Era would be a good example of a strategic value proposition, of what it means to create a one-plus-one-equals-three value proposition for database administrators. Just like we did that for virtualization administrators, we're now going after DBAs. >> Alright, well, Sunil, thank you so much. Wish we had another hour to go through it, but give you the final word, as people leave London this year, you know, what should they be taking away when they think about Nutanix? >> I think the platform continues to evolve, but the key takeaway is that it's a platform company. Not a product company. And with that comes the burden, as well as the promise, of being an iconic company for the next, hopefully, decade or so. All right, thanks a lot. >> Well, it's been a pleasure to watch the continued progress, always a pleasure to chat. >> Thank you. >> All right, for Joep Piscaer, I'm Stu Miniman, back with more coverage here from Nutanix .NEXT 2018 in London, England. Thanks for watching theCUBE. (light electronic music)
Dheeraj Pandey, Nutanix | Nutanix .NEXT EU 2018
>> Live from London, England, it's theCUBE. Covering .NEXT Conference Europe, 2018. Brought to you by Nutanix. >> Welcome back, I'm Stu Miniman, my cohost Joep Piscaer, and you're watching theCUBE here at Nutanix .NEXT, London, 2018. Happy to welcome back to the program the co-founder, CEO, and chairman of Nutanix, Dheeraj Pandey. Dheeraj, thanks so much. Congratulations on 3500 people here at the third annual European show, and thanks so much for having theCUBE. >> Thank you, my pleasure. >> All right. So, Dheeraj, first of all, you got a lot going on. Big company event here, last night you announced the Q1 2019 earnings. I guess, step back for a second. Nutanix is now, nine years since the founding, you've been public now for a little while, you got to be feeling good. The company's reached a certain size, very respected in the marketplace. So how are you and the team feeling? >> Yeah, well, I tell people that it's actually fun to be a public company. And obviously there is a cost to being a public company, because you're on a quarterly treadmill, in some sense. But Wall Street also keeps you honest. Just like Main Street keeps you honest on quality of product and customer service, Wall Street keeps you honest on spend and what does it really mean to grow at scale. So I like the fact that there is two good streets that are keeping the company honest. And it's really fun to think about capital allocation, one of the big things as you grow. I mean, you're going to spend more than a billion dollars this year alone. How do you allocate capital wisely is something that I think a lot about in (mumbles). >> Yeah. So, at this show, you kind of change some of the positioning of the portfolio. It's the Core, Essentials, and Enterprise, and right, that asset allocation, when I look at Essential, Xi Cloud, there's all these different pieces, some of them through acquisition, some of them created internally. 
You need to be careful that you don't over-commit, but when do you decide to kill stuff or keep it going? So you've got a lot of plates to spin now, a lot more than you did a year or two ago. >> Yeah, absolutely, and it's not just product development. It's also marketing and sales and G&A. I mean, there's other departments we need to think hard about. Like, how do you create brand awareness for these new things? How do you do demand generation? How do you have a specialty sales force? All those things have to be considered, so, nine years, it's been a journey, but it still looks like it's nothing. And we're still a very small company, and we need to think hard about the next five years, in some sense. >> Yeah. So, one of the metrics you gave Wall Street to be able to look at is, what percentage of customers are using more than just the Core? So the Essentials or the Enterprise. And if I got it right, it's up to 19% from 15% the quarter before. I wonder, the packaging, how much of that is for Wall Street? Somebody cynically might look and be like, hey, is the Core market slowing down? And therefore you need to expand. We've all seen public companies that need to go into adjacencies, and shouldn't you stick to your knitting? You've got a great solid product with leadership in the marketplace. >> Yep, absolutely. Also, look, we are not bundling them in SKUs, so we cannot force customers to actually buy them. We're not doing financial engineering of dollars, because these are not SKUs or bundles. This is a journey which is mostly advisory, in some sense. This is how you should start, this is how you should go, and this is advisory for our sellers and our buyers and our channel people. Everybody needs to say, look, have the customer go through the journey. If we had to do what you just said, we probably would've bundled them in SKUs and then allocated capital to one or the other. I think, to your other comment about just sticking to the core, Juniper stuck to the core.
And many companies out there which just stayed as a single-box company, they stayed at the core. And eventually you realize the market has moved faster than your core itself. So there's this business school thinking, they call it the Icarus Effect. The Icarus Effect is all about, I'm so good at what I do that I can fly to the sun and nothing will happen. But you don't realize that Icarus, the wings were actually pasted using wax. And you go to the sun, and the sun actually melts the wax. So companies like SGI and Sun, Norca, many companies just stuck to one thing. And they couldn't evolve, actually. >> Obviously you're not sticking to the core alone, right? You're expanding the portfolio, I mean, you're not just an infrastructure company anymore. You do so much on top of the infrastructure on-prem. You have so many SaaS services, so how do you manage the portfolio in terms of the customer journey? Because there's so much to tell to a customer. How do you sell it? How do you convince a customer to go from Core to Essentials to Enterprise? >> The most important thing is leverage. Is Essentials going to leverage Core, and is the Enterprise going to leverage Essentials and Core itself? Case in point, Files is completely built on top of Core. So every time somebody's using Files, they're also using Core. If you think about Flow, it uses AHV underneath. Frame is another case in point. When it's going to deliver desktops, it's going to use Files, because every desktop needs a filer as well. And then when Frame delivers desktops on-prem, it's going to use all the Core. So the important thing is how they don't become disparate things, like they're all going in their own direction; is there a level of progressiveness where you say, well, if you're using the Enterprise features, a lot of them actually go in and drag in the Core as well as Essentials.
So how do we build that progressive experience for the customer, where each of these layers is actually being utilized, is the important piece. >> Dheeraj, so, we're talking a lot about the expansion beyond the Core. But there was a pretty significant activity that your team did on Core itself. So the first time I heard about it, they basically said, we're doing an entire file system rewrite. Think of it almost as AoS 2.0. Now, from a product name, I believe it's 5.10, so I might have trouble remembering which release it was, but talk about what was involved in that. Obviously a lot has changed in the nine years since you created it, so.
And for us, what's important is how we use a single core base for everything. So architecture matters. I was arguing yesterday in the earnings call that good enough infrastructure is an oxymoron. You need to get core right before you can go and try to live the other layers of the Maslow's hierarchy of needs, actually. And that's why we went back and thought about, as the workloads were growing and increasing, and we had mission-critical stuff in memory databases, what do we need to really do about the way we lay out the data and lay out the metadata? So as you know, metadata is at the core of anything in systems, and especially storage systems. And the metadata of our erstwhile system was actually very completely distributed. And then we realized that some things can be local, and some things can be distributed, and that's better scale. Again, going back to this understanding of what things can be represented locally for a certain disk versus what things need to be global so that you can go and say, okay, where is this data really located? What drive? But once you go to the drive, you can actually get more metadata. So, again, you're getting more progressive scanning. So at the end of the day, our engineers are constantly thinking about performance and scalability, and how do you change the wings of the plane at 35,000 feet? It's a very big challenge. >> So that's one of the issues, right? So you're still focusing on your own infrastructure layer, right? But many customers do already have presence in a different hardware stack, or the public cloud, or some service provider. So not everything runs on your platform. So how are you planning to deliver the services ensemble to customers that don't necessarily run on AoS? >> So that's the multi-cloud journey, which is basically the enterprise journey of our customers. I said this yesterday in the earnings call as well, that all our services should be available both on-prem and off-prem. 
This idea of a VPC, that is multi-location, is what hybrid cloud is all about. So how do you get a virtual private cloud to really span multiple clouds in multiple locations? I think you saw from the demos today of how you're really running all of AoS on top of GCP virtual infrastructure. And in the course of the coming year or two, you'll see us do the same thing, BEM at Amazon, BEM at Azure. Because they deliver servers in their data centers and that's leverage for them because they've already gone and spent so much money on data centers that it's easy for them to deliver a physical server that our software can run on top of. And if people are not using AoS, they'll still want to use things like Frame and Beam and COM and other such things like that. >> Yep, Dheeraj, what are you hearing from customers and how do you think of hybrid, as it were? You know, a lot of attention gets played to things like Azure Stack from Microsoft from VMware on AWS, I know you've got some view points on this. >> Yeah, no, in fact, so if you go back five years, hyperconvergence had become a buzz word maybe three, four years ago. And there were a lot of companies doing hyperconvergence. And only one or two have survived and it's us and VMware, basically have survived that. Everybody else has a checkbox because the customers said well, what about that? Will we have a check box? But, it's really about operating system sort of hyperconvergence. And it has to be honest. And it has to really blur the lines between compute and storage and networking and security. I think hybrid needs to be honest and one of the killer things that hybrid needs is blurring the lines between networks, blurring the lines on storage so you can do one click replication and one click fail over. So a lot of those things have required a lot of innovations from us. That's why we were delayed in Xi. We didn't want to just put up data centers and just like that. 
I mean, if you go back in time to many hardware companies were putting open stack data centers and calling it their new cloud in response to Amazon. And VMware tried vCloud Air. And they had a charter to go spend money. They weren't going to spend a ton of money on hardware. Without even knowing that the cloud is not about data centers. Cloud is about an experience. It's about eCommerce and computing coming together. And you have to be passionate about a catalog. You know, the marketplace, the catalog so that people can really go and consume things from a catalog. I think that's what our experience has been that. Look, if you don't think of it like a retail giant or retail customer, which is what Amazon has done such a good job of. You know, they've thought about computing as an eCommerce problem as opposed to as a compute storage networking problem itself. And those are the lessons that we have learned about hybrid just as much >> Alright, you did a nice job on the keynote, laying out that Nutanix, like your customers, you're going through a journey. The crawl-walk-run, if you will. We got a tease in the keynote this morning about something cloud native. Where you're going. Final question for you is as you look at the company, you said it's still young, where are your customers going, where are some of the things they need to work on, and that Nutanix will mature with them as we look to move forward? >> Well, I mean, look. I think everybody knows where customers are headed. They're questioning who fulfills the promise because the requirements are all the same. They all want to go and use next generation infrastructure, they want to modernize their data centers, the infrastructure. They want to use some things that they want to own, some things they want to rent. The question is, where is the best experience possible? 
And by that, I mean not just the systems experience of hybrid clouds, but also customer service, and having an ever-growing catalog, and being able to deliver things for developers and DevOps. And technology will come and go. Two, three years ago, Puppet and Chef were the hottest things around; now today, it's Kubernetes. Tomorrow, it's going to be something else. It's the fact that what you see is what you do. And what you do is what you say. In our business, it's about integrity. I was arguing about this yesterday in the earnings call as well, that building business software is a little bit easier. I shouldn't trivialize it as much, but if people use business software, they can work around weaknesses of business software. But if you are in the business of infrastructure, applications cannot work around weaknesses of infrastructure. So integrity matters a lot in our space, actually, and that is about great products, great customer service, fast innovation, recovering fast, being resilient. Those are the things that we focus a lot on. >> Alright, well, Dheeraj, thanks again, always. We didn't even get to talk about the heart part, the fourth H that you've been talking about, for the honest, humble, and hungry. So, thank you. Congratulations to the team, and we always appreciate having you on our program. >> My pleasure. >> Alright, for Joep Piscaer, I'm Stu Miniman. Stay with us. Two days live of wall-to-wall coverage. Thanks for watching theCUBE. (light music) >> I have been in the software and technology industry for over 12 years now. And so I've had the opportunity as a marketer.
Vishwam Annam & Philip Bernick | Dell Boomi World 2018
>> Live from Las Vegas, it's theCUBE. Covering Boomi World 2018, brought to you by Dell Boomi. >> Welcome back to theCUBE, I'm Lisa Martin, live at Boomi World 2018 at The Encore in Las Vegas. Been here all day, had a lot of great chats. We're excited to welcome to theCUBE for the first time a couple of gents from Hathority, an implementation partner of Dell Boomi: Philip Bernick, PhD, Principal and Human-Centered Technologist, aka Technology Wonk. >> I go by both. >> It does say on your card, I think that's fantastic. And Vishwam Annam, MBA and principal technology architect at Hathority. Guys, welcome to theCUBE. >> Yes, thank you. >> Thank you for having us, Lisa. >> So Hathority has been an implementation partner with Dell Boomi for several years now, congratulations yesterday on winning the Innovation Partner of the Year. Philip, you had an opportunity to talk yesterday at the partner summit with CTO Michael Morton, talk to us a little bit about that and about this Innovation Partner of the Year award, that's a big title. >> It is, and we're really excited to be able to do really interesting things with Boomi. It's more than just an integration platform, it really lets us do a lot of things with devices. IoT is coming to the mainstream because now we have infrastructure that will support it. It's a lot of data, it needs a big, fat pipe. We need gigabit networks in order to move it all around, to get it to the people who need to make decisions or to get it to systems that are making decisions for us. The Dell Boomi Atom lets us do that, and we've got it running on little tiny devices like Raspberry Pis, and we can put it on other edge devices and routers, so we've done some microservices for cities that are interested in improving their smartness. >> Excellent. >> So yeah, we're excited. >> Vishwam, tell us about, for those of our viewers who haven't heard of Hathority, tell us a little bit about what you guys do, who you are, where you're located.
Sure, so we're a data integration company, so we work with Dell Boomi in automating a lot of the data integration practices. A lot of our customers, they're all across the world, and they're serving their different (mumbles), such as airlines and healthcare and smart cities, and some are, like, you know, in the gaming industry. So what we are doing is we are automating all of their workflows and connecting all of their systems in one place, so that's where we are operating. We're based in the greater Phoenix area, and our employees, some are here in the U.S., some are in India, some are in the U.K., so based on what the customer's needs are, like with Dell Boomi, our consultants would work there, so our company is 35 strong so far. >> So about three or four years you've been in business with Dell Boomi. A number of things came out this morning, I was up to hear the numbers and statistics during the general session, and Chris McNabb, CEO, talked about their adding five new customers every single day. They also were, I was reading this over the weekend, fifth year in a row a strong leader in the Gartner Magic Quadrant for iPaaS, but they've come out today and said we are redefining the i in iPaaS. This is more than integration, it's more than integrating applications, you've got to integrate data, new sources, existing sources, you've got to integrate people and processes and trading networks, with this new reimagination of the i to the intelligence. Philip, I'm curious, what does that signify to you about your partnership with Dell Boomi, and what opportunities are you excited that this is going to open up for you? >> Well, it says to me that they're excited about the same kinds of things that we're excited about, so one of the things that we demonstrated, we have customers who are interested in lots of different technologies, yesterday they talked about how three years ago IoT was the eyeroll, right, don't get a headache. This year it's Blockchain.
But one of the demos we brought to Boomi World is a demo where we actually use Dell Boomi to integrate with Hyperledger, a Blockchain application, and on top of that we used Flow to produce the front end, and so we can integrate across a variety of platforms, and now we integrate into the Blockchain, and our customers want these kinds of things. The Blockchain is interesting because it's immutable, it's auditable, and it's validated by all of the participants in a particular set of nodes in the Blockchain, so, you know, it's an exciting technology. It's exciting, not because of the tokenization, things like Bitcoin, but because it's a database that you can share, a ledger that we can share. >> Because one of the challenges that a lot of our customers run into is managing the data integrity when somebody sends the data, how reliable it is, and whether there's any place in the middle where somebody's monitoring the data, so those are the challenges that Blockchain would solve in guaranteeing the data delivery and the quality of it. So those are the kinds of I's that he was mentioning, you know, as part of integration, innovation, and more of a, you know, transformation. >> We're really transforming. >> The data transformation in the digital world these days. >> So Blockchain, I often hear companies that might be integration companies that talk a lot about Blockchain, and I kind of sit back and go, I don't understand what your story is there. Talk to us about, 'cause it's a, you know, crypto, Blockchain, huge buzzwords, talk to us exactly about what you guys do and what Dell Boomi is doing, I think they announced support for Hyperledger Fabric as well as Ethereum, but-- >> Right. >> Help unpack that myth around Blockchain and what integration's role is in it.
>> A lot of the confusion around Blockchain comes from things like Bitcoin. The interesting thing about Bitcoin is that it was the first Blockchain, and it's built around this idea of a token, the Bitcoin, right? And so what this ledger is keeping track of are these Bitcoin, but you can keep track of any sort of data on a Blockchain. You can contribute data of any sort to, not the Bitcoin Blockchain, but Ethereum, for example; we can include software, we can include other sorts of data. You can include a healthcare record that is your healthcare record, that you share only with individuals with whom you share part of your private key, right, but you own it and it's yours and it's always yours and you control it. But it's validated by all of the people who are participating in producing that Blockchain, so it's decentralized, but it's immutable and it's auditable, so it guarantees integrity, because unless all of the participants agree that a transaction took place, it didn't. So we ensure data integrity through the Blockchain. That's the interesting thing about it, for us. >> That's a major part for integration companies, because of a lot of the technologies that we hear about. Solace is one of the messaging queuing systems that they presented, so they're guaranteeing the delivery and, at the same time, reliable message transmission, streaming the data, and it's faster, reliable, and manages the full data usage. >> Here's a great use case: today is voting day. Many polling places no longer have paper ballots, so you cast your vote but you have no way to actually see the vote that you cast. If it were on a Blockchain, you could inspect your vote, but nobody else could know how you voted. You could ensure that your vote was entered into the Blockchain and counted in the way that you wanted it to be. >> That's a great example, and relatable, so thanks for sharing that.
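The immutability and tamper-evidence described here can be illustrated with a toy hash-chained ledger. This is only a sketch of the general mechanism, not Hyperledger or Ethereum code, and the record fields are invented for the voting example.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash,
    so altering any earlier block changes every later hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev_hash,
                  "hash": block_hash(record, prev_hash)})

def verify_chain(chain: list) -> bool:
    """Every participant can re-run this check independently;
    a ledger is accepted only if all blocks link up correctly."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev"] != prev_hash:
            return False
        if block["hash"] != block_hash(block["record"], prev_hash):
            return False
        prev_hash = block["hash"]
    return True

# Build a tiny ledger and confirm that tampering is detectable.
ledger = []
append_block(ledger, {"voter": "a1f3", "ballot": "yes"})
append_block(ledger, {"voter": "9c2e", "ballot": "no"})

# A copy with one ballot rewritten no longer verifies.
tampered = [dict(b, record=dict(b["record"])) for b in ledger]
tampered[0]["record"]["ballot"] = "no"
```

In a real Blockchain the validation step is run by many independent nodes, which is what makes the shared ledger trustworthy without a monitor in the middle.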
So guys, Dell Boomi has, I think they said this morning, Chris McNabb, over 350 partners, and you guys are one of them. They have a broad ecosystem: embedded partners, implementation partners, GSIs. Talk to us about your partnership and how, as Boomi says, we want to be the transformation partner, and it is all about transformation, right? Especially in an enterprise that wasn't born in the cloud. It can't survive without it, as the customer expectation drives: I want to be able to buy something from your physical store, maybe a partner store, online, Amazon, Zappos, whatnot, and I expect as a customer to have a seamless experience. That's hard to do for a company that's maybe 20, 30 years old to transform; I'm thinking of omni-channel retailers as the example. How is your integration, pun intended, with Dell Boomi really helping customers transform their digital, IT, security, workforce? What goes through with that opportunity to transform? >> You know, the relationship between Dell Boomi and its partners is really synergistic. I mean, they provide a lot of support. There's really excellent training, there's excellent communication. There's marketing support, we share on projects in a variety of ways, we do jump starts. So we help teach people how to use Boomi, in addition to Boomi folks teaching us how to use the new tools. There's a great community for providing feedback, for getting resources if there's something that we need to do that we don't know how to do. There's a huge community that shares; we all share connectors, right? If we're building an integration and a connector doesn't exist and we create a new connector, not the configuration of the connector itself, we share it. So that collaborative approach to doing business is really important to us, and it reflects our company's ethos, as we hope it also reflects Dell Boomi's ethos.
>> We've been working in Boomi since 2012, so over the years, even though we've been certified partners since 2015, we have been contributing to various channels, like the support or, like, the community channel, and contributing to the release planning as well, because we are the first line of defense for the customers; we know what the customers are expecting. So say they got Salesforce to implement. We, as a system integrator, come in and see what the data points are for the Salesforce. And say, like, user data: they want to build their contacts in there, or any activities or sales data. So there are multiple systems that are feeding into Salesforce in this case. So we are the ones who are contributing back to Dell Boomi: okay, these are the features that we could consider. So as Salesforce evolved, just like Boomi, they launched different product lines as well, so in Boomi there is a different connector for Salesforce and Service Cloud and multiple layers in that, so those are the unique cases that we are contributing to Dell, and obviously they take the feedback from partners like us, where they see it, as they work towards delivering on this. So one use case that we are working on with some of our customers who have innovated: we have been asking Dell to build it, like, you know, and they were able to deliver it. They want some reporting on it, so you transmit the data from one system to the other, and they wanted to see, okay, how this data system was the source and that system was the destination, and how this data was transmitted. So Boomi gave them real-time visibility into those. So those are some kind of partnering opportunities, all the way from the customer to the product, so we are happy to be in the middle and contributing our part of it.
>> That's one of the things that I've heard a lot today, that Boomi is listening. One of the great examples of that on stage this morning was Chris McNabb talking about the Dell Boomi employee onboarding solution. They actually did an internal survey earlier this year and found, whoa, this is really not an optimal process, and in implementing an onboarding solution to make that more streamlined, to obviously, you know, you hire someone who's brilliant, you want to be able to get them up and running and innovating as fast as possible. I like that they shared the feedback they got from their own employees and created a solution that they're now being able to deliver to the market. >> And there was another piece to that that was really interesting, which is that they utilized their partner network in order to build the solution, right? They didn't build all of it in house. >> You're right, they did talk about that. >> They reach out to partners, they work with partners in a variety of ways, and we really, really appreciate that. >> Yeah, that listening, that synergy that you've both talked about was really apparent. So when we look at certain business initiatives, like onboarding or Customer 360 or e-commerce, any favorite joint customer example that you've helped to integrate, that has approached one of those daunting business initiatives and worked with Hathority, and you're laughing, to really transform? >> They're all like that. >> Really interesting, yeah. Do you want to talk about it here? >> Give me one of your favorite examples. >> Share, well, share.
>> Okay, so with some of our customers, and especially with some of our enterprise-scale ones, there are a lot of systems at stake for them, because, you know, they want to have the digital transformation journey, so the major thing Dell Boomi contributes to is connecting all of the systems, giving them visibility. So along with the point-to-point integrations, they also pull in real-time integration capability. So, like, with this case where the customer goes into a retail store and, say, they want to do something at the point-of-sale transaction, they want to purchase something, so there you have the credit card transaction. I mean, those need to be encrypted; I mean, we cannot wait for 10 minutes to get the data, so that's where, you know, Dell Boomi is scalable and it's robust, in the sense that the response time is pretty quick. So it's on a real-time basis. So a lot of these cases, you know, with Boomi we are able to deliver. You know, on the integration side, the API side, and now with the MDM Hub, which is a master data hub, a new product from them within the last two years. We have been working with our customers implementing the master data hub as well as ManyWho, which is Dell Boomi Flow, which is amazing. Some of our customers, you know, with the APIs, it's like, can you see the data? But with Flow, you can visualize: these are the exact UIs that you are seeing, how your data is getting in on the back end, and then you can throw it out. Because for these enterprise customers, especially on the business side, if they're working with something, they want to try it out, but, you know, they don't want to learn, you know, programming to do that, so that's where Flow is already helping; we are already seeing the value of it with our customers.
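The low-code idea here, where business steps are declared as a list and the integrations behind each step stay hidden from the business user, can be sketched in a few lines. This is only an illustration of the general pattern; the step and field names are hypothetical, and this is not Boomi Flow's actual model or API.

```python
# Each step is a plain function; the "flow" is just an ordered list of them.
# A business user sees the step list, never the point-to-point integrations.

def fetch_order(ctx):
    # Stand-in for a call to an order system.
    ctx["order"] = {"id": ctx["order_id"], "total": 42.50}
    return ctx

def check_inventory(ctx):
    # Stand-in rule for an inventory lookup.
    ctx["in_stock"] = ctx["order"]["total"] < 100
    return ctx

def notify_customer(ctx):
    ctx["message"] = f"Order {ctx['order']['id']} confirmed"
    return ctx

def run_flow(steps, ctx):
    """Run each step in order, threading a shared context through."""
    for step in steps:
        ctx = step(ctx)
    return ctx

order_flow = [fetch_order, check_inventory, notify_customer]
result = run_flow(order_flow, {"order_id": "A-100"})
```

Reordering or swapping a step means editing the `order_flow` list, not rewriting integration code, which is the productivity gain low-code tooling is after.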
>> We've heard a little bit about that today as well, Flow, in terms of the automation, but also how that will enable customers. There was a cute little video on their website that I saw recently which showed an example of Flow. Somebody bangs their car into a tree, gets out, and takes a photograph of the incident, uploads it to their insurance carrier's app, which then actually initiates the entire claims process, and that was to me a clear example of: you have to go where the data is. Michael Dell says frequently there's a big boom at the edge, but if I'm in that scenario as a customer, I want to know, I don't care what's on the back end, I want to be able to get this initiated quickly, and I thought that was a nice kind of example of how they're able to abstract that so that the customer experience can be superior to the competition's. >> Absolutely, so that's where Boomi has something called the run-time engine, which is scalable. You could install it on, like, you know, a smaller device like a Raspberry Pi, which is, like, you know, just a mini computer, or you could install it on the big switchboard itself, so it's scalable. So earlier, as Michael Dell was mentioning, the edge of computing: you could install it on a gateway, which sits on the-- >> On a tree. >> On a tree. (laughs) So you don't have to send all the data to the cloud for processing, so it's an amazing leap into the next distributed computing, because, as you mentioned, the fastness of response time, you know. We don't have to wait for the cloud to respond, so all the computations and real-time interactions are happening within the edge network itself. We have implemented the same solution ourselves, which was one of the reasons why we're the winner of the Innovation Partner of the Year award. >> Well, congratulations again for that, gentlemen. Thank you so much for stopping by. >> Thank you.
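The gateway pattern described here, keeping raw data local and forwarding only what the cloud actually needs, can be sketched as a small aggregation step. The field names and threshold are invented for illustration; a real edge run-time would wrap this kind of logic in a managed engine.

```python
def summarize_readings(readings, threshold=80.0):
    """Run at the gateway: keep raw readings local, forward a summary.

    Only readings at or above `threshold` are forwarded in full as
    alerts; everything else is reduced to a count and an average.
    """
    alerts = [r for r in readings if r["temp_c"] >= threshold]
    return {
        "count": len(readings),
        "avg_temp_c": round(sum(r["temp_c"] for r in readings) / len(readings), 2),
        "alerts": alerts,
    }

# Four raw sensor readings stay on the device; one compact payload goes up.
raw = [{"sensor": "s1", "temp_c": t} for t in (71.0, 72.5, 95.0, 70.5)]
payload = summarize_readings(raw)
```

The response-time gain comes from the same property: the alert decision is made locally, without a round trip to the cloud.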
>> And sharing with our viewers a little bit about Hathority and how you're really symbiotically innovating with Dell Boomi. Philip, Vishwam, thanks so much for your time today. >> Thank you for having us. >> Thank you, thank you for having us. >> My pleasure. We want to thank you for watching theCUBE. I'm Lisa Martin, live from Boomi World 2018 in Las Vegas. Stick around, I'll be back with John Furrier and our next guest after a short break. (upbeat music)
Pragnya Paramita, Dell Boomi | Dell Boomi World 2018
>> Live from Las Vegas. It's theCUBE, covering Boomi World 2018. Brought to you by Dell Boomi. >> Welcome back to theCUBE. We are continuing our coverage of Boomi World 2018. I'm Lisa Martin in Las Vegas with John Furrier, and we're welcoming to theCUBE Pragnya Paramita, Senior Product Marketing Manager at Dell Boomi. Pragnya, welcome. >> Hi, nice to meet you guys. >> So, the second annual Dell Boomi World. We had Mandy Dhaliwal, your CMO, on a short while ago, who said attendance doubled from last year. Some of the really cool stats that caught my ears and eyes this morning during the general session: there are 7500-plus customers globally that Dell Boomi has now. You're adding five new customers every day. There are close to 70 different customers speaking at this event. The customers are coming together to share how Dell Boomi is helping them on this nebulous, daunting transformation journey. Talk to us about some of the news coming out in the last couple of days, and as a product marketing manager, what are some of the things that excite you? >> I think, after the last few weeks, what we've been able to put out in the market with our partnership with the Blockchain consortium has been really exciting. To be working for a company that's always been at the cutting edge and looking to do things at the cutting edge, just as an employee, that's, like, a really cool thing to be a part of. But what I'm really excited about is tomorrow's keynote. And I know we've probably been teasing everybody through the day about tomorrow's keynote, but I'm really excited to unveil what we are going to be showing you guys tomorrow. >> So one of the things that's exciting about you guys is that the product-market fit is clear, with customer traction. As you guys look at, say, Blockchain smart contracts, this is about business, so your messaging around connecting businesses, with developer integration as a starting point, with low code, is a productivity question, it's a foundational question.
As you have this platform, what are some of the product positionings that you guys are looking to expand on? Obviously we heard Michael Dell today say data tsunami, scaling AI. These are questions that people want to have answered. Is that how you guys see the positioning when you go to market? >> So, on positioning, I think the true value that we provide our customers is fast time to market, so I think speed and the ability to do things efficiently and be first to market is what our customers really value, and we want to be able to power that, so that's core to our positioning in the market. The other one is flexibility. I think with all the vendor consolidation happening around the market, people are marking their turfs and territory, and in this day and age, at Boomi, we really want to be an open ecosystem. You bring your data, you bring your application, you bring your cloud. You could have a hybrid environment as you operate your business; Boomi will connect to everything, and I think that is a cool part of our messaging that we want to make sure customers understand. We want to make sure the market understands that we'll be true to that. >> As you've got the cool technology, with the Cloud-Native, you guys were born in the cloud, still operating at cloud scale. As you sit in the product marketing meetings and think about the customers, you're solving a lot of problems; there are a lot of check boxes on solving customer problems, but you also want to position for the future. So I've got to ask you, when you look at your customer base holistically, what's the core problem that you guys solve for your customers? >> I think unlocking the value of the data, customer data. So it resides in siloed applications, it resides in parts of the business that, some...
So if you're not the American business, your ability to interact with your Australian counterparts is not only restricted by time zones, but it's also restricted by laws and data protection and all of those things which governments are waking up to. And to be able to do that securely, to be able to do that at scale, is something that we want to be able to deliver to our customers. And I think our ability to be a Cloud-Native platform allows us the flexibility to do it in a way that customers feel comfortable with and, again, are able to get some value back from their data. >> So about six months ago, the Gartner Magic Quadrant for iPaaS came out, and once again, I think, John, we've heard today, for the fifth year in a row Dell Boomi is a strong leader. I'm curious, six months later, now, today, you guys said we are re-imagining the I in iPaaS. In a market that's well established, highly competitive, now, for customers, it's not just about integrating applications; it's integrating data from new sources, from existing sources, to be able to identify new revenue streams, new products, new services. What is it about this re-imagining of the I to be intelligence that, in your opinion, is going to further really kind of elevate Dell Boomi's competitive differentiation? >> So, the true differentiation is that in the market, we were the first who were a Native-Cloud application. So the value of that single-instance, multi-tenant cloud application is what we are really leveraging as part of the intelligence in the platform. Many of our competitors and other vendors in the market have probably caught on to this whole cloud thing in the last couple of years. But at the end of the day, we have 10 years of a lead on them that would be hard for them to match.
And again, it is the value from what customers have been doing on our platform; our ability to look at that enormous amount of data anonymously and then provide value back to them has been really critical to our success, in how our customers have found value. And I guess with the ability for us to leverage AI and machine learning capabilities within the platform, we want to be able to make it much easier for our customers. >> So in terms of business initiatives, some of the key ones that Dell Boomi targets are e-commerce, order to cash, Customer 360, as well as onboarding. Talk to us, I really liked that Chris McNabb, in the general session this morning, kind of opened the kimono and said, "Hey, we found, through the voice of our own employees, we weren't so great in this particular area." Talk to us about the Dell Boomi employee onboarding solution and how it was really born based on your own internal needs for improvement.
>> And as a CEO, he decided this is a priority, but then as we went through this exercise, what we were able to find out that it's not only a challenge that we are facing, but our customers, both large and small, continue facing that issue. So the approach that we took was while we were solving our own employee onboarding challenge, we were able to productize that entire solution and create an accelerator. And the value of that accelerator, it's a common problem, we know it is a problem that happens at scale, and at a certain scale it becomes really detrimental to your business. But then your business is really unique so we cannot give you a one-size-fit-all solution that you can go and turn on on day one and it'll work. What we are giving you here is a framework, we leveraged it, we had great results, we are more than happy to share that back, that something that took like 92 days for an employee to get access to 27 applications now takes minutes, like literally five minutes. What took about 19 admins across the organizations who were doing this as a second job almost, because we're a small company, the guy who bought the license for this new software that he wanted his team to use, became the admin for that product, and now his team is, from seven people, it's now 52 people. But he's still the admin of that product, along with managing that solution, so all of that effort was consolidated from 19 people to like two people, that's real gain there in just employee productivity that we have been able to standardize. And what we are doing now is taking the solution and the accelerator package to our customers and we are having some great conversation with many of our customers who had initially looked at Boomi and said like, hey, you guys provide us an integration solution to our problem. But at the end of the day, onboarding, as within an organization, is a cross-functional issue. 
It ties together workflows from your finance team, from your benefits team, from your recruiting team who is getting the candidate to your HR, who is going to make sure-- >> Facilities where you sit, all kinds of data. >> All kinds of things, and making sure you have your laptop and your badges and all of those things on day one. So a lot of people in the organizations are like these silent heroes who are making sure that every employee who shows up on day one has a good experience but there's only so far that a manual process can go, and being able to automate that process, and a good reason why we are now able to do this is because of Boomi Flow. The ManyWho acquisition that we did last year, it has opened doors for us to have conversations with our customers where we are like, you have cross-functional processes, you need to be able to automate them as much as possible and let your employees actually do more value added work instead of being, you know, sending emails and then collating emails with data from every place, putting it in a spreadsheet, adding that to your SAP, or your workday system and-- >> So that sounds like that's the consequence of two problems, I hear this right, one, data silos and manual or purpose-built applications that are dependent upon data silos. No data silos allows for automation, and then everything kind of goes away and solves the problem. Is that right? >> Yeah, absolutely. So cross-functional workflows are something that when people try to solve, they end up causing the integration problem at the end of the day. So you try to solve for one thing but then integration is always at the core of it. 
With Boomi, because we are coming integration up, we sort of automatically solve for that, but then with Boomi Flow, what we are able to do is we are able to abstract that away from users who don't really care about how you're going to get two applications to work together, so if you are in the HR team, you just want to make sure that here is the value proposition for the organization that I hired these employees for, they get to see that. I don't really care if your 15 applications need to work together at the backend. (cross talking) >> American Airlines example's a good one, they've hundreds of integrations, some will ship it and forget it. They won't have to remember it, hey, number 52, what was that again? Solved the problem but broke this over there. That's kind of the problem that is the core issue, right? >> It's a core issue. So we have a session later today with American Airlines, and MOD Pizza. So, both of them are a study in contrast. MOD Pizza is an organization that was founded a couple of years ago, around the same time that American Airlines and US Airways merges was happening. So the session is very interesting because you get a perspective from a company that started in 2011 or 2013, and took an approach of being a Cloud-Native infrastructure. So they make choices where all of their applications are in the Cloud but then when they grew at a certain scale, employee onboarding became an issue, they came to Boomi and how they are solving it, and on the flip side of it, you have a perspective from a large organization that around the same time relogged that their employee onboarding issues and then looked at Boomi and then said that, hey, how can we solve this? And as they said in the Keynote, good is not good enough, you need to have a great experience. 
>> Well you've also raised your NPS score 168 points, and now you've got an opportunity to reach customers in a different way, like you said to be able to integrate these functions and have to work together, that abstraction layer is critical for the business being more efficient and more productive. Finding new revenue streams faster, being more competitive, and really unlocking the value of that data so it can be used across multiple business units within organizations at the same time. Pragnya, thanks so much for stopping by and joining John and me on theCUBE today. >> Yeah, it was great talking to you guys. >> We appreciate it and have a great time at-- >> Hope you have a great Boomi World. >> Absolutely, off to a great start. Thanks so much for your time. For John Furrier, I'm Lisa Martin, you're watching theCUBE, Live from Boomi World 18 in Vegas, stick around, John and I will be back with our next guest. (light music)
Wikibon 2019 Predictions
>> Hi, I'm Peter Burris, Chief Research Officer for Wikibon, and welcome to another special digital community event. Today we are going to be presenting Wikibon's 2019 trends. Now, I'm here in our Palo Alto studios in kind of a low-tech mode, precisely because all of our crews are out at all the big shows bringing you the best of what's going on in the industry and broadcasting it over theCUBE. But that is okay, because I've asked each of our Wikibon analysts to use a similar approach to present their insights into what will be the most impactful trends for 2019. Now, the way we are going to do this is, first, we are going to use this video as the base for getting our insights out, and then at the end we are going to utilize a CrowdChat to give you an opportunity to present your insights back to the community. So, at the end of this video, please stay with us, and share your insights, share your thoughts, your experience, ask your questions about what you think will be the most impactful trends of 2019 and beyond. >> A number of years ago, Wikibon predicted that cloud, while dominating computing, would not feature all data moving to the cloud but rather the cloud experience and cloud services moving to the data. We call that true private cloud computing, and nothing has occurred in the last couple of years that suggests we were in any way wrong about this prediction. In fact, if we take a look at what's going on with the Edge, our expectation is that increasingly Edge computing and on-premises technology needs will further accelerate the rate at which cloud experiences end up on premises and at the Edge, and that this will be the dominant model for how we think about computing over the course of the next few years. That leads to greater distribution of data. That leads to greater distribution of the places where data will actually be used.
All under the aegis of cloud computing but not utilizing the centralized public cloud model that so many predicted. >> A prediction we'd like to talk about is how multi-cloud and orchestration of those environments fit together. At Wikibon, we've been looking for many years at how digital businesses are going to leverage cloud, and cloud is not a singular entity, and therefore the outcomes that you are looking for often require that you use more than one cloud, especially if you are looking at public clouds. We've been seeing the ascendance of Kubernetes as a fundamental foundational piece of enabling this multi-cloud environment. Kubernetes is not the sole thing, and of course, you don't want to overemphasize any specific tool, but you are seeing, driven by the CNCF and a broad ecosystem, that Kubernetes is getting into all the platforms, both public and private cloud, and we predict that by 2020, 90% of multi-cloud enterprise applications will use Kubernetes for the enablement of their multi-cloud strategies. >> One of the biggest challenges that the industry is going to face over the next few years is how to deal with multi-cloud. We predict, ultimately, that a sizable percentage of the marketplace, as much as 90%, will be taking a multi-cloud approach first to how they conceive, build, and operate their high, strategic value applications that are engaging customers, engaging partners, and driving their businesses forward. However, that creates a pressing need for three new classes of technology: technology that provides multi-cloud inter-networking; technology that provides orchestration services across clouds; and finally technologies that ensure data protection across multi-cloud. While each of these domains by themselves is relatively small today, we think that over the next decade they will each grow into markets that are tens of billions if not hundreds of billions of dollars in size.
>> The prediction I'd like to talk about is Robotic Process Automation, RPA. So we've observed that there's a widening gap between how many jobs are available worldwide and the number of qualified candidates to fill those jobs. RPA, we believe, is going to become a fundamental approach to closing that gap, and really operationalizing artificial intelligence. Executives that we talk to in The Cube, they realize they just can't keep throwing bodies at the problem, so these so-called "software robots" are going to become increasingly easy to use. And we think that low code or no code approaches to automation and automating workflows are going to drive the RPA market from its current position, which is around a billion dollars, to more than ten X, or ten billion dollars plus, by 2023. >> I predict that in 2019 what we are going to see is more containerization of AI machine learning for deployment to the Edge, throughout the multi-cloud. It's a trend that's been going on for some time. In particular, what we are going to be seeing is an increasing focus on technologies, or projects and code bases such as Kubeflow, which has been established in this year just gone by, to support that approach for containerization of AI out to the edges. In 2019, we are going to see the big guys, like Google, and AWS, and Microsoft, and others in the whole AI space begin to march around the need for a common framework such as Kubeflow, because really that is where many of their customers are going. The data scientists and app developers who are building these applications, they want to manage these over Kubernetes using these CNCF stacks of tooling and projects to enable a degree of supportability and maintainability and scalability around containerized intelligent applications. >> My prediction is around the move from linear programming and data models to matrix computing. This is a move that's happening very quickly, indeed, as new types of workload come on.
And these workloads include AI, VR, AR, video gaming, very much at the edge of things. And ARM is the key provider of these types of computing chips and computing models that are enabling this type of programming to happen. So my prediction is that this type of programming is gonna start very quickly in 2019. It's going to roll very rapidly, about two years from now, in 2021, into the enterprise market space, but the preparation for this type of computing and the movement of work right to the edge, very, very close to the sensors, very, very close to where the users are themselves, is going to accelerate over the next decade. >> The prediction I'd like to make in 2019 is that the CNCF, as the steward of the growing cloud native stack, they'll expand the range of projects to include the frontier topics, really the frontier paradigms, in microservices and cloud computing; I'm talking about serverless. My prediction is that Virtual Kubelet will become an incubating project at CNCF to address the need to provide serverless, event-driven interfaces to containerized, orchestrated microservices. I'd also like to predict that VM and container coexistence will proceed apace in terms of projects such as, especially, KubeVirt, which I think will also become a CNCF project. And I think it will be adopted fairly widely. And one last prediction, in that vein, is that the recent working group that CNCF has established with Eclipse, around IoT, the internet of things, I think that will come to fruition. There is an Eclipse project called Ditto that uses IoT, and AI, and digital twins in a very interesting way for industrial and other applications. I think that will come under the auspices of CNCF in the coming year. >> Security remains vexing to the cloud industry, and the IT industry overall.
Historically, it's been about restricting access, largely at the perimeter, and once you're through the perimeter a user would have access to an entire organization's resources, digital resources, whether they be files, or applications, or identities. We think that has to change, largely as a consequence of businesses now being restructured, reorganized, and re-institutionalizing work around data. What's gonna have to happen is that a notion of zero trust security is going to be put in place that is fundamentally tied to the notion of sharing data. So, instead of restricting access at the perimeter, you have to restrict access at the level of data. That is going to have an enormous set of implications overall for how the computing industry works. But two key technologies are essential to making zero trust security work. One is software-defined infrastructure, so that you can make changes to the configuration of your security policies and instances via other software, and the second, very importantly, is high quality analytics that are bringing the network and security functions more closely together and, through the shared data, are increasing the use of AI, the use of machine learning, etc., ensuring higher quality security models across multiple clouds. It's always great to hear from the Wikibon analysts about what is happening in the industry and what is likely to happen in the industry. But now, let's hear from you, so let's jump into the crowd chat as an opportunity for you to present your ideas, your insights, ask your questions, share your experience. What will be the most important trends and issues in 2019 and beyond, as far as you are concerned? Thank you very much for listening. Now let's crowd chat.
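The zero trust idea described above, restricting access per item of data rather than at the perimeter, can be sketched in a few lines. This is a minimal illustration only; the roles, resources, and policy table below are hypothetical and not drawn from any particular product.

```python
# Minimal sketch contrasting perimeter security with zero trust security.
# All role, resource, and network names here are hypothetical.

TRUSTED_NETWORKS = {"corp-vpn", "hq-lan"}

def perimeter_check(user):
    # Old model: one gate. Anyone on a trusted network sees everything.
    return user["network"] in TRUSTED_NETWORKS

# Zero trust model: default-deny policy evaluated per (role, action, resource).
POLICY = {
    ("analyst", "read", "trade-data"): True,
    ("admin", "read", "trade-data"): True,
    ("admin", "write", "trade-data"): True,
}

def zero_trust_check(user, action, resource):
    # Network location no longer grants access; only explicit policy does.
    return POLICY.get((user["role"], action, resource), False)

alice = {"name": "alice", "role": "analyst", "network": "corp-vpn"}
print(perimeter_check(alice))                          # True: inside, sees everything
print(zero_trust_check(alice, "read", "trade-data"))   # True: explicitly allowed
print(zero_trust_check(alice, "write", "trade-data"))  # False: denied by default
```

The design point is the default-deny lookup: in the perimeter model a single check unlocks every resource, while in the zero trust model each access names a specific piece of data and anything not explicitly allowed is refused.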
SUMMARY :
each of our Wikibon analysts to use and cloud services moving to the data. and that we predict that by 2020, 90% that the industry is going to face over the and the number of qualified candidates to fill those jobs. but that the preparation for this type of computing is that the recent working group So, instead of restriction access at the perimeter,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
AWS | ORGANIZATION | 0.99+ |
2021 | DATE | 0.99+ |
ten billion dollars | QUANTITY | 0.99+ |
90% | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
more than ten X | QUANTITY | 0.99+ |
2023 | DATE | 0.99+ |
each | QUANTITY | 0.99+ |
around a billion dollars | QUANTITY | 0.99+ |
Wikibon | ORGANIZATION | 0.98+ |
tens of billions | QUANTITY | 0.98+ |
more than one cloud | QUANTITY | 0.98+ |
two key technologies | QUANTITY | 0.98+ |
Today | DATE | 0.98+ |
Eclipse | TITLE | 0.97+ |
One | QUANTITY | 0.97+ |
one | QUANTITY | 0.96+ |
Wikibon Cube | ORGANIZATION | 0.96+ |
first | QUANTITY | 0.96+ |
ARM | ORGANIZATION | 0.95+ |
next decade | DATE | 0.95+ |
today | DATE | 0.95+ |
hundreds of billions of dollars | QUANTITY | 0.94+ |
both | QUANTITY | 0.94+ |
last couple of years | DATE | 0.89+ |
Palo Alto Studios | LOCATION | 0.88+ |
Kubernetes | TITLE | 0.86+ |
this year | DATE | 0.86+ |
zero | QUANTITY | 0.84+ |
next few years | DATE | 0.84+ |
Edge | ORGANIZATION | 0.83+ |
number of years ago | DATE | 0.82+ |
Cube Flow | TITLE | 0.79+ |
Wikibon | TITLE | 0.74+ |
Process Automation | ORGANIZATION | 0.74+ |
three | QUANTITY | 0.72+ |
Serverlus | TITLE | 0.71+ |
CNC | ORGANIZATION | 0.68+ |
Premise | ORGANIZATION | 0.67+ |
few years | DATE | 0.64+ |
Kubevirt | ORGANIZATION | 0.64+ |
Cube | TITLE | 0.63+ |
about two years | QUANTITY | 0.61+ |
The Cube | ORGANIZATION | 0.58+ |
Chief Research Officer | PERSON | 0.57+ |
Cube | ORGANIZATION | 0.57+ |
Kubernetes | ORGANIZATION | 0.56+ |
classes | QUANTITY | 0.54+ |
Ditto | TITLE | 0.53+ |
Edge | TITLE | 0.51+ |
Edge | COMMERCIAL_ITEM | 0.33+ |
Siddhartha Dadana, FINRA & Gary Mikula, FINRA | Splunk .conf18
>> Live from Orlando, Florida, it's theCUBE, covering .conf18. Brought to you by Splunk. >> We're back in Orlando, everybody, at Splunk .conf18, #splunkconf18. I'm Dave Vellante with my co-host Stu Miniman. You're watching theCUBE, the leader in live tech coverage. We like to go out to the events. We want to extract the signal from the noise. We've been documenting the ascendancy of Splunk for the last seven years, how Splunk really starts in IT operations and security, and now we hear today Splunk has aspirations to go into the line of business, but speaking of security, Gary Mikula is here. He's a senior director of cyber and information security at FINRA, and he's joined by Siddhartha "Sid" Dadana, who's the director of information security engineering at FINRA. Gentlemen, welcome back to theCUBE, Gary, and Sid, first-timer, welcome on theCUBE. So, I want to start with FINRA. Why don't you explain, I mean, I think many people know what FINRA is, but explain what you guys do and, sort of, the importance of your mission. >> Sure, our main aspiration is to protect investors, and we do that in two ways. We actually monitor the brokers and dealers that do trades for people, but more importantly, and what precipitated our move to the Cloud, was the enormous amount of data that we have to pull in daily. Every transaction on almost every US stock market has to be surveilled to ensure that people are acting properly, and we do that at the petabyte scale, and doing that with your own hardware became untenable, and so the ability to have elastic processing in the Cloud became very attractive. >> How much data are we talking about here? Is there any way you can, sort of, quantify that for us, or give us a mental picture?
>> Yeah, so the example I use is, if you took every transaction that Visa has on a normal day, every Facebook like, every Facebook update, and if you took every Twitter tweet, you added them all together, you multiplied it by 20, you would still not reach our peak on our peak day. >> (laughs) Hence, Splunk. And we'll talk about that but, Sid, what's your role, you got to architect all this stuff, the data pipeline, what do you... >> So, my role is basically to work with the dev teams, application teams, to basically integrate security in the processes, how they roll out applications, how they look at data, how they use the same data that security uses, for them to be able to leverage it for their ops and performance needs. >> So, your mission is to make sure security's not an afterthought, it's not a bolt-on, it's a fundamental part of the development process, so it's not thrown over the fence, "Hey, secure this application." It's built in, is that right? >> Yes. >> Okay. Gary, I wonder if you could talk about how security has changed over the last several years. You hear a lot that, well, all the spending historically has been on keeping the bad guys out at the perimeter. As the perimeter disappears, things change, and the emphasis changes. Certainly, data is a bigger factor, analytics have come into play. From your perspective, what is the big change or the big changes in security? >> So, it's an interesting question. So I've been through several paradigm changes, and I don't think any one has been as big as the move to the Cloud, and... The Cloud offers so much opportunity from a cost perspective, from a processing perspective, but it also brings with it certain security concerns. And we're able to use tools like Splunk to be able to do surveillance on our AWS environments in order to give us the confidence to be able to use those services up there.
And so, we now are actually looking at how we're going to secure individual AWS services before we use them. Rather than looking to bring stovepipe solutions in, we're looking to leverage our AWS relationship to be able to leverage what they've built out of the box. >> Yeah, people oftentimes, Stu, talk about Cloud security like it's some binary thing. "Oh, I don't want to go to the Cloud, because Cloud is dangerous" or "Cloud security is better". It's not that simple, is it? I mean, maybe the infrastructure. In fact, Stu and I were in D.C. in December, and we heard the CIO of the CIA say, "The Cloud, on its worst day, is better than my client-server systems from a security perspective." But he's really talking about the infrastructure. There's so much more to security, right? >> Absolutely, and so I agree that the Cloud gives you the opportunity to be better than you are on prem. I think the way FINRA's rolled out, we've shown that we are more secure in the Cloud than we have been in traditional data centers, and it's because of our ability to actually monitor our whole AWS environment. Everything is API-based. We know exactly what everybody's doing. There's no shadow IT anymore, and those are all big positives. >> Yeah, I'm wondering what KPIs you look at when you look at your Splunk environment. What we hear from Splunk, you know, it's scalability, cost, performance, and then that management, the monitoring of the environment. How are they doing? How does that make your job easier? >> So, I think we still look at the same KPIs that Splunk advertises all the time, but from our perspective, we kind of look at it in terms of, how much value can we give, not just to one part of the company, but how can we make it a much more valuable part for everyone in the organization. So, the more we do that, I think that makes it a much better ROI for any organization to use a product like this one.
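Gary's point that "everything is API-based" is what makes this level of monitoring possible: every AWS action lands in CloudTrail, where it can be audited. The sketch below illustrates the idea, not FINRA's actual tooling; the usernames and event names are made up, and the CloudTrail call itself is kept behind a separate function so the summarizing logic runs without AWS credentials.

```python
from datetime import datetime, timedelta

def summarize_events(events):
    """Count CloudTrail events per (user, event name) pair."""
    counts = {}
    for event in events:
        key = (event.get("Username", "unknown"), event["EventName"])
        counts[key] = counts.get(key, 0) + 1
    return counts

def fetch_recent_events(hours=1):
    # Deferred import so the sketch stays importable without boto3 installed.
    import boto3
    client = boto3.client("cloudtrail")
    response = client.lookup_events(
        StartTime=datetime.utcnow() - timedelta(hours=hours),
        EndTime=datetime.utcnow(),
    )
    return response["Events"]

# Sample records shaped like CloudTrail lookup_events output (hypothetical).
sample = [
    {"Username": "sid", "EventName": "RunInstances"},
    {"Username": "sid", "EventName": "RunInstances"},
    {"Username": "gary", "EventName": "CreateVpc"},
]
print(summarize_events(sample))
# {('sid', 'RunInstances'): 2, ('gary', 'CreateVpc'): 1}
```

In a real deployment the output of `fetch_recent_events` would feed `summarize_events` (or be forwarded into a SIEM such as Splunk) to answer exactly the "who did what" question raised in the interview.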
>> You guys talk about the "shift left" movement. What is "shift left" and what is the relevance to security? >> Yeah so, "shift left" is a concept where, instead of looking at security as a bolt-on, or an add-on, or a separate entity, we're looking to leverage what are traditional DevOps tools, what are traditional SDLC pipeline roles, and we're looking at how we integrate security into that, and we use Splunk to be able to integrate collection of data into our CI/CD pipelines, and it's all hands-off. So, somebody hits a button to deploy a new VPC in AWS, automatically things are monitored and flow into our enterprise search, I'm sorry, enterprise security SIM, and are automatically being monitored. There's no hands-on that needs to be done. >> So, on a scale of one to five, thinking of a maturity model in a DevOps context, five being, you know, the gold standard and one being you're just getting started, where would you put FINRA on that spectrum, I mean, just subjectively? >> So, I'll never say that we're a five because I think there's always, >> You're never done. >> You're never done and there's always room for improvement, but I think we're at least a strong four. We've embraced those concepts, and we've put them into action. >> And so, I thought so, and I want to ask you from a skill standpoint how you got there. So, you've been around a long time. You had a Dev team and an Ops team before the term DevOps even came around, right? And we talk about this a lot, Stu. What did you do with the Ops guys and the Dev guys? Is it OpsDev or DevOps? Did you retrain them? Did you fire them all and hire new people? How did you go through that transition? >> Yep, that's a fair thing.
I went to my CISO John Brady a couple of years ago and I told him that we were going to need to get these new skill sets in, and that I thought I had the right person in Sid to be able to head that up, and we brought in some new talent, but we also retrained the existing talent because these were really bright people, and they still had the security skills. And what Sid's been able to do is to embrace that and create a working relationship with the traditional DevOps teams so that we can integrate into their tools. >> So, it does include a little bit of work even on our end, where you kind of learn how the DevOps folks work, so you've got to do it on your own to first figure out things and then you can actually relate to the problems which they will go through, and then you work through problems with them, rather than you designing a solution and then just saying, "Hey, go and implement it." So, I think that kind of relationship has helped us, and in the long run, we hope to do a bit better work. >> Yes, Sid, can you bring us in a little bit, when you look at your Splunk deployment, FINRA's got a lot of applications, how do you get all those various applications in there? You know, Splunk talks about, you can get access to your data your way, do you find that to be the reality? >> Yes, to a certain extent, so... Let's take a step back here. So our design is much more hybrid-oriented. So, we use Splunk Cloud, but that's primarily for our indexers, whereas we host our own search head cluster. All the data basically goes in from servers, from AWS components, from on-prem, basically it flows into our Splunk Cloud indexers, and we use role-based access management to actually give everyone access to whatever data they need to be looking at. >> Alright. The number of enhancements in 7.2, updates, the Cloud, Gary, is there anything that's jumped out that's going to architecturally help your team?
>> So, I think one of the interesting things is the new data pipeline, and to be able to actually mangle that data before I get it into my Splunk indexers is going to be really, really life-changing for us. One of the hard parts is that developers write code and they don't necessarily create logs that are event-driven. They don't have date-time stamps, they do dumps. So, I'm going to be able to actually massage that before it hits the indexers, and it's going to speed up our ability to be able to provide quick searches because the indexers won't be working on mangling that data. >> And how big of a deal is it for you? They announced yesterday the ability to scale storage and compute separately in a more granular fashion, is that a big deal for you? >> So, I actually, I remember speaking to Doug Merritt probably three years ago. >> You started this! (laughing) >> And I said, "Doug", I said, "I really think that's the direction that you need to go. You're going to have to separate those two, eventually." Because we're doing petabyte scale, we realized very early that that'd need to be done. And so, it's really, really refreshing to see, because it's going to be transformative to be able to do compute-on-demand after that. Because now we can start looking at API brokers, and we can start looking at containers, and all those other things can be integrated into Splunk. >> Love having customers on like you guys, so knowledgeable. I have to ask, switch gears a little bit, I want to ask you about your security regime. We had a customer on yesterday, and it was the CISO who reported to him. He was the EVP, and he reported to the CIO. A lot of organizations say, "You know what? We want the CISO to be separate from the CIO. 'Cause it's like the, you know, the fox in the henhouse kind of thing. And we want a little bit of tension in there." How do you guys approach it? What's the regime you have for...
>> That is a fair question, and I've heard that from many other CISOs that have that same sort of complaint. And I think it's really organization-based. Do you have the checks and balances in place? First of all, our CIO, Steve Randich, he cares a lot about security, and he is very good at getting funding for us for initiatives to help secure the environment. But more importantly, our board of directors bring up security at every board event. They care about it, they know about it, and that permeates through the organization. So there are checks and balances to make sure that we have the right security in place. And it's a working relationship, not adversarial at all, so having our CISO John Brady report to Steve Randich, the CIO, has not been a hindrance. >> And I think that's a change in the last several years, because that regime that I described, which was, there was sort of a wave there where that became common, and I think you just hit on it. When security became a board-level issue, and for every Fortune 1000, Global 2000 company, it's a board-level issue. They talk about it every board meeting. When that occurred, I think there was an epiphany of, "We need the CIO to actually be on this." And you want the CIO to be responsible for that. And the change was, it used to be, "Hey, if I fail, I get fired." And I think boards now realize that "failure" in security doesn't mean you got breached. >> Sure. >> You know. Breaches are going to happen. It's how you respond to them and, you know, how you react to them that is becoming more important. So there's much more transparency around security in our view. I wonder if you agree with that. >> I think there's transparency. And the other thing is that you have to put the decision-making where it makes the most sense.
Most of the security breaches that we're talking about are highly technical in nature, where a CIO is better able to evaluate some of those decisions; not all companies have a CEO that came from a technology background and is able to make those decisions. So, I think it makes more sense to have the CISO report to somebody in the technology world. >> Great, thank you for that. Now, the other question I have for you is, in terms of FINRA's experience with Splunk, did it start with SecOps and security, or was it, sort of, IT operations, or...? >> It did, it started with security. We were disenfranchised with traditional SIMs that were out there, and we decided to go with Splunk, and we made the decision that security was going to own it, but we wanted it to be a corporate asset from day one. And we worked our tails off to integrate it, through brown bags, through training. So we permeated it through the organization. And, on any given week, about 35-40% of all of technology is using Splunk at FINRA. >> So, I'm curious as to, we heard some announcements today, I don't know if you saw them, about, you know, Splunk Next, building on that, Splunk for the line of business, the business flow, they did a nice demo there. Do you see, because security sort of was the starting point, and your mission was always to permeate the organization, do you see that continuing to other parts of the organization more aggressively now, given this sort of democratization of data for the business lines, and... Will you guys be a part of that, directly? >> We hope so. We hope we are part of that change, too. I mean, the more we can use the same data for even business users, that will help them, that would relieve a lot of, and they made this point again and again in the keynote, too, that the IT Ops and SecOps teams are already burdened enough. So, how do we make life easy for business users who actually leverage the same data?
So we hope to be able to put these tools up and see if they can make any difference to business users. >> So, you guys have put a lot of emphasis on integrating with Splunk and AWS Cloud. You have a presentation later on today at .conf18 around the AWS Firehose integration that you have with Splunk. What's that all about? What's the AWS Firehose? How are you integrating it? Why is it important? >> So, it is streaming, and it allows me to get information from AWS that's typically in something called CloudWatch Logs, that is really difficult to be able to talk to. And I want to get it into Splunk so I can get more value from it. And what I'm able to do is put something called a subscription filter on it, and flow that data directly into Splunk. So, Splunk worked with AWS to create this integration between the two tools, and we think we've taken it to a high level. We use it for Lambda, to grab those logs, we use it for VPC Flow Logs, we're using it for SaaS providers that provide APIs into their data, we use it for that, and finally, we're going to be doing database activity monitoring, all leveraging this same technology. >> Love it, I mean, you guys are on the forefront of Cloud and Splunk integration, Cloud adoption, DevOps, you guys have always been great about sharing your knowledge, you know, with others, and we really appreciate you guys coming on theCUBE. Thank you. >> Thanks for having us. >> You're welcome. Alright, keep it right there, everybody. Stu and I will be back. You're watching theCUBE from .conf18, Splunk's big user conference. We'll be right back. (electronic music)
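The CloudWatch Logs subscription filter Gary describes can be sketched as below. The log group, Firehose ARN, and role ARN are hypothetical placeholders, and the boto3 call is kept out of the parameter-building function so the sketch can be exercised without AWS access.

```python
def subscription_filter_params(log_group, firehose_arn, role_arn):
    """Build arguments for CloudWatchLogs.put_subscription_filter, which
    streams a log group's events to a Kinesis Data Firehose delivery stream
    (which can in turn deliver to Splunk over HTTP Event Collector)."""
    return {
        "logGroupName": log_group,
        "filterName": "to-splunk-firehose",
        "filterPattern": "",  # empty pattern forwards every log event
        "destinationArn": firehose_arn,
        "roleArn": role_arn,  # role CloudWatch Logs assumes to write to Firehose
    }

def create_filter(params):
    import boto3  # deferred so the sketch imports cleanly without boto3
    boto3.client("logs").put_subscription_filter(**params)

params = subscription_filter_params(
    "/aws/lambda/example-function",                                   # hypothetical
    "arn:aws:firehose:us-east-1:111111111111:deliverystream/splunk",  # hypothetical
    "arn:aws:iam::111111111111:role/cwl-to-firehose",                 # hypothetical
)
print(sorted(params))  # the argument names put_subscription_filter expects
```

This mirrors the flow described in the interview: Lambda logs, VPC Flow Logs, or any other CloudWatch Logs group gets a subscription filter, Firehose handles the streaming delivery, and Splunk indexes the result.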
SUMMARY :
Brought to you by Splunk. We like to go out to the events. the ability to have elastic and if you took every Twitter tweet, the data pipeline, what do you... to be able to leverage it to make sure security's and the emphasis changes. to be able to leverage what I mean, maybe the infrastructure. to be better than you are on PRAM. What we hear from Splunk, you know, So, the more we do that, is the relevance to security? There's no hands-on that needs to be done. So, on a scale of one to five, and we've put them into action. and I want to ask you to be able to head that and in the long run, we hope need to be looking at. that's going to So, I'm going to be able speaking to Doug Merritt that's the direction that you need to go. What's the regime you have for... And I think, do you have the "We need the CIO to actually be on this." to them and, you know, in order to be able to Now, the other question I have for you is, decided to go with Splunk, Splunk for the line of business, I mean, the more we can use the same data that you have with Splunk. between the two tools, and we think guys are on the forefront Stu and I will be back.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
FINRA | ORGANIZATION | 0.99+ |
Steve Randich | PERSON | 0.99+ |
Doug | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Gary | PERSON | 0.99+ |
Gary Mikula | PERSON | 0.99+ |
December | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sid | PERSON | 0.99+ |
Orlando | LOCATION | 0.99+ |
Siddharta "Sid" Dadana | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Doug Merritt | PERSON | 0.99+ |
Siddhartha Dadana | PERSON | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
two tools | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
two ways | QUANTITY | 0.99+ |
John Brady | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
US | LOCATION | 0.99+ |
three years ago | DATE | 0.98+ |
one part | QUANTITY | 0.98+ |
D.C. | LOCATION | 0.98+ |
John Brady | PERSON | 0.98+ |
Lambda | TITLE | 0.98+ |
today | DATE | 0.97+ |
first | QUANTITY | 0.96+ |
four | QUANTITY | 0.96+ |
20 | QUANTITY | 0.96+ |
#splunkconf18 | EVENT | 0.96+ |
One | QUANTITY | 0.96+ |
.conf18 | EVENT | 0.95+ |
Cloud | TITLE | 0.95+ |
ORGANIZATION | 0.95+ | |
702 | OTHER | 0.95+ |
Global 2000 | ORGANIZATION | 0.94+ |
Splunk Cloud | TITLE | 0.93+ |
Firehose | COMMERCIAL_ITEM | 0.93+ |
Visa | ORGANIZATION | 0.93+ |
ORGANIZATION | 0.91+ | |
SecOps | TITLE | 0.9+ |
Aparna Sinha, Google & Chen Goldberg, Google Cloud | Google Cloud Next 2018
>> Live from San Francisco, it's theCUBE, covering Google Cloud Next 2018. Brought to you by Google Cloud and its ecosystem partners. >> Okay, welcome back, everyone. We're live here in San Francisco. This is theCUBE's exclusive coverage of Google Cloud's event, Next 18. Google Next 18 is the hashtag. We've got two great guests talking about services, Kubernetes, Istio, and the future of cloud: Aparna Sinha, who's the group product manager of Kubernetes, and we have Chen Goldberg, director of engineering at Google Cloud, two amazing Cube alumni. Really awesome guests here to break down why Kubernetes, why Google Cloud is really doubling down on that, Istio, and a variety of other great multi-cloud and on-premise activities. Guys, welcome to theCUBE, great to see you guys again. >> Thank you, always a pleasure. >> And again, you know, we love Kubernetes, the CNCF, and we've talked many times about, you know, we were riffing, and Lew Tucker was on, from Cisco, who loves Istio. We thought service meshes are amazing. You guys had a great open source presence with Kubeflow and a variety of other great things. The open source contribution is recognized by Diane Greene and the whole industry as number one, congratulations. Why is Istio so important? We're seeing the big news, at least for me, is this kind of nuance that Istio 1.0 is available, you get general availability. We were supposed to be kind of after Kubernetes made it, but now Istio is happening faster. Why? >> So what we've seen in the industry is that it's become almost too easy to create microservices, or services overall, but we still want to move fast. So with the industry today, how can you make sure that you have the right security policies? How do you manage those services at scale? And what Istio does, really, in one sense, is it's decoupled the service development from the service operations. So developers are free, they don't need to take care of monitoring, audit logging, network traffic, for example, but instead the operations team has really sophisticated tools to
manage all of that on behalf of the developers in a consistent way. >> You know, Chen and I did a session yesterday, a spotlight session, and it covered cloud services platform, including Istio. We had a guest from eBay, and eBay has been with Google Kubernetes Engine for a long time, and they're also a contributor to the Kubernetes open source project. They talked about how they have hundreds of microservices, and they're written in different languages, so they're using Go, Python, Ruby, everything under the sun. And as an operator, how do you figure out how the services are communicating with each other? How do you know which ones are healthy? So I asked him, you know, so how did you solve that complexity problem? And he said, boom, you use Istio. And I deployed Istio; it deploys as just kind of like a sidecar proxy, and it's auto-injected, so none of your developers have to do anything, and then it's available in every service, and it gives you so much out of the box. It gives you traffic management, it gives you security, it gives you observability, it gives you the ability to set quotas and to have SLOs, and that's really, you know, something that operators haven't had before. >> Describe SLOs for a second, what is, why is that important? >> Service level objectives. So you can see an example: you can have an availability objective, that this service should always be available, you know, 99.9 percent of the time, that's an SLO. Or, you know, the responses need to have a certain type of latency, so you can have a latency SLO. But the key here with Istio is that, as an operator, previously Jeff was working, Jeff from eBay, he was working at the VM or container or network port level. Now he's working at the service level, so he understands intelligence about the parts of the application that weren't there before, and that has two things: it makes him powerful, right, and more intelligent, and secondly the developer doesn't need to worry about those things. >> And I think one of the things for
network guys out there is that it brings policy to the equation now. I want to ask, on the auto-injection, what's the role of, how much coding is involved in doing this? >> Zero coding. >> How much developer time is involved in injecting the sidecar proxies? >> Zero. From a developer perspective, that's not something that you need to worry about. You can focus on, you know, the chatbot you're writing, or the webpage you're writing, or whatever logic you're developing that's critical for your business, that's going to make you more competitive, that's why you were hired as a developer, right? So you don't have to worry about the auto-injection of Istio, and what we announced was really managed Istio on GKE, so that's something that Google will manage for you in the future. Oh, go ahead. >> I want to say one last thing about Istio. I think it also represents the change in the transformation, because before we were all about Kubernetes and containers, but definitely when we see the adoption, the complexity is much broader. So in GCP we're actually introducing new solutions that are appropriate for that. So Istio, for example, works on both containerized applications and VM-based applications. Cloud Build, that we announced, right, it also works across applications of all types, it doesn't have to be only containers. We introduced some tools for multi-cluster management, because we know our customers have multi-cluster, the large ones do. So really thinking, in a holistic way, about how we are solving those problems. >> We've seen Google evolve its position in the enterprise. Clearly, when John and I first started talking to Google about cloud, it was like everything's going to cloud. Now we're seeing a lot of recognition of some of the challenges that enterprises face, and we heard a lot of announcements today that are resonating, or going to resonate, with the enterprise. Can you talk about the Cloud Services Platform? Is that essentially your hybrid strategy, does it encompass that? Maybe you could talk about that a little bit
closer? >> Cloud Services Platform is a big part of our hybrid cloud strategy. I mean, as a Google platform we also have networking and compute, and we bridge private and public, and that's a foundation. But Cloud Services Platform, it comes from our heritage with open source, it comes from our engagement with many large enterprises, banks, healthcare institutions, retailers, so many of them here. You know, we had HSBC speaking, we had Target speaking. We know that there are large portions of enterprise IT that are going to remain on premise, that have to remain on premise, because, you know, they're in a branch office, or they have some sort of regulatory compliance, or, you know, that's just where their developers are and they want to have a local environment. So we're very, very sensitive to and knowledgeable about that, and that's why we introduced Cloud Services Platform, as Google's technology in your environment, on-prem, so you can modernize where you are, at your own pace. >> So some of the things we heard today in the keynote, we heard support for Oracle RAC and Exadata and SAP, that's obviously traditional enterprise, the partnership with NetApp, Cloud Armor, Shielded VMs, these are all, you know, traditional enterprise things. What enterprise-grade features should we be looking for from Cloud Services Platform? >> So the first one, which I actually love the most, is GKE Policy Management. One of the things we've heard from our customers, they say, okay, portability is great, consistency is great, but we want security portability, right? They now have all of those environments; how can they ensure that they're compliant with GDPR in all of their environments, how do they manage tenants in all of their environments in the same way? And GKE Policy Management is exactly that, okay, we're allowing customers to apply the same policy while not locking them in, okay, we're fully compatible with the Kubernetes approach and the primitives of RBAC roles, but it is also aligned with GCP IAM, so you can actually
manage it once and apply it to all your environments, including Kubernetes clusters everywhere you have them. So I expect we'll have more and more effort in this area, making sure that everything is secured and consistent. >> Auto-scaling, is that enterprise-grade auto-scaling? >> Yes, yes. I mean, auto-scaling is an inherent part of Kubernetes, so Kubernetes scales your pods automatically. That's very mature, I mean, it's been stable for more than a year, probably two years, and it's used everywhere, so pod auto-scaling is something that's used everywhere. The thing about GKE is that we also do cluster auto-scaling. Cluster auto-scaling is actually harder, and we not only do it for CPUs, we do it for GPUs, which is innovative, you know, so we can scale and auto-provision your GPUs if you do machine learning. We're going to bring that on-prem too. It's not in the first version, but that's something that, with the approach that we've taken to GKE On-Prem, we're going to be adding those kinds of capabilities. >> Is that going to be in GKE On-Prem, is it just an extension, just got to get the job done, or what timeframe are we looking at? >> The API that we've built, it's a downward API that works with some sort of hardware clustering technology. Right now it's working with vSphere, right, and so basically, if your underlying technology has that capability, we will auto-scale the cluster in the future. >> You know, I've got to say, you guys are like the dynamic duo of Kubernetes, seeing you at the shows, you had the Linux Foundation events. Talk about the relationship between you guys, you have engineering, you have product management. How are you guys organized? You're moving fast, I mean, just the progress since we've been interviewing you, at CNCF, Istio, it's all just been significant since we started talking on The Cube. You see it in Kubernetes, obviously you guys have some inside knowledge of that, but it's really moving fast. How is the team organized, what's the magic internal formula that you guys are
engineering? You guys are working as a team, I've seen you guys, is it just open source style internally? Talk about some of the dynamics. >> We're working as one team. One thing I love most about the Google culture is that it's about doing the right thing for the user. Like the announcements you've seen yesterday in the keynote, there are many, many teams that have been working together, you know, to get that done, but you cannot see that, right? You don't see that there are so many different teams and different product managers and different engineering managers, all working together. But I think where we are right now is that really Google is backing Kubernetes, and you can see it everywhere, right? You can see it with our announcement about Knative, for example. So the idea of portability, the idea of no lock-in, is really important for us, the idea of open cloud, freedom of choice. So because we're all aligned to that direction, and we all agree about the principles, it's actually super easy. >> She's very modest, you know, this type of thing doesn't just happen by itself, right? I mean, of course Google has a wonderful culture, and we have a great team, but I, you know, I really enjoy working with Chen, and she is an amazing leader. She is the leader of the engineering team, and she also brings together these other teams, you know, every large company has many teams. And the announcement, at the scale that we made it, and the vision that you see, the cohesiveness of it, right, it comes from collaboration, it comes from thinking as a team, and, you know, the management and leadership Chen has brought to the Kubernetes project, and to Kubernetes and GKE and Cloud Services Platform, is phenomenal. It's an inspiration. >> I really enjoy the progress, congratulations, it's been great progress. So I hear a lot of customers talk about things like, hey, you know, when they evaluate vendors, you know, those guys have done the work, and it's kind of a categorical way of saying it's complete, they're working hard,
they're doing the right things. As you guys continue this mission, what's some of the work that you're continuing to do? If someone says, hey, have you done the work to earn the cred in the cloud, how would you describe the work that you've done, and the work that you're doing and continue to do? What would you say? >> I mean, I hope that we have done the work to, you know, to earn the credit. I think we're very, very conscientious. You know, in the Kubernetes open source project, I can say we have 300-plus contributors. We are working not just on the future functionality, but we work on the testing, we work on the QA, we work on all the documentation stuff, we work on all the nitty-gritty details. So I think that's where we earn the credit on the open source side. I think in cloud and in enterprise, well, you're seeing a lot of it here today, you know, the announcements that you mentioned, we're very, very cognizant, and one of the things that Diane said that I liked very much is, I think the industry underestimates us. >> Well, when you talk about, we look at the Kubernetes, if I can call it a playbook, it took the world by storm, obviously solving some of your own problems, you open sourced it, developed the community. Should we think about Istio the same way, are you going to use that sort of similar approach? It seems to be working. >> Yes. Doing open source is not easy, okay, managing and investing in and building something like Kubernetes requires a lot of effort, and by the way, not just from Google, we have a lot of people that are working full time just on Kubernetes. The way we look at that, we look at the things that we value the most, like portability, for example. If there is anything that we would like to make a standard, like with Knative, those are the kinds of things that we really want to bring to the industry as open
source technologies, because we want to make sure that they will work for customers everywhere, right? We need to be genuine and really stand behind what we are saying to our customers, so this is the way we look at things. Again, another example you can see with Kubeflow, right? So we actually have a lot of examples where we want to make sure that we give those options, so that's one, one is for the customer. The second thing I actually want to emphasize is the ecosystem and partners. We know that not all of the innovation will come from Google, and we want to make sure that we empower our partners and the ecosystem to build new solutions, and this is, again, another way to do it. >> Yes, I mean, because we were talking before we came on camera about the importance of ecosystems. Dave and I have covered many industries within, you know, enterprise and now cloud and big data, and I see blockchain on the horizon, another part of our coverage area. Ecosystems are super important. When you have openness, and you have inclusion, an inclusionary culture around building together and co-creation, this is the ethos of open source, but people need to make money, right? So at the end of the day, you guys are not, you're not a non-profit, you've got to make a profit, and so do the partners. So as the world turns to cloud, there's going to be new value opportunities. How do you guys view that ecosystem? Because, is it more educational, is it more just keeping up? A lot of people want to be on the right side of history with cloud, and a lot of things are changing. How do you guys view that ecosystem, in terms of nurturing it, identifying it, working with it, building it, sharing? What's your thoughts? >> Sure, you know, I believe that new technology comes with lots of opportunity. We've seen this with Kubernetes, and I think going forward we see it, it's not a zero-sum game. You know, there's a huge ecosystem that's grown up around Kubernetes, and now we see actually around Istio a huge ecosystem as
well. The types of opportunities in the value chain, I think that changes, it's not what it used to be, right? It's not so much, I think, taking care of hardware, racking and stacking hardware, it's higher level. When we talked about Istio, and how that raises the level of management, I think there's a huge role for operators, and it's a transformative role. You know, we've seen it at Google, we have this thing called site reliability engineering, SRE. It's a big thing, like, those people are gods, you know, when it comes to your services. I think that's going to happen in the enterprise, that's going to be a real role, that's an operations role. And then of course developers, their life changes, and I think even, like, for regular people, you know, for kids, for you and I, normal people, they can become developers and start writing applications. So I think there's a huge shift, that's a huge thing. >> You're touching on a lot of areas of IT transformation, you know, talking about the operations piece, and we've touched upon some of the application development. How do you guys look at IT transformation, and what are some of your customers doing? >> IT transformation is enabled by, you know, this raising of the level of abstraction, by having a multi-cluster, multi-cloud environment. What I see in the customer base is that they don't want to be limited to one type of cloud. They don't want to be limited to just what's on-prem, or just what's in any one cloud, they want to be able to consume best-of-breed. They want to be able to take what they have and modernize it, even if they can't completely rewrite it, or even if they can't completely transform it, they want to be able to participate. They even want their mainframes to be able to participate. And yeah, I had one customer say, you know, I don't want to have two platforms, a slow platform and a fast platform, I want just a fast platform. >> I want to talk about the future now, as we end the segment here, I want to get your thoughts. We're
gonna see CNCF's show coming up in Seattle in a couple months, and also Istio's got great traction, obviously, with the support and the general availability. But what's the impact to the customers? Because GKE, Google Kubernetes Engine, is evolving to be the single interface, it's almost an ease-of-use thing, because that's a real part of what you guys are trying to do, is make it easy. The abstraction layer is going to create new business models, obviously we see that with the transformation piece you were just mentioning, but at the end of the day, I've got to operate something. I'm a network guy, I might now be operating the entire environment, I'm going to enable my developers to be modern, fast, or whatever they want to be, but at the end of the day, you've got to run things, you've got to manage it. So what does GKE turn into, what's the vision? Can you share your thoughts on how this transforms, and what the trajectory looks like? >> So our goal is actually to help automate that for our customers, so they can focus elsewhere, as we said, from the operations perspective, on making things more reliable, defining the SLOs, understanding what kind of service they want to provide to their customers. And our hope, you know, you can again see it in other things that we are building, like AutoML, okay, is actually giving more tools to provide those capabilities to the application. I think you'll really see more and more that the operators will manage services, and they will do it across clusters and across environments. >> This is a new skill set, you know, it's the SRE skill set, but even bigger, because it's not just in one cloud, it's across clouds. Yeah, it's not easy. They're going to do it with centralized policy, centralized control, security, compliance, all of that. So you see SRE, which is site reliability engineering, a Google term, but you see that being a role in enterprises. And it's also knowing what services to use when, what's going to be the most cost-effective, the right service for the right job. >> That's really an important point, I agree. I
think, yeah, I think security, and I think the cost perspective, are something definitely that we'll see enterprises investing more in, and understanding how they can leverage, right, for their own benefit. >> The admin, the operator, is going to say, okay, I've got this on-prem, I've got these three different regions, I have to be that traffic coordinator to figure out who can talk to who, where should this traffic go, who should have how much quota, all of that, right? That's the operator role, that's the new role. >> So it's an opportunity for operations people who might have spent their lives managing LANs to really transform their careers. >> Yes, there's no better time to be an operator. >> I mean, I want to be an operator! >> And I can't tell you how much SRE impacts our team, like the engineering team, how much they bring the focus on the customer, the service we are giving to our customers, thinking about our services in different ways. I think that actually is super important for any engineering team, to have that balance. >> Okay, final question, just to put you on the spot, real quick answer. Great stuff, congratulations on the work you guys are doing, great to follow the progress. But I'm a customer, I'll put my customer hat on, Aparna and Chen: I can get Kubernetes on Amazon, Microsoft's got Kubernetes, why Google Cloud? What makes Google Cloud different? If Kubernetes is open, why should I use Google Cloud? >> So you're right, and the wonderful thing is that Google is actually all in on Kubernetes, and we are the first public cloud that is actually providing a managed Kubernetes on-prem, and the first cloud provider to have a GCP Marketplace with Kubernetes applications, production-ready, with our partners. So if you're all in on Kubernetes, I would say that it's obvious. >> Yeah, I see most of the customers wanting to be multi-cloud and to have choice, and that is something that, you know, is very aligned with what we're doing. >> Look at this crowd, when open source is winning. Great to have you on, Aparna and Chen, thanks for coming on.
The dynamic duo of Kubernetes! A lot of new services are happening, and we're bringing all those services here on The Cube. It's our content here from Google Cloud, Google Next. I'm John Furrier with Dave Vellante, we'll be right back, stay with us for more day two coverage after this short break, thank you.
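One concrete thread in the conversation above is the 99.9 percent availability SLO. An SLO like that implies an error budget, the downtime an operator is allowed to spend before the objective is violated. As a quick back-of-the-envelope sketch (the function name and window are our illustration, not anything from the interview):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

# A 99.9% availability SLO over a 30-day window allows roughly 43.2
# minutes of downtime; 99.99% allows roughly 4.3 minutes.
print(round(error_budget_minutes(0.999), 1))   # 43.2
print(round(error_budget_minutes(0.9999), 1))  # 4.3
```

This is why "working at the service level" matters in the discussion: the budget is defined per service, not per VM or network port.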
Dan Aharon, Google | Google Cloud Next 2018
>> Live from San Francisco, it's The Cube, covering Google Cloud Next 2018, brought to you by Google Cloud and its ecosystem partners. >> Everyone, welcome back, this is The Cube, live in San Francisco for Google Cloud, big event here, called Google Next 2018, #GoogleNext18. I'm John Furrier, with Dave Vellante, breaking down all the top stories, all the top technology news, all the stuff that they're announcing on stage, some of the executives, the product managers, customers, analysts, you name it, we want to get that signal and extract it and share that with you. Our next guest is Dan Aharon, he's the product manager for Cloud AI at Google, and Dialogflow is a hot product here under his purview. Thanks for joining us! Good to see you! >> Ah, yeah, excited to be here! >> We were bantering off camera because we love video, we love speech-to-text, we love all kinds of automation that can add value to someone's products rather than having to do a lot of grunt work, or not having any capabilities, so super excited about what you're working on, the variety of things, and this one's the biggest, Dialogflow. Talk about the product. >> Sure, yeah. >> What is it? >> Yeah, so Dialogflow is a platform for building conversational applications, conversational interfaces, so it could be chat bots, it could be voice bots, and it started from the acquisition of API.AI that we did a year and a half ago, and it's been gaining a lot of momentum since then. So last year at Google Cloud Next, we announced that we had just crossed 150,000 developers in the Dialogflow community; yesterday we just announced that we've now crossed 600,000, and yeah, it's, uh-- >> Hold on, back up, slow down. I think I just missed that. You had what, and then it turned into what? Say it again. >> So it was 150,000 last year, or over 150,000, and now it's over 600,000. >> Congratulations, that's massive. >> So yeah, I-- >> That's traction! >> It's very, very exciting. >> Four X.
(laughs) >> And yeah, you know, we're still seeing a lot of strong growth, and you know, with the new announcements we made yesterday, we think it's going to take a much larger role, especially in larger enterprises, and especially in sort of powering enterprise contact centers. >> You know, natural language processing, also known as NLP for the folks that, you know, know the jargon or don't know the jargon, it's been around for a long time. There's been a series of open source efforts, academia's done it, ontologies have been around, and it's like, it just never cracked the code. Nothing has actually blown me away over the years, until cloud came. So with cloud, you're seeing a rebirth of NLP, because now you have scale, you've got compute power, more access to data. This is a real big deal. Can you just talk about the importance of why cloud matters for NLP and other things that were, I won't say stunted, but hit a glass ceiling in their capability? Why is cloud so important, because you're seeing a surge in new services? >> Yeah, sure, so there's two big things, one is cloud, the other is machine learning and AI, and they've kind of advanced speech recognition, natural language understanding, speech synthesis, all of the big technologies that we're working on. So with cloud, there's now sort of a lot more processing that's done centrally, and there's more availability of data that you can use to train models, and that feeds well into machine learning, and so, you know, with machine learning we can do stuff that was much harder to do before machine learning existed. And with some of these new tools, like, what makes Dialogflow special is you can use it to build stuff very, very easily. So I showed last year at Google Cloud Next how you build a bot for an imaginary Google hardware store, we built the whole thing in 15 minutes and deployed it on a messaging platform, and it was done, and it's so quick and easy anyone can do it now.
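The quick bot-building flow Dan describes rests on defining intents with training phrases and letting the platform match user queries against them. A toy word-overlap matcher gives the flavor of what is being automated (this is our illustrative sketch, not Dialogflow's actual NLU, which uses machine learning models rather than keyword overlap; the intent names are invented):

```python
# Toy intent matching: score each intent by word overlap with its
# training phrases and pick the best-scoring one.
INTENTS = {
    "store.hours": ["what time are you open", "when do you close", "opening hours"],
    "order.status": ["where is my order", "track my package", "order status"],
}

def match_intent(query: str) -> str:
    words = set(query.lower().split())

    def score(phrases):
        # Best overlap between the query and any single training phrase.
        return max(len(words & set(p.split())) for p in phrases)

    best = max(INTENTS, key=lambda name: score(INTENTS[name]))
    # Fall back to a default intent when nothing overlaps at all.
    return best if score(INTENTS[best]) > 0 else "fallback"

print(match_intent("when do you close today"))  # store.hours
```

The fallback branch mirrors the advice later in the interview: when the bot can't confidently match a request, hand off rather than guess.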
>> So Dave, we could ask the Cube bot, take our transcripts and just have canned answers, maybe down the road you automate it away. >> Yeah, yeah, yeah! >> You'd kill our job! (laughs) >> No, it's pretty awesome. What's interesting is it's shifting the focus from kind of developers and IT more to the business users. So what we're seeing is, a lot of our customers, one of the people that went on stage yesterday in the Dialogflow section, they were saying that now 90% of the work is actually done by the business users that are programming the tool. >> Really? Because of a low-code type of environment? >> Yeah, you can build simple things without coding. Now, you know, if you're a large enterprise you're probably going to need to have a fulfillment layer that has code, but it's somewhat abstracted, and so you can do a lot of things directly in the UI without any code. >> So I get started as a business user, develop some function, get used to it, and then learn over time and add more value, and then bring in my real hardcore devs when I really want some new functions. >> Right. So what it handles is understanding what the user wants. So if you're building a Cube bot, what Dialogflow will do is help you understand what the user is saying to the Cube bot, and then what you need to bring in a developer for is to then fulfill it. So if you want, for example, every time they ask for Cube merchandise, you want to send them a shirt or a toy or something, you want your developer to connect it to your warehouse or wherever. >> Give us the best bot content you have. >> Right. >> There it is. >> So how would we go about that? We have all this corpus of data that we ingest, and what would we do with that? Take us through an example.
>> So you would want to identify what are the really important use cases that you want to fulfill. You don't want to do everything, you're going to spread yourself thin and it won't be high quality. You want to pick what are the 20% of things that drive 80% of the traffic, and then fulfill those, and then for the rest, you probably want to just transition to a human and have it handled by a human. >> So, let's say for us we want it to be topical, right? So would we somehow go through and auto-categorize the data and pick the top topics, and say, okay, now we want the chat bot to be able to answer questions about the most relevant content in these five areas, ten areas, or whatever. Would that be a reasonable use case that you could actually tackle? >> Yeah, definitely. You know, there's a lot of tools, some that Google offers, some that others offer, that can do that kind of categorization, but you would want to kind of figure out the important use cases that you want to fulfill, and then sort of build paths around them. >> Okay, and then you've got ML behind this, and this is a function I can, this fits into your serverless strategy. You announced GA today. >> We announced GA a few months ago, but what we announced yesterday was five new features that help transform Dialogflow from a tool into sort of-- >> What are those features? Take a minute to explain. >> Sure, yeah, so first is our Dialogflow phone gateway. What it does is it can turn any bot into an IVR, and it takes 30 seconds to set up. You basically just choose a phone number and it attaches a phone number, and it costs zero dollars per month, zero, nothing, you just pay for usage if it actually goes above a certain limit. And then it does all of the speech recognition, speech synthesis, natural language understanding, orchestration, it does it all for you. So setting up an IVR, a few years ago, used to be something that you needed millions of dollars to do. >> A science project!
Yeah absolutely! >> Now you can do it in a few minutes. >> Wow! >> Second is our knowledge connectors. What it does is it lets you incorporate enterprise knowledge into your chat bot, it could be either FAQs or articles, and so now if you have some sort of FAQ, again, in like less than a minute, you can build it into Dialogflow without having to create intents for it. Then there are a few other smaller ones that we introduced: also our speech synthesis, automatic spell correction, which is really important for a chat bot because people always have typos, I'm guilty just as much as everyone. Last but not least, sentiment analysis, which helps you understand when you want to transition to a human, for example, if you have someone sort of that's not super happy-- >> Agent! >> Yeah, exactly! >> And some of these capabilities were available separately, so for example, you could have built a phone gateway and connected it to Dialogflow before, but it used to be a big project that took a lot of work. So we had a guest speaker yesterday in the session for Dialogflow, and they've been running a POC with a few vendors right now, it's been going on for a few months, and they told us that with Dialogflow, phone gateway, and knowledge connectors, they were able to build something in a few hours that took a few months to do with other vendors, because they had to stitch together multiple services, configure them, set them up, do all of that. >> So the use case for this, just to kind of, first of all, chat bots have been hot for a while, super great, but now you have an integrated complex system behind it powering an elegant front end. I could see this as a great bolt-on to products, whether it's websites or apps, how-tos, instrumentation, education, a lot of different apps, that seems to be the use case. How does someone learn more about how they get involved? Do they go to the website, download some code? Just take us through, I want to jump in tomorrow or now, what do I do?
There's a free edition I can have, right? >> Exactly, yeah, so the good news is you can go to either cloud.google.com/dialogflow or dialogflow.com. If you go to dialogflow.com you can sign up for the standard edition, which is 100% free, it's for text interactions, and it's unlimited up to a small amount of traffic, and you can even play around with the phone gateway and knowledge connectors with a limited amount, without even giving a credit card. If you want cloud terms of service and enterprise-grade reliability, we also offer Dialogflow Enterprise Edition, which is available on cloud.google.com, and you can sign up there. >> That comes with an SLA that-- >> Exactly, an SLA, and, like, the Cloud terms of service, and everything that's kind of attached with that. I'd also encourage people to check out the YouTube clip of the session that was yesterday, where we demoed all of these new features. >> What was the name of the session? >> Automating your contact center with virtual agents. >> Okay, check that out on YouTube, good session. Okay, so take us through the roadmap. You're on the product, you're the product manager, so you've got to decide priorities, maybe cut some things, make things work better. What's on the roadmap, what are the guiding principles, what's the north star for this product? >> Yeah, so for us it's all about the quality of the end user experience. So the reality is there's many thousands of bots out there in the world, and most of them are not great. >> I'll say, most of them really suck.
(laughs) >> If you Google for "why chat bots," "why chat bots fail" is the first result, and so that's kind of our north star, we want to solve that. We want to help different developers, whether they're startups or experienced enterprises, we want to help them build high-quality bots, and so a lot of the features we announced yesterday are kind of part of that journey. For example, the integrated sentiment analysis helps with the experience as you transition to humans, because we know we can't solve everything, so it helps you understand, or knowledge connectors-- >> Automation helps to a certain point, but humans are really important, that crossover point. Trying to understand that's important. >> Exactly, and we'd rather help people build bots that are focused on specific use cases, but do them really, really well, versus do a lot, but leave users with a feeling that they were talking to a bot that doesn't understand them, and have a bad experience. >> We could take all the questions we've done on The Cube, Dave, and turn them into a chat bot. What's the future of bots? >> Yeah. >> Go ahead, answer the question. (laughs) >> So I think, in the last year or two, we've been at an inflection point, where speech recognition has advanced dramatically, and it's now good enough that it can understand really complex questions. So you can see, with sort of Google Assistant and Google Home and a bunch of other things, that people can now converse with bots and get sort of reasonably good answers back. >> And that just feeds ML in a big way. >> Right, exactly, so now, you know, Dialogflow just introduced speech recognition yesterday, and so we're now looking to empower all of our developers to build these amazing voice-based experiences with Dialogflow. >> Give an anecdote or an experience that the customers had where you guys were like, wow, that blew me away!
That is so cool, or that is just so technically amazing, or that was unique and we never saw that coming. Give us, share some color commentary around some of the implementations of the bot, bot world and Dialogflow's impact on someone's business or life. >> Sure, so I think yesterday the Ticketmaster team was showing how they look at their current IVR that's based in the old world, where you have to give very short responses like yes or no, or like San Francisco, California, and because it's built on these short responses, it's kind of a guided IVR, it takes 11 steps-- >> What's an IVR again? >> Interactive Voice Response, it's a system that answers the phone. >> Just want to get the jargon right. >> So now with something like Dialogflow they can go and build something that instead of 11 steps takes 3 steps, because someone can just say, I'd like to buy tickets for so-and-so, and complete the sentence. And the cool thing is, sort of, the example that they gave was a recording that I made with them about a year plus ago, and the example was, I'd like to book tickets for the Chainsmokers, and then they were showing it yesterday in the conference. They were like, oh, we know why you chose it, it's because the Chainsmokers are performing at Google Cloud Next! It's probably just a funny coincidence but... >> So they've deployed this now or they're in the process of deploying it? >> They're in the process of deploying it, first for customer service, and at a later stage it's going to be for sales as well. >> Yeah, because of the IVR for Ticketmaster today, I know it well, I'm a customer, I love Ticketmaster, but you're right, it tells you what you just asked them pretty well, but it really doesn't quite solve your problem well, so. >> I mean, the sales one was built a long time ago, but they're kind of overhauling all of that.
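The Ticketmaster shift from an 11-step guided IVR to a 3-step conversation comes down to pulling every slot out of one free-form sentence instead of prompting for them one at a time. A minimal sketch follows — the regex parsing is a deliberately crude stand-in for the ML-based intent and entity extraction Dialogflow actually does, and the phrasing patterns are assumptions:

```python
import re

# Hypothetical slot extraction from a single utterance -- a crude
# stand-in for Dialogflow's intent matching and entity extraction.

def extract_ticket_request(utterance):
    """Pull the artist (and an optional city) out of one sentence."""
    slots = {}
    m = re.search(
        r"tickets? (?:for|to see) (?:the )?([\w ]+?)(?: in ([\w ]+))?$",
        utterance.strip().rstrip("."),
    )
    if m:
        slots["artist"] = m.group(1)
        if m.group(2):
            slots["city"] = m.group(2)
    return slots

print(extract_ticket_request("I'd like to buy tickets for the Chainsmokers"))
# -> {'artist': 'Chainsmokers'}
# A guided IVR would instead prompt for artist, city, date... step by step.
```

Even this toy version shows why completing the sentence collapses the step count: one utterance carries several slots at once, so the system only has to ask follow-ups for whatever is still missing.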
>> I'm excited to see it because it's a good point of comparison, you know, a good reference point that you understand. The takeaway that I'm getting, Dan, is the advice you're giving is, nail the use case, narrow it down, and then start there, don't try to do too wide of a scope. >> Exactly, exactly. The most important thing is delivering great end user experiences, because you want people to really enjoy talking to the bot. In surveys, 60% of consumers say that the thing they want to improve most in customer service is getting more self-serve tools. They're not looking to talk to humans, but they're forced to because the self-service tools, yeah, they're terrible. >> If I can get it done quickly self-served, I'd love that every time. I'd serve myself gas and a variety of other things, airport kiosks have gotten so much better, I don't mind those anymore. Okay, one quick follow up on Dave's point about making a focus, I totally agree, that's a great point. Is there a recommendation on how the data should be structured on the ingest side? What's the training data, is there a certain best practice you recommend on having certain kinds of data? Is it Q and A, is it just text that speaks this way, is it just a blob of data that gets parsed by the engine? Take us through on the data piece. >> So that really changes a lot, depending on the specific use case, the specific companies, the specific customers. Someone in the audience yesterday asked the guest speakers how many intents they built in Dialogflow, and each one of them had a very different answer, so it depends a lot.
But I would say the goal is to kind of focus on the top use cases that really matter, build high quality conversations, and then build a lot of intents and text examples in those. And when I say a lot, we don't need a lot, because Dialogflow is built on machine learning. Sometimes a few dozen is enough, or maybe a couple hundred if you need to, but we see people trying tens of thousands; we don't need that much data. And then for the other stuff that's not in your core use cases, that's where you can use things like knowledge connectors, or other ways to respond to people rather than manually building them in, or just divert them to human associates that can handle those. >> Great job, Dan! So you're the lead product manager? >> I'm the lead product manager on Dialogflow Enterprise Edition, and there's a large team kind of working with me. >> How big is the team? Roughly. >> We don't talk about that, actually. >> What other products do you own? >> I'm also product manager for Cloud Speech-to-Text and Cloud Text-to-Speech. >> Well, awesome. Glad to have you on, thanks for sharing. Super exciting, love the focus. I think it's a great strategy, having something that's not a one-trick-pony bot kind of model, having something that is more comprehensive. See, that's why bots fail. But I think there's a real need for great self service, it's the Google way, search yourself, get out quick. Get your results, I mean, it's the Google ethos. (laughs) Get in, get your answer. >> Yeah, we're all about democratizing AI, so now with Cloud Speech-to-Text and Cloud Text-to-Speech, we put the power of Google speech recognition and speech synthesis into the hands of any developer. Now with Dialogflow we are taking that a step further, anyone can build their voice bots with ease, what used to cost like millions of dollars, and you don't need special expertise.
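Dan's point that a few dozen training phrases per intent can be enough — because the platform generalizes with machine learning rather than exact matching — can be illustrated with a toy matcher. The intents, phrases, and token-overlap scoring below are all hypothetical; Dialogflow's real model is nothing this simple:

```python
# Toy intent matcher: each intent is defined by a handful of training
# phrases, and a new utterance is matched by token overlap (Jaccard
# similarity). Illustration only -- the real platform uses ML and
# generalizes far beyond literal word overlap.

INTENTS = {
    "buy_tickets": [
        "I want to buy tickets",
        "book tickets for a show",
        "get me two seats",
    ],
    "refund": [
        "I want my money back",
        "refund my order",
        "cancel and refund",
    ],
}

def tokens(text):
    return set(text.lower().split())

def match_intent(utterance):
    """Return the intent whose training phrases best overlap the utterance."""
    best, best_score = None, 0.0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            a, b = tokens(utterance), tokens(phrase)
            score = len(a & b) / len(a | b)  # Jaccard similarity
            if score > best_score:
                best, best_score = intent, score
    return best

print(match_intent("can I buy tickets please"))  # -> buy_tickets
```

The takeaway mirrors the interview: a small, well-chosen set of examples per intent goes a long way, and everything outside the core use cases is better served by knowledge connectors or a handoff to a human than by piling on tens of thousands of phrases.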
>> Alright, Dan Aharon is the product manager for the Dialogflow Enterprise Edition, doing Cloud AI for Google, bringing you all the best dialog here on the Cube, doing our part. Soon we'll have a Cube bot, you can ask us any question, and we'll have a canned answer from one of the Cube interviews. Dave Vellante is here with me, I'm John Furrier, thanks for watching! Stay with us, we'll be right back! (music)
Todd McIlroy, Eastern Bank | WTG Transform 2018
>> From Boston, Massachusetts, it's the Cube, covering WTG Transform 2018, brought to you by Winslow Technology Group. >> Welcome back, I'm Stu Miniman and this is the Cube's coverage of WTG Transform 2018. We're excited, we've actually gotten to speak to quite a few end users here at the show, which is always one of our favorite things to do on the Cube. Joining me, first time on the program, Todd McIlroy, who's Vice President and Systems Engineering Manager at Eastern Bank. Thanks so much for joining me. >> You're welcome, glad to be here. >> All right, so first of all, how many of these shows have you been to? >> This is my first WTG event, but I tend to go to several smaller conferences per year to keep on top of the technology and to network. With so much rapid innovation these days, you need to get out there and talk to people and learn little tidbits of information at each one, do a couple of conferences each year. >> Yeah, you said technology's changing really fast, which is great. My joke usually is, you're in financial, things don't change fast, there's nothing going on. What is happening in your role these days? >> Eastern Bank is a 200-year-old bank, we're celebrating our 200th anniversary this month. >> Hey, congratulations. >> A big celebration next week. A lot of traditional architecture. It's a very large bank, an 11 billion dollar bank, but with a very small community feel as well. Even our team is very tight, and very close together. Even though it's an older bank, we have a lot of innovation and technology going on, in the last few years especially. >> It's one of those things, if I was going to start a company, or had started a company in the last five or ten years, here's the technology I'd choose, here's the applications I'd roll out. It's a 200-year-old bank, walk us through a little bit about the pros and cons of having that legacy, if you will.
>> We have been around, like I said, for 200 years, and a lot of the technology we have on premises includes a lot of legacy applications. We have to keep supporting that for a long time. We're challenged with keeping those older systems up and running, as well as providing new technology to the business so they can innovate and bring new and better products to the market. Both worlds. >> Tell me what's under your purview when it comes to the bank. >> When it comes to the bank, I manage the systems engineering team, the team that does servers and virtualization, storage. We're getting into the cloud as well, there's a big push to start innovating in the cloud to allow our developers to use services to help them innovate faster and better. That sort of thing. >> Okay, so in the keynotes this morning, there's the discussion really of hybrid cloud. When you say in the cloud, that tends to make me think of public cloud, maybe some SaaS in there. Tell me what cloud means to your organization. >> Well, right now our cloud footprint is primarily software-as-a-service type applications, like Office 365. We had a major migration to move our email and Skype and user-facing applications to the cloud. But we're also trying to expand our footprint in the cloud so we can enable services for our customers, internal customers, to innovate as well. That's why we're looking at technology like Nutanix, the innovation that they're bringing to market, to allow our developers to be more self-sufficient, provide the platform for them, allow them to innovate and develop on a platform, both on premises to keep it secure, as well as in the cloud to keep it secure as well. >> Okay, it's interesting. I was actually talking to some of the Nutanix team here and I've been talking to customers that are doing development, playing with the containers, things like that. A couple years ago, if you were talking about developers, you'd say, oh, okay, they're building something in the public cloud.
>> Right. >> Because that's where it is. Help us understand how you decide, what do I start playing with in the cloud, and where does Nutanix fit into that discussion? >> We're a relatively new Nutanix customer, within the last year or two. We started with a small concept to give some workloads for the developers to work on, but now we've expanded upon that and it's now become our primary production platform. It's going to take a lot of our older hypervisor virtualization technology and move it to Nutanix, so we're trying to grow that footprint because the amount of innovation that they're bringing with Calm and Flow and all those sorts of new services is going to enable us to build a platform that they can develop on a lot better. >> Great, what virtualization are you using on this? >> For? >> For the Nutanix, like are you using VMware, >> No, we're AHV, native AHV. >> Using all AHV? >> All AHV, yup. >> Okay, were you VMware before? >> Not at Eastern, no. We have a small VMware footprint for a specialized application, but the rest of our virtualization platform is Hyper-V. >> Okay, and you said you're running in production, does Nutanix run all of your on-premises applications? >> No, it doesn't, we still have a lot of physical infrastructure as well, but anything that's new is going towards Nutanix. We have some older hardware that we're aging out; as those age out and we have to expand, the new hardware is going to go with the Nutanix platform. >> You've got some i-Series sitting in the back, I'm sure, the old AS/400, most banks have, things like that. >> Exactly, exactly, yeah. >> Great, tell us a little more from a cloud standpoint, how did you determine what goes where? >> We haven't really determined that yet, we're really in the early stages of our own adoption of the cloud. We're really taking the first steps and making sure we are governing it properly, and we're finding the right use case for it. Really, we're trying to find the right use case for our developers.
We had some meetings recently and we outlined a few things that we could target. So we're really taking our first steps, getting our own competencies up with our own engineers and our developers, learning from the people that have already done it, maybe learning from some mistakes they've made, and using partners like Winslow to help us get there. >> Great, can you speak a little about it from an operational standpoint? You've talked about developers, you've got public cloud, you've got your infrastructure, how do those all play together? >> Well, today, systems engineering, my department, is really the go-to department when they request a service, or request a new server, or request a new application be built. We interface with a lot of different teams at the bank, so we're really the go-to team that is going to help them innovate. They know what applications they need to run; they make requests for services. We're trying to reduce the time to fulfillment, allowing them to have a platform they can build on, innovate, and be more self-sufficient. >> Yeah, you bring up a really interesting thing. How long people think it will take from when I ask for something to when I get it. It used to be, I put in a support ticket, 24, 48 hours, that was great. Some things it's like, ah heck, we're going to have to buy a server, or allocate different pieces. Today it's, come on, it's instantaneous. >> Yeah, everybody's ready to go. >> Talk to that, the good and the bad of that from your standpoint. >> That's what we want to be in the business of: allowing them to be self-sufficient, building the platform for them. We don't want to be managing building VMs over and over again. We want to templatize things and allow them to be on their own timeline, to be able to develop, deploy, break down, so we're really trying to innovate in that way. I think that's our job as an engineering team, to provide that to the business so they can innovate more quickly.
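The templatizing Todd describes — the engineering team defines a standard build once so developers can self-serve instead of filing tickets — can be sketched as a request validated against an approved template. The template names, sizes, and fields below are hypothetical, not anything from Eastern Bank or Nutanix:

```python
# Hypothetical self-service VM request checked against templates the
# engineering team defines once. Names and sizes are illustrative only.

TEMPLATES = {
    "dev-small": {"cpus": 2, "memory_gb": 8, "disk_gb": 100},
    "dev-large": {"cpus": 8, "memory_gb": 32, "disk_gb": 500},
}

def provision(template_name, owner):
    """Build a VM spec from an approved template, tagged with its owner
    so sprawl and cost can be tracked back to a team."""
    if template_name not in TEMPLATES:
        raise ValueError(f"unknown template: {template_name}")
    spec = dict(TEMPLATES[template_name])  # copy so the template stays pristine
    spec.update({"owner": owner, "template": template_name})
    return spec

vm = provision("dev-small", owner="app-team")
print(vm["cpus"], vm["owner"])  # -> 2 app-team
```

The design point is the one Todd makes: the engineering team stops hand-building VMs over and over, and developers get a platform they can deploy and tear down on their own timeline, within guardrails the team set once.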
>> Love the idea of self-service. The concern always is sprawl. It used to be I stuck a server in the corner and then I forgot about it. Now VMs pop up and I forget about them all the time. Doesn't the CFO ever come say, hey, am I really using all this stuff? >> That's the big driving factor, cost, making sure you're not going to run up a huge cost in the cloud. I think governance, and managing that, and using tools, and using some of the use cases and the knowledge of others that have been there before, helps you build that framework so you're not breaking the budget or whatnot. You find the right use case, whether it's on premises or in the public cloud. >> Great, last thing, Todd, as you look forward through the rest of 2018, any interesting new technologies you're looking at, or other things coming down the pike, other than celebrating the big 200th anniversary? >> Well, I mean, I'm really excited that the bank has made a commitment to move towards innovation and using cloud technology, so I'm really excited to be part of the team that's going to help innovate and drive the business forward in that regard. >> Todd McIlroy, I really appreciate the updates here. Eastern Bank, a 200-year-old company driving innovation forward. This has been our live coverage here from WTG Transform 2018. Be sure to check out thecube.net for all the shows we're going to be at, and all the replays from where we've been before. Stu Miniman, once again thank you so much to Winslow Technology Group, their partners, and of course all the customers. Thanks so much, and to the viewers also, for watching, thank you. (techno music)