
Teresa Carlson, Flexport | International Women's Day


 

(upbeat intro music) >> Hello everyone. Welcome to theCUBE's coverage of International Women's Day. I'm your host, John Furrier, here in Palo Alto, California. Got a special remote guest coming in. Teresa Carlson, President and Chief Commercial Officer at Flexport, theCUBE alumni, one of the first, let me go back to 2013, Teresa, former AWS. Great to see you. Thanks for coming on. >> Oh my gosh, almost 10 years. That is unbelievable. It's hard to believe so many years of theCUBE. I love it. >> It's been such a great honor to interview you and follow your career. You've had quite the impressive run, executive level woman in tech. You've done such an amazing job, not only in your career, but also helping other women. So I want to give you props for that before we get started. Thank you. >> Thank you, John. I, it's my, it's been my honor and privilege. >> Let's talk about Flexport. Tell us about your new role there and what it's all about. >> Well, I love it. I'm back working with another Amazonian, Dave Clark, who is our CEO of Flexport, and we are about 3,000 people strong globally in over 90 countries. We actually even have, we're represented in over 160 cities and with local governments and places around the world, which I think is super exciting. We have over 100 network partners and growing, and we are about empowering the global supply chain and trade and doing it in a very disruptive way with the use of platform technology that allows our customers to really have visibility and insight into what's going on. And it's a lot of fun. I'm learning new things, but there's a lot of technology in this as well, so I feel right at home. >> You have quite a knack for mastering growth, technology, and building out companies. So congratulations, and scaling them up too with the systems and processes. So I want to get into that. Let's get into your personal background. Then I want to get into the work you've done and are doing for empowering women in tech. What was your journey about, how did it all start? Like, I know you had a, you know, bumped into it, you went Microsoft, AWS. Take us through your career, how you got into tech, how it all happened. >> Well, I do like to give a shout out, John, to my roots and heritage, which was as a speech and language pathologist. So I did start out in healthcare right out of, you know, university. I had an undergraduate and a master's degree. And I do tell everyone now, looking back at my career, I think it was super helpful for me because I learned a lot about human communication, and it has done me very well over the years to really try to understand what environments I'm in and what kind of individuals are around the world culturally. So I'm really blessed that I had that opportunity to work in healthcare, and by the way, a shout out to all of our healthcare workers that have helped us get through almost three years of COVID and flu and norovirus and everything else. So started out there and then kind of almost accidentally got into technology. My first small company I worked for was a company called Keyfile Corporation, which did workflow and document management out of Nashua, New Hampshire. And they were a Microsoft Gold partner. And that is actually how I got into the big tech world. We ran on Exchange, for everybody who knows that term Exchange, and we were a large small partner, but large in the world of Exchange. And those were the days when you would, in the late nineties, you would go and be in the same room with Bill Gates and Steve Ballmer. 
And I really fell in love with Microsoft back then. I thought to myself, wow, if I could work for a big tech company, I got to hear Bill on stage about saving, he would talk about saving the world. And guess what my next step was? I actually got a job at Microsoft, took a pay cut and a job downgrade. I tell this story all the time. Took like three downgrades in my role. I had been an SVP and went to a manager, and it's one of the best moves I ever made. And I shared that because I really didn't know the world of big tech, and I had to start from the ground up and relearn it. I did that, I just really loved that job. I was at Microsoft from 2000 to 2010, where I eventually ran all of the U.S. federal government business, which was a multi-billion dollar business. And then I had the great privilege of meeting an amazing man, Andy Jassy, who I thought was just unbelievable in his insights and knowledge and openness to understanding new markets. And we talked about government and how government needed the same great technology as every startup. And that led to me going to work for Andy in 2010 and starting up our worldwide public sector business. And I pinch myself some days because we went from two people, no offices, to the time I left we had over 10,000 people, billions in revenue, and 172 countries and had done really amazing work, I think changing the way public sector and government globally really thought about their use of technology and Cloud computing in general. And that kind of has been my career. You know, I was there till 2020, '21 and then did a small stint at Splunk, a small stint back at Microsoft doing a couple projects for Microsoft with CEO Satya Nadella, who is also another amazing CEO and leader. And then Dave called me, and I'm at Flexport, so I couldn't be more honored, John. I've just had such an amazing career working with amazing individuals. >> Yeah, I got to say the Amazon one is well-documented, certainly by theCUBE and our coverage. We watched you rise and scale that thing. And like I said at the time, this will, when we look back, be seen as a historic run because of the build out. I mean, going from zero to massive billions at a historic time where government was transforming. I would say Microsoft had a good run there with Fed, but it was already established stuff. Federal business was like, you know, blocking and tackling. The Amazon was pure build out. So I have to ask you, what was your big learnings? Because one, you're a Seattle big tech company, kind of entrepreneurial in the sense of you got, here's some working capital, seed finance, and go build that thing, and you're in DC and you're a woman. What did you learn? >> I learned that you really have to have a lot of grit. You, my mom and dad, these are kind of more southern roots words, but stick-with-itness, you know. You can't give up and no's not in your vocabulary. I found no is just another way to get to yes. That you have to figure out what are all the questions people are going to ask you. I learned to be very patient, and I think one of the things, John, for us, our secret sauce was we said to ourselves, if we're going to do something super transformative and truly disruptive, like Cloud computing, which the government really had not utilized, we had to be patient. We had to answer all their questions, and we could not judge in any way what they were thinking because if we couldn't answer all those questions and prove out the capabilities of Cloud computing, we were not going to accomplish our goals. 
And I do give so much credit to all my colleagues there from everybody like Steve Schmidt who was there, who's still there, who's the CISO, and Charlie Bell and Peter DeSantis and the entire team there that just really helped build that business out. Without them, you know, we would've just, it was a team effort. And I think that's the thing I loved about it was it was not just sales, it was product, it was development, it was data center operations, it was legal, finance. Everybody really worked as a team and we were on board that we had to make a lot of changes in the government relations team. We had to go into Capitol Hill. We had to talk to them about the changes that were required and really get them to understand why Cloud computing could be such a transformative game changer for the way government operates globally. >> Well, I think the whole world and the tech world can appreciate your work and thank you later because you broke down those walls asking those questions. So great stuff. Now I got to say, you're in kind of a similar role at Flexport. Again, transformative supply chain, not new. Computing wasn't new when before Cloud came. Supply chain, not a new concept, is undergoing radical change and transformation. Online, software supply chain, hardware supply chain, supply chain in general, shipping. This is a big part of our economy and how life is working. Similar kind of thing going on, build out, growth, scale. >> It is, it's very much like that, John, I would say, it's, it's kind of a, the model with freight forwarding and supply chain is fairly, it's not as, there's a lot of technology utilized in this global supply chain world, but it's not integrated. You don't have a common operating picture of what you're doing in your global supply chain. You don't have easy access to the information and visibility. And that's really, you know, I was at a conference last week in LA, and it was, the themes were so similar about transparency, access to data and information, being able to act quickly, drive change, know what was happening. I was like, wow, this sounds familiar. Data, AI, machine learning, visibility, common operating picture. So it is very much the same kind of themes that you heard even with government. I do believe it's an industry that is going through transformation and Flexport has been a group that's come in and said, look, we have this amazing idea, number one to give access to everyone. We want every small business to every large business to every government around the world to be able to trade their goods, think about supply chain logistics in a very different way with information they need and want at their fingertips. So that's kind of thing one, but to apply that technology in a way that's very usable across all systems from an integration perspective. So it's kind of exciting. I used to tell this story years ago, John, and I don't think Michael Dell would mind that I tell this story. One of our first customers when I was at Keyfile Corporation was we did workflow and document management, and Dell was one of our customers. And I remember going out to visit them, and they had runners and they would run around, you know, they would run around the floor and do their orders, right, to get all those computers out the door. And when I think of global trade, in my mind I still see runners, you know, running around and I think that's moved to a very digital, right, world that all this stuff, you don't need people doing this. 
You have machines doing this now, and you have access to the information, and you know, we still have issues resulting from COVID where we have either an under-abundance or an over-abundance in our supply chain. We still have clogs in our shipping, in the shipping yards around the world, and the ports, so we still have some clearing to do. And that's the reason technology is important and will continue to be very important in this world of global trade. >> Yeah, great, great impact for change. I got to ask you about Flexport's inclusion, diversity, and equity programs. What do you got going on there? That's been a big conversation in the industry around keeping a focus on not making one way more than the other, but clearly every company, if they don't have a strong program, will be at a disadvantage. That's well reported by McKinsey and other top consultants: diverse workforces, inclusive, equitable, all perform better. What's Flexport's strategy and how are you guys supporting that in the workplace? >> Well, let me just start by saying really at the core of who I am, since the day I've started, understanding that as an individual and a female leader, that I could have an impact. That the words I used, the actions I took, the information that I pulled together and had knowledge of could be meaningful. And I think each and every one of us is responsible to do what we can to make our workplace and the world a more diverse and inclusive place to live and work. And I've always enjoyed kind of the thought that I could help empower women around the world in the tech industry. Now I'm hoping to do my little part, John, in that in the supply chain and global trade business. And I would tell you at Flexport we have some amazing women. I'm so excited to get to know them all. I've not been there that long yet, but I'm getting to know, we have a very diverse leadership team between men and women at Dave's level. I have some unbelievable women on my team directly that I'm getting to know more, and I'm so impressed with what they're doing. And this is a very, you know, while this industry is different than the world I live in day to day, it also has a lot of common themes to it. So, you know, for us, we're trying to approach every day by saying, let's make sure both our interviewing cycles, the jobs we fill, how we recruit people, how we put people out there on the platforms, that we have diversity and inclusion in all of that every day. And I can tell you from the top, from Dave and all of our leaders, we just had an offsite and we had a big conversation about this. It's something, it's a drumbeat that we have to think about and live by every day and really check ourselves on a regular basis. But I do think there's so much more room for women in the world to do great things. And one of the areas, as you know very well, we lost a lot of women during COVID, who just left the workforce again. So we kind of went back, unfortunately. So we have to now move forward and make sure that we are giving women the opportunity to have great jobs, have the flexibility they need as they build a family, and have a workplace environment that is trusted for them to come into every day. >> There's now clear visibility, at least in today's world, notwithstanding some of the setbacks from COVID, that a young girl can look out in a company and see a path from entry level to the boardroom. That's a big change. A lot different than even going back 10, 15, 20 years ago. 
What's your advice to the folks out there that are paying it forward? You see a lot of executive leaders have a seat at the table. The board is still underrepresented by most numbers, but at least you have now kind of this solidarity at the top, but a lot of people doing a lot more now than I've seen at the next levels down. So now you have this leveled approach. Is that something that you're seeing more of? And can you compare and contrast that to 20 years ago when you were, you know, rising through the ranks? What's different? >> Well, one of the main things, and I honestly do not think about it too much, but there were really no women. There were none. When I showed up in the meetings, I literally, it was me or not me at the table, but at the seat behind the table. The women just weren't in the room, and there were so many more barriers that we had to push through, and that has changed a lot. I mean globally that has changed a lot in the U.S. You know, if you look at just our U.S. House of Representatives and our U.S. Senate, we now have an increasing number of women. Even at leadership levels, you're seeing that change. You have a lot more women on boards than we ever thought we would ever represent. While we are not there, more female CEOs that I get an opportunity to see and talk to. Women starting companies, they do not see the barriers. And I will share, John, globally, in the U.S., one of the things that I still see that we have that many other countries don't have, which I'm very proud of, women in the U.S. have a spirit about them that they just don't see the barriers in the same way. They believe that they can accomplish anything. I have two sons, I don't have daughters. I have nieces, and I'm hoping someday to have granddaughters. But I know that a lot of my friends who have granddaughters today talk about the boldness, the fortitude, that they believe that there's nothing they can't accomplish. And I think that's what we have to instill in every little girl out there, that they can accomplish anything they want to. The world is theirs, and we need to not just do that in the U.S., but around the world. And it was always the thing that struck me when I did all my travels at AWS and now with Flexport, I'm traveling again quite a bit, is just the differences you see in the cultures around the world. And I remember even in the Middle East, how I started seeing it change. You've heard me talk a lot on this program about the fact that in both Saudi and Bahrain, over 60% of the tech workers were females and most of them held the hardest jobs, the security, the architecture, the engineering. But many of them did not hold leadership roles. And that is what we've got to change too. To your point, the middle, we want it to get bigger, but the top, we need to get bigger. We need to make sure women globally have opportunities to hold the most precious leadership roles and demonstrate their capabilities at the very top. But that's changed. And I would say the biggest difference is when we show up, we're actually evaluated properly for those kinds of roles. We have a ways to go. But again, that part is really changing. >> Can you share, Teresa, first of all, that's great work you've done and I want to give you props for that as well and all the work you do. I know you champion a lot of, you know, causes in this area. One question that comes up a lot, I would love to get your opinion 'cause I think you can contribute heavily here, is mentoring and sponsorship. It's huge, comes up all the time. 
What advice would you share with folks out there who are, I won't say apprehensive, but maybe nervous about how to do the networking and sponsorship and mentoring? It's not just mentoring, it's sponsorship too. What's your best practice? What advice would you give for the best way to handle that? >> Well yeah, and for the women out there, I would say on the mentorship side, I still see mentorship. Like, I don't think you can ever stop having mentorship. And I like to look at my mentors in different parts of my life because if you want to be a well-rounded person, you may have parts of your life every day that you think, I'm doing a great job here and I definitely would like to do better there. Whether it's your spiritual life, your physical life, your work life, you know, your leisure life. But I mean there's, and there's parts of my leadership world that I still seek advice from as I try to do new things even in this world. And I tried some new things in between roles. I went out and asked the people that I respected the most. So I just would say for sure have different mentorships and don't be afraid to have that diversity. But if you have mentorships, the second important thing is show up with a real agenda and questions. Don't waste people's time. I'm very sensitive today. If you're, if you want a mentor, you show up and you use your time super effectively and be prepared for that. Sponsorship is a very different thing. And I don't believe we actually do that still in companies. We worked, thank goodness for my great HR team. When I was at AWS, we worked on a few sponsorship programs for diversity in general, where we would nominate individuals in the company that we felt had a lot of opportunity for growth, but they just weren't getting a seat at the table. And we brought 'em to the table. And we actually kind of had Chatham House rules where when they came into the meetings, they had a sponsor, not a mentor. They had a sponsor that was with them the full 18 months of this program. We would bring 'em into executive meetings. They would read docs, they could ask questions. We wanted them to be able to open up and ask crazy questions without, you know, feeling wow, I just couldn't ask this question in a normal environment or setting. And then we tried to make sure once they got through the program that we found jobs and support and other special projects that they could go do. But they still had that sponsor and that group of individuals that they'd gone through the program with, John, that they could keep going back to. And I remember sitting there and they asked me what I wanted to get out of the program, and I said two things. I want you to leave this program and say to yourself, I would've never had that experience if I hadn't gone through this program. I learned so much in 18 months. It would've probably taken me five years to learn. And that it helped them in their career. The second thing I told them is I wanted them to go out and recruit individuals that look like them. I said, we need diversity, and unless you all feel that we are in an inclusive environment sponsoring all types of individuals to be part of this company, we're not going to get the job done. And they said, okay. And you know, but it was really, one, it was very much about them. That we took a group of individuals that had high potential and very diverse backgrounds, held 'em up, taught 'em things that gave them access. 
And two, selfishly I said, I want more of you in my business. Please help me. And I think those kinds of things are helpful, and you have to be thoughtful about these kinds of programs. And to me that's more sponsorship. I still have people reach out to me from years ago, you know, Microsoft, saying, you were so good with me, can you give me a reference now? Can you talk to me about what I should be doing? And I try to, I'm not 100%, some things fall through the cracks, but I always try to make the time to talk to those individuals because for me, I am where I am today because I got some of the best advice from people like Don Byrne and Linda Zecker and Andy Jassy, who were very honest and upfront with me about my career. >> Awesome. Well, you got a passion for empowering women in tech, paying it forward, but you're quite accomplished and that's why we're so glad to have you on the program here. President and Chief Commercial Officer at Flexport. Obviously a storied career, and your other jobs, specifically Amazon I think, are historic in my mind. This next chapter looks like it's looking good right now. Final question for you, for the few minutes you have left. Tell us what you're up to at Flexport. What are your goals as President, Chief Commercial Officer? What are you trying to accomplish? Share a little bit, what's on your mind with your current job? >> Well, you kind of said it earlier. I think if I look at my own superpowers, I love customers, I love partners. I get my energy, John, from those interactions. So one is to come in and really help us build an even better world-class enterprise global sales and marketing team. Really listen to our customers, think about how we interact with them, build the best executive programs we can, think about new ways that we can offer services to them and create new services. One of my favorite things about my career is I think if you're a business leader, it's your job to come back around and tell your product group and your services org what you're hearing from customers. That's how you can be so much more impactful, that you listen, you learn, and you deliver. So that's one big job. The second job for me, which I am so excited about, is that I have an amazing group called flexport.org under me. And flexport.org is doing amazing things around the world to help those in need. We just announced this new funding program for Tech for Refugees, which brings assistance to millions of people in Ukraine, Pakistan, the Horn of Africa, and those who are affected by earthquakes. We just took supplies into Turkey and Syria, and Flexport, recently in fact, just sent three air shipments to Turkey and Syria for these. And I think we did over a hundred trucking shipments to get earthquake relief. And as you can imagine, it was not easy to get into Syria. But you know, we're very active in Ukraine, and our goal for flexport.org, John, is to continue to work with our commercial customers and team up with them when they're trying to get supplies in, to do that in a very cost effective, easy way, as quickly as we can. So that not-for-profit side of me, I'm so, I'm so happy. And you know, Ryan Petersen, who was our founder, this was his brainchild, and he's really taken this to the next level. So I'm honored to be able to pick that up and look for new ways to have impact around the world. And you know, I've always found that I think if you do things right with a company, you can have a beautiful combination of commerciality and giving. 
And I think Flexport does it in such an amazing and unique way. >> Well, the impact that they have with their system and their technology with logistics and shipping and supply chain is a channel for societal change. And I think that's a huge gift that you have under your purview. So looking forward to finding out more about flexport.org. I can only imagine all the exciting things around sustainability, and we just had Mobile World Congress for a big theCUBE broadcast, and 5G's right around the corner. I'm sure that's going to have a huge impact on your business. >> Well, for sure. And just on emissions, that's another thing, we are tracking greenhouse gas emissions. And in fact we've already reduced more than 300,000 tons and supported over 600 organizations doing that. So that's a thing, we're also trying to make sure that we're being climate aware and ensuring that we are doing the best job we can at that as well. And that was another thing I was honored to be able to do when we were at AWS, is to really cut out greenhouse gas emissions and really go global with our climate initiatives. >> Well Teresa, it's great to have you on. Security, data, 5G, sustainability, business transformation, AI all coming together to change the game. You're in another hot seat, hot role, big wave. >> Well, John, it's an honor, and just thank you again for doing this and having women on and really representing us in a big way as we celebrate International Women's Day. >> I really appreciate it, it's super important. And these videos have impact, so we're going to do a lot more. And I appreciate your leadership in the industry and thank you so much for taking the time to contribute to our effort. Thank you, Teresa. >> Thank you. Thanks everybody. >> Teresa Carlson, the President and Chief Commercial Officer of Flexport. I'm John Furrier, host of theCUBE. This is the International Women's Day broadcast. Thanks for watching. (upbeat outro music)

Published Date : Mar 6 2023

CUBE Analysis of Day 1 of MWC Barcelona 2023 | MWC Barcelona 2023


 

>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies creating technologies that drive human progress. (upbeat music) >> Hey everyone, welcome back to theCube's first day of coverage of MWC 23 from Barcelona, Spain. Lisa Martin here with Dave Vellante and Dave Nicholson. I'm literally in between two Daves. We've had a great first day of coverage of the event. There's been lots of conversations, Dave, on disaggregation, on the change of mobility. I want to be able to get your perspectives from both of you on what you saw on the show floor, what you saw and heard from our guests today. So we'll start with you, Dave V. What were some of the things that were our takeaways from day one for you? >> Well, the big takeaway is the event itself. On day one, you get a feel for what this show is like. Now that we're back, face-to-face kind of pretty much full face-to-face. A lot of excitement here. 2000 plus exhibitors, I mean, planes, trains, automobiles, VR, AI, servers, software, I mean everything. I mean, everybody is here. So it's a really comprehensive show. It's not just about mobile. That's why they changed the name from Mobile World Congress. I think the other thing is from the keynotes this morning, I mean, you heard, there's a lot of, you know, action around the telcos and the transformation, but in a lot of ways they're sort of protecting their existing past from the future. And so they have to be careful about how fast they move. But at the same time if they don't move fast, they're going to get disrupted. We heard some complaints, essentially, you know, veiled complaints that the over the top guys aren't paying their fair share and Telco should be able to charge them more. We heard the chairman of Ericsson talk about how we can't let the OTTs do that again. We're going to charge directly for access through APIs to our network, to our data. We heard from Chris Lewis. Yeah. They've only got, or maybe it was San Ji Choha, how they've only got eight APIs. So, you know the developers are the ones who are going to actually build out the innovation at the edge. The telcos are going to provide the connectivity and the infrastructure companies like Dell as well. But it's really to me all about the developers. And that's where the action's going to be. And it's going to be interesting to see how the developers respond to, you know, the gun to the head. If you want access, you're going to have to pay for it. Now maybe there's so much money to be made that they'll go for it, but I feel like there's maybe a different model. And I think some of the emerging telcos are going to say, you know what, here developers, here's a platform, have at it. We're not going to charge you for all the data until you succeed. Then we're going to figure out a monetization model. >> Right. A lot of opportunity for the developer. That skillset is certainly one that's in demand here. And certainly the transformation of the telecom industry is, there's a lot of conundrums that I was hearing going on today, kind of chicken and egg scenarios. But Dave, you had a chance to walk around the show floor. We were here interviewing all day. What were some of the things that you saw that really stuck out to you? >> I think I was struck by how much attention was being paid to private 5G networks. You sort of read between the lines and it appears as though people kind of accept that the big incumbent telecom players are going to be slower to move. 
And this idea of things like open RAN where you're leveraging open protocols in a stack to deliver more agility and more value. So it sort of goes back to the generalized IT discussion of moving to cloud for agility. It appears as though a lot of players realize that the wild wild west, the real opportunity, is in the private sphere. So it's really interesting to see how that works, how 5G implemented into an environment with wifi how that actually works. It's really interesting. >> So it's, obviously when you talk to companies like Dell, I haven't hit HPE yet. I'm going to go over there and check out their booth. They got an analyst thing going on but it's really early days for them. I mean, they started in this business by taking an X86 box, putting a name on it, you know, that sounded like it was edged, throwing it over, you know, the wall. That's sort of how they all started in this business. And now they're, you know, but they knew they had to form partnerships. They had to build purpose-built systems. Now with 16 G out, you're seeing that. And so it's still really early days, talking about O RAN, open RAN, the open RAN alliance. You know, it's just, I mean, not even, the game hasn't even barely started yet but we heard from Dish today. They're trying to roll out a massive 5G network. Rakuten is really focused on sort of open RAN that's more reliable, you know, or as reliable as the existing networks but not as nearly as huge a scale as Dish. So it's going to take a decade for this to evolve. >> Which is surprising to the average consumer to hear that. Because as far as we know 5G has been around for a long time. We've been talking about 5G, implementing 5G, you sort of assume it's ubiquitous but the reality is it is just the beginning. >> Yeah. And you know, it's got a fake 5G too, right? I mean you see it on your phone and you're like, what's the difference here? And it's, you know, just, >> Dave N.: What does it really mean? >> Right. And so I think your point about private is interesting, the conversation Dave that we had earlier, I had throughout, hey I don't think it's a replacement for wifi. And you said, "well, why not?" I guess it comes down to economics. I mean if you can get the private network priced close enough then you're right. Why wouldn't it replace wifi? Now you got wifi six coming in. So that's a, you know, and WiFi's flexible, it's cheap, it's good for homes, good for offices, but these private networks are going to be like kickass, right? They're going to be designed to run whatever, warehouses and robots, and energy drilling facilities. And so, you know the economics I don't think are there today but maybe they can be at volume. >> Maybe at some point you sort of think of today's science experiment becoming the enterprise-grade solution in the future. I had a chance to have some conversations with folks around the show. And I think, and what I was surprised by was I was reminded, frankly, I wasn't surprised. I was reminded that when we start talking about 5G, we're talking about spectrum that is managed by government entities. Of course all broadcast, all spectrum, is managed in one way or another. But in particular, you can't simply put a SIM in every device now because there are a lot of regulatory hurdles that have to take place. So typically what these things look like today is 5G backhaul to the network, communication from that box to wifi. That's a huge improvement already. So yeah, my question about whether, you know, why not put a SIM in everything? 
Maybe eventually, but I think, but there are other things that I was not aware of that are standing in the way. >> Your point about spectrum's an interesting one though because private networks, you're going to be able to leverage that spectrum in different ways, and tune it essentially, use different parts of the spectrum, make it programmable so that you can apply it to that specific use case, right? So it's going to be a lot more flexible, you know, because I presume the needs spectrum needs of a hospital are going to be different than, you know, an agribusiness are going to be different than a drilling, you know, unit, offshore drilling unit. And so the ability to have the flexibility to use the spectrum in different ways and apply it to that use case, I think is going to be powerful. But I suspect it's going to be expensive initially. I think the other thing we talked about is public policy and regulation, and it's San Ji Choha brought up the point, is telcos have been highly regulated. They don't just do something and ask for permission, you know, they have to work within the confines of that regulated environment. And there's a lot of these greenfield companies and private networks that don't necessarily have to follow those rules. So that's a potential disruptive force. So at the same time, the telcos are spending what'd we hear, a billion, a trillion and a half over the next seven years? Building out 5G networks. So they got to figure out, you know how to get a payback on that. They'll get it I think on connectivity, 'cause they have a monopoly but they want more. They're greedy. They see the over, they see the Netflixes of the world and the Googles and the Amazons mopping up services and they want a piece of that action but they've never really been good at it. >> Well, I've got a question for both of you. I mean, what do you think the odds are that by the time the Shangri La of fully deployed 5G happens that we have so much data going through it that effectively it feels exactly the same as 3G? What are the odds? >> That's a good point. Well, the thing that gets me about 5G is there's so much of it on, if I go to the consumer side when we're all consumers in our daily lives so much of it's marketing hype. And, you know all the messaging about that, when it's really early innings yet they're talking about 6G. What does actual fully deployed 5G look like? What is that going to enable a hospital to achieve or an oil refinery out in the middle of the ocean? That's something that interests me is what's next for that? Are we going to hear that at this event? >> I mean, walking around, you see a fair amount of discussion of, you know, the internet of things. Edge devices, the increase in connectivity. And again, what I was surprised by was that there's very little talk about a sim card in every one of those devices at this point. It's like, no, no, no, we got wifi to handle all that but aggregating it back into a central network that's leveraging 5G. That's really interesting. That's really interesting. >> I think you, the odds of your, to go back to your question, I think the odds are even money, that by the time it's all built out there's going to be so much data and so much new capability it's going to work similarly at similar speeds as we see in the networks today. You're just going to be able to do so many more things. You know, and your video's going to look better, the graphics are going to look better. But I think over the course of history, this is what's happening. 
I mean, even when you go back to dial up, if you were in an AOL chat room in 1996, it was, you know, yeah it took a while. You're like, (screeches) (Lisa laughs) the modem and everything else, but once you were in there- >> Once you're there, 2400 baud. >> It was basically real time. And so you could talk to your friends and, you know, little chat room but that's all you could do. You know, if you wanted to watch a video, forget it, right? And then, you know, early days of streaming video, stop, start, stop, start, you know, look at Amazon Prime when it first started, Prime Video was not that great. It's sort of catching up to Netflix. But, so I think your point, that question is really prescient because more data, more capability, more apps means same speed. >> Well, you know, you've used the phrase over the top. And so just so we're clear, so we're talking about the same thing. Typically we're talking about, you've got, you have network providers. Outside of that, you know, Netflix, internet connection, I don't need Comcast, right? Perfect example. Well, what about the over the top that's coming from direct satellite communications with devices? There are times when I don't have a signal on my, happens to be an Apple iPhone, when I get a little SOS satellite logo because I can communicate under very limited circumstances now directly to the satellite for very limited text messaging purposes. Here at the show, I think it might be a Motorola device. It's a dongle that allows any mobile device to leverage direct satellite communication. Again, for texting, back to the 2,400 baud modem days, you know, 1200 even, 300 even, go back far enough. What's that going to look like? Is that too far in the future to think that eventually it's all going to be over the top? It's all going to be handset to satellite and we don't need these RANs anymore. It's all going to be satellite networks. >> Dave V.: I think you're going to see- >> Little too science fiction-y? (laughs) >> No, I, no, I think it's a good question and I think you're going to see fragments. I think you're going to see fragmentation of private networks. I think you're going to see fragmentation of satellites. I think you're going to see legacy incumbents kind of hanging on, you know, the cable companies. I think that's coming. I think by 2030 it'll, the picture will be much more clear. The question is, and I think it's come down to the innovation on top, which platform is going to be the most developer friendly? Right, and you know, I've not heard anything from the big carriers that they're going to be developer friendly. I've heard "we have proprietary data that we're going to charge access for and developers are going to have to pay for that." But I haven't heard them saying "Developers, developers, developers!" You know, Steve Ballmer running around, like bend over backwards for developers, they're asking the developers to bend over. And so if a network can, let's say the satellite network is more developer friendly, you know, you're going to see more innovation there potentially. You know, or if a Dish network says, "You know what? We're going after developers, we're going after innovation. We're not going to gouge them for all this network data. Rather we're going to make the platform open or maybe we're going to do an app store-like model where we take a piece of the action after they succeed." You know, take it out of the backend, like a Silicon Valley VC as opposed to an East Coast VC. 
They're not going to get you in the front end. (Lisa laughs) >> Well, you can see the sort of disruptive forces at play between open RAN and the legacy, call it proprietary stack, right? But what is the, you know, if that's sort of a horizontal disruptive model, what's the vertically disruptive model? Is it private networks coming in? Is it a private 5G network that comes in that says, "We're starting from the ground up, everything is containerized. We're going to go find people at KubeCon who are, who understand how to orchestrate with Kubernetes and use containers in microservices, and we're going to have this little 5G network that's going to deliver capabilities that you can't get from the big boys." Is there a way to monetize that? Is there a way for them to be disrupted, be disruptive, or are these private 5G networks that everybody's talking about just relegated to industrial use cases where you're just squeezing better economics out of wireless communication amongst all your devices in your factory? >> That's an interesting question. I mean, there are a lot of those smart factory industrial use cases. I mean, it's basically industry 4.0 use cases. But yeah, I don't count the cloud guys out. You know, everybody says, "oh, the narrative is, well, the latency of the cloud." Well, not if the cloud is at the edge. If you take a local zone and put storage, compute, and data right next to each other and the cloud model with the cloud APIs, and then you got an asynchronous, you know, connection back. I think that's a reasonable model. I think the cloud guys figured out developers, right? Pretty well. Certainly Microsoft and, and Amazon and Google, they know developers. I don't see any reason why they can't bring their model to the edge. So, and that's really disruptive to the legacy telco guys, you know? So they have to be careful. >> One step closer to my dream of eliminating the word "cloud" from IT lexicon. (Lisa laughs) I contend that it has always been IT, and it will always be IT. And this whole idea of cloud, what is cloud? If AWS, for example, is delivering hardware to the edge where it needs to be, is that cloud? Do we go back to the idea that cloud is an operational model and not a question of physical location? I hope we get to that point. >> Well, what's Apex and GreenLake? Apex is, you know, Dell's as a service. GreenLake is- >> HPE. >> HPE's as a service. That's outposts. >> Dave N.: Right. >> Yeah. >> That's their outpost. >> Yeah. >> Well AWS's position used to be, you know, to use them as a proxy for hyperscale cloud. We'll just, we'll grow in a very straight trajectory forever on the back of net new stuff. Forget about the old stuff. As James T. Kirk said of the Klingons, "let them die." (Lisa laughs) As far as the cloud providers were concerned just, yeah, let, let that old stuff go away. Well then they found out, there came a point in time where they realized there's a lot of friction and stickiness associated with that. So they had to deal with the reality of hybridity, if that's the word, the hybrid nature of things. So what are they doing? They're pushing stuff out to the edge, so... >> With the same operating model. >> With the same operating model. >> Similar. I mean, it's limited, right? >> So you see- >> You can't run a lot of database on outpost, you can run RES- >> You see this clash of Titans where some may have written off traditional IT infrastructure vendors, might have been written off as part of the past. 
Whereas hyperscale cloud providers represent the future. It seems here at this show they're coming head to head and competing evenly. >> And this is where I think a company like Dell or HPE or Cisco has some advantages in that they're not going to compete with the telcos, but the hyperscalers will. >> Lisa: Right. >> Right. You know, and they're already, Google's, how much undersea cable does Google own? A lot. Probably more than anybody. >> Well, we heard from Google and Microsoft this morning in the keynote. It'd be interesting to see if we hear from AWS and then over the next couple of days. But guys, clearly there is, this is a great wrap of day one. And the crazy thing is this is only day one. We've got three more days of coverage, more news, more information to break down and unpack on theCUBE. Look forward to doing that with you guys over the next three days. Thank you for sharing what you saw on the show floor, what you heard from our guests today as we had about 10 interviews. Appreciate your insights and your perspectives and can't wait for tomorrow. >> Right on. >> All right. For Dave Vellante and Dave Nicholson, I'm Lisa Martin. You're watching theCUBE's day one wrap from MWC 23. We'll see you tomorrow. (relaxing music)

Published Date : Feb 27 2023


Keynote Analysis with Sarbjeet Johal & Chris Lewis | MWC Barcelona 2023


 

(upbeat instrumental music) >> TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (uplifting instrumental music) >> Hey everyone. Welcome to Barcelona, Spain. It's theCUBE Live at MWC '23. I'm Lisa Martin, Dave Vellante, our co-founder, our co-CEO of theCUBE, you know him, you love him. He's here as my co-host. Dave, we have a great couple of guests here to break down day one keynote. Lots of meat. I can't wait to be part of this conversation. Chris Lewis joins us, the founder and MD of Lewis Insight. And Sarbjeet Johal, one of you know him as well. He's a Cube contributor, cloud architect. Guys, welcome to the program. Thank you so much for joining Dave and me today. >> Lovely to be here. >> Thank you. >> Chris, I want to start with you. You have covered all aspects of global telecoms industries over 30 years working as an analyst. Talk about the evolution of the telecom industry that you've witnessed, and what were some of the things you heard in the keynote that excite you about the direction it's going? >> Well, as ever, MWC, there's no lack of glitz and glamour, but it's the underlying issues of the industry that are really at stake here. There's not a lot of new revenue coming into the telecom providers, but there's a lot of adjustment, readjustment of the underlying operational environment. And also, really importantly, what came out of the keynotes is the willingness and the necessity to really engage with the API community, with the developer community, people who traditionally, telecoms would never have even touched. So they're sorting out their own house, they're cleaning their own stables, getting the cost base down, but they're also now realizing they've got to engage with all the other parties. There's a lot of cloud providers here, there's a lot of other people from outside so they're realizing they cannot do it all themselves. It's quite a tough lesson for a very conservative, inward looking industry, right? So should we be spending all this money and all this glitz and glamour of MWC and all be here, or should would be out there really building for the future and making sure the services are right for yours and my needs in a business and personal lives? So a lot of new changes, a lot of realization of what's going on outside, but underlying it, we've just got to get this right this time. >> And it feels like that monetization is front and center. You mentioned developers, we've got to work with developers, but I'm hearing the latest keynote from the Ericsson CEOs, we're going to monetize through those APIs, we're going to charge the developers. I mean, first of all, Chris, am I getting that right? And Sarbjeet, as somebody who's close to the developer community, is that the right way to build bridges? But Chris, are we getting that right? >> Well, let's take the first steps first. So, Ericsson, of course, acquired Vonage, which is a massive API business so they want to make money. They expect to make money by bringing that into the mainstream telecom community. Now, whether it's the developers who pay for it, or let's face it, we are moving into a situation as the telco moves into a techco model where the techco means they're going to be selling bits of the technology to developer guys and to other application developers. 
So when he says he needs to charge other people for it, it's the way in which people reach in and will take going through those open APIs like the open gateway announced today, but also the way they'll reach in and take things like network slicing. So we're opening up the telecom community, the treasure chest, if you like, where developers' applications and other third parties can come in and take those chunks of technology and build them into their services. This is a complete change from the old telecom industry where everybody used to come and you say, "all right, this is my product, you've got to buy it and you're going to pay me a lot of money for it." So we are looking at a more flexible environment where the other parties can take those chunks. And we know we want collectivity built into our financial applications, into our government applications, everything, into the future of the metaverse, whatever it may be. But it requires that change in attitude of the telcos. And they do need more money 'cause they've said, the baseline of revenue is pretty static, there's not a lot of growth in there so they're looking for new revenues. It's in a B2B2X time model. And it's probably the middle man's going to pay for it rather than the customer. >> But the techco model, Sarbjeet, it looks like the telcos are getting their money on their way in. The techco company model's to get them on their way out like the app store. Go build something of value, build some kind of app or data product, and then when it takes off, we'll take a piece of the action. What are your thoughts from a developer perspective about how the telcos are approaching it? >> Yeah, I think before we came here, like I said, I did some tweets on this, that we talk about all kind of developers, like there's game developers and front end, back end, and they're all talking about like what they're building on top of cloud, but nowhere you will hear the term "telco developer," there's no API from telcos given to the developers to build IoT solutions on top of it because telco as an IoT, I think is a good sort of hand in hand there. And edge computing as well. The glimmer of hope, if you will, for telcos is the edge computing, I believe. And even in edge, I predicted, I said that many times that cloud players will dominate that market with the private 5G. You know that story, right? >> We're going to talk about that. (laughs) >> The key is this, that if you see in general where the population lives, in metros, right? That's where the world population is like flocking to and we have cloud providers covering the local zones with local like heavy duty presence from the big cloud providers and then these telcos are getting sidetracked by that. Even the V2X in cars moving the autonomous cars and all that, even in that space, telcos are getting sidetracked in many ways. What telcos have to do is to join the forces, build some standards, if not standards, some consortium sort of. They're trying to do that with the open gateway here, they have only eight APIs. And it's 2023, eight APIs is nothing, right? (laughs) So they should have started this 10 years back, I think. So, yeah, I think to entice the developers, developers need the employability, we need to train them, we need to show them some light that hey, you can build a lot on top of it. If you tell developers they can develop two things or five things, nobody will come. >> So, Chris, the cloud will dominate the edge. So A, do you buy it? 
B, the telcos obviously are acting like that might happen. >> Do you know I love people when they've got their heads in the clouds. (all laugh) And you're right in so many ways, but if you flip it around and think about how the customers think about this, business customers and consumers, they don't care about all this background shenanigans going on, do they? >> Lisa: No. >> So I think one of the problems we have is that this is a new territory and whether you call it the edge or whatever you call it, what we need there is we need connectivity, we need security, we need storage, we need compute, we need analytics, and we need applications. And are any of those more important than the others? It's the collective that actually drives the real value there. So we need all those things together. And of course, the people who represented at this show, whether it's the cloud guys, the telcos, the Nokia, the Ericssons of this world, they all own little bits of that. So that's why they're all talking partnerships because they need the combination, they cannot do it on their own. The cloud guys can't do it on their own. >> Well, the cloud guys own all of those things that you just talked about though. (all laugh) >> Well, they don't own the last bit of connectivity, do they? They don't own the access. >> Right, exactly. That's the one thing they don't own. So, okay, we're back to pipes, right? We're back to charging for connectivity- >> Pipes are very valuable things, right? >> Yeah, for sure. >> Never underestimate pipes. I don't know about where you live, plumbers make a lot of money where I live- >> I don't underestimate them but I'm saying can the telcos charge for more than that or are the cloud guys going to mop up the storage, the analytics, the compute, and the apps? >> They may mop it up, but I think what the telcos are doing and we've seen a lot of it here already, is they are working with all those major cloud guys already. So is it an unequal relationship? The cloud guys are global, massive global scale, the telcos are fundamentally national operators. >> Yep. >> Some have a little bit of regional, nobody has global scale. So who stitches it all together? >> Dave: Keep your friends close and your enemies closer. >> Absolutely. >> I know that saying never gets old. It's true. Well, Sarbjeet, one of the things that you tweeted about, I didn't get to see the keynote but I was looking at your tweets. 46% of telcos think they won't make it to the next decade. That's a big number. Did that surprise you? >> No, actually it didn't surprise me because the competition is like closing in on them and the telcos are competing with telcos as well and the telcos are competing with cloud providers on the other side, right? So the smaller ones are getting squeezed. It's the bigger players, they can hook up the newer platforms, I think they will survive. It's like that part is like any other industry, if you will. But the key is here, I think why the pain points were sort of described on the main stage is that they're crying out loud to tell the big tech cloud providers that "hey, you pay your fair share," like we talked, right? You are not paying, you're generating so much content which reverses our networks and you are not paying for it. So they are not able to recoup the cost of laying down their networks. By the way, one thing actually I want to mention is that they said the cloud needs earth. The cloud and earth, it's like there's no physical need to cloud, you know that, right? 
So like, I think it's the other way around. I think the earth needs the cloud because I'm a cloud guy. (Sarbjeet and Lisa laugh) >> I think you need each other, right? >> I think so too. >> They need each other. When they said cloud needs earth, right? I think they're still in denial that the cloud is a big force. They have to partner. When you can't compete with somebody, what do you do? Partner with them. >> Chris, this is your world. Are they in denial? >> No, I think they're waking up to the pragmatism of the situation. >> Yeah. >> They're building... As we said, most of the telcos, you find have relationships with the cloud guys, I think you're right about the industry. I mean, do you think what's happened since US was '96, the big telecom act when we started breaking up all the big telcos and we had lots of competition came in, we're seeing the signs that we might start to aggregate them back up together again. So it's been an interesting experiment for like 30 years, hasn't it too? >> It made the US less competitive, I would argue, but carry on. >> Yes, I think it's true. And Europe is maybe too competitive and therefore, it's not driven the investment needed. And by the way, it's not just mobile, it's fixed as well. You saw the Orange CEO was talking about the her investment and the massive fiber investments way ahead of many other countries, way ahead of the UK or Germany. We need that fiber in the ground to carry all your cloud traffic to do this. So there is a scale issue, there is a competition issue, but the telcos are very much aware of it. They need the cloud, by the way, to improve their operational environments as well, to change that whole old IT environment to deliver you and I better service. So no, it absolutely is changing. And they're getting scale, but they're fundamentally offering the basic product, you call it pipes, I'll just say they're offering broadband to you and I and the business community. But they're stepping on dangerous ground, I think, when saying they want to charge the over the top guys for all the traffic they use. Those over the top guys now build a lot of the global networks, the backbone submarine network. They're putting a lot of money into it, and by giving us endless data for our individual usage, that cat is out the bag, I think to a large extent. >> Yeah. And Orange CEO basically said that, that they're not paying their fair share. I'm for net neutrality but the governments are going to have to fund this unless you let us charge the OTT. >> Well, I mean, we could of course renationalize. Where would that take us? (Dave laughs) That would make MWC very interesting next year, wouldn't it? To renationalize it. So, no, I think you've got to be careful what we wish for here. Creating the absolute clear product that is required to underpin all of these activities, whether it's IoT or whether it's cloud delivery or whether it's just our own communication stuff, delivering that absolutely ubiquitously high quality for business and for consumer is what we have to do. And telcos have been too conservative in the past. >> I think they need to get together and create standards around... I think they have a big opportunity. We know that the clouds are being built in silos, right? So there's Azure stack, there's AWS and there's Google. And those are three main ones and a few others, right? So that we are fighting... On the cloud side, what we are fighting is the multicloud. How do we consume that multicloud without having standards? 
So if these people get together and create some standards around IoT and edge computing sort of area, people will flock to them to say, "we will use you guys, your API, we don't care behind the scenes if you use AWS or Google Cloud or Azure, we will come to you." So market, actually is looking for that solution. I think it's an opportunity for these guys, for telcos. But the problem with telcos is they're nationalized, as you said Chris versus the cloud guys are still kind of national in a way, but they're global corporations. And some of the telcos are global corporations as well, BT covers so many countries and TD covers so many... DT is in US as well, so they're all over the place. >> But you know what's interesting is that the TM forum, which is one of the industry associations, they've had an open digital architecture framework for quite some years now. Google had joined that some years ago, Azure in there, AWS just joined it a couple of weeks ago. So when people said this morning, why isn't AWS on the keynote? They don't like sharing the limelight, do they? But they're getting very much in bed with the telco. So I think you'll see the marriage. And in fact, there's a really interesting statement, if you look at the IoT you mentioned, Bosch and Nokia have been working together 'cause they said, the problem we've got, you've got a connectivity network on one hand, you've got the sensor network on the other hand, you're trying to merge them together, it's a nightmare. So we are finally seeing those sort of groups talking to each other. So I think the standards are coming, the cooperation is coming, partnerships are coming, but it means that the telco can't dominate the sector like it used to. It's got to play ball with everybody else. >> I think they have to work with the regulators as well to loosen the regulation. Or you said before we started this segment, you used Chris, the analogy of sports, right? In sports, when you're playing fiercely, you commit the fouls and then ask for ref to blow the whistle. You're now looking at the ref all the time. The telcos are looking at the ref all the time. >> Dave: Yeah, can I do this? Can I do that? Is this a fair move? >> They should be looking for the space in front of the opposition. >> Yeah, they should be just on attack mode and commit these fouls, if you will, and then ask for forgiveness then- >> What do you make of that AWS not you there- >> Well, Chris just made a great point that they don't like to share the limelight 'cause I thought it was very obvious that we had Google Cloud, we had Microsoft there on day one of this 80,000 person event. A lot of people back from COVID and they weren't there. But Chris, you brought up a great point that kind of made me think, maybe you're right. Maybe they're in the afternoon keynote, they want their own time- >> You think GSMA invited them? >> I imagine so. You'd have to ask GSMA. >> I would think so. >> Get Max on here and ask that. >> I'm going to ask them, I will. >> But no, and they don't like it because I think the misconception, by the way, is that everyone says, "oh, it's AWS, it's Google Cloud and it's Azure." They're not all the same business by any stretch of the imagination. AWS has been doing loads of great work, they've been launching private network stuff over the last couple of weeks. Really interesting. Google's been playing catch up. We know that they came in readily late to the market. And Azure, they've all got slightly different angles on it. 
So perhaps it just wasn't right for AWS and the way they wanted to pitch things so they don't have to be there, do they? >> That's a good point. >> But the industry needs them there, that's the number one cloud. >> Dave, they're there working with the industry. >> Yeah, of course. >> They don't have to be on the keynote stage. And in fact, you think about this show and you mentioned the 80,000 people, the activity going on around in all these massive areas they're in, it's fantastic. That's where the business is done. The business isn't done up on the keynote stage. >> That's why there's the glitz and the glamour, Chris. (all laugh) >> Yeah. It's not glitz, it's espresso. It's not glamour anymore, it's just espresso. >> We need the espresso. >> Yeah. >> I think another thing is that it's interesting how an average European sees the tech market and an average North American, especially you from US, you have to see the market. Here, people are more like process oriented and they want the rules of the road already established before they can take a step- >> Chris: That's because it's your pension in the North American- >> Exactly. So unions are there and the more employee rights and everything, you can't fire people easily here or in Germany or most of the Europe is like that with the exception of UK. >> Well, but it's like I said, that Silicone Valley gets their money on the way out, you know? And that's how they do it, that's how they think it. And they don't... They ask for forgiveness. I think the east coast is more close to Europe, but in the EU, highly regulated, really focused on lifetime employment, things like that. >> But Dave, the issue is the telecom industry is brilliant, right? We keep paying every month whatever we do with it. >> It's a great business, to your point- >> It's a brilliant business model. >> Dave: It's fantastic. >> So it's about then getting the structure right behind it. And you know, we've seen a lot of stratification where people are selling off towers, Orange haven't sold their towers off, they made a big point about that. Others are selling their towers off. Some people are selling off their underlying network, Telecom Italia talking about KKR buying the whole underlying network. It's like what do you want to be in control of? It's a great business. >> But that's why they complain so much is that they're having to sell their assets because of the onerous CapEx requirements, right? >> Yeah, they've had it good, right? And dare I say, perhaps they've not planned well enough for the future. >> They're trying to protect their past from the future. I mean, that's... >> Actually, look at the... Every "n" number of years, there's a new faster network. They have to dig the ground, they have to put the fiber, they have to put this. Now, there are so many booths showing 6G now, we are not even done with 5G yet, now the next 6G you know, like then- >> 10G's coming- >> 10G, that's a different market. (Dave laughs) >> Actually, they're bogged down by the innovation, I think. >> And the generational thing is really important because we're planning for 6G in all sorts of good ways but actually what we use in our daily lives, we've gone through the barrier, we've got enough to do that. So 4G gives us enough, the fiber in the ground or even old copper gives us enough. So the question is, what are we willing to pay for more than that basic connectivity? And the answer to your point, Dave, is not a lot, right? 
So therefore, that's why the emphasis is on the business market on that B2B and B2B2X. >> But we'll pay for Netflix all day long. >> All day long. (all laugh) >> The one thing Chris, I don't know, I want to know your viewpoints and we have talked in the past as well, there's absence of think tanks in tech, right? So we have think tanks on the foreign policy and economic policy in every country, and we have global think tanks, but tech is becoming a huge part of the economy, global economy as well as national economies, right? But we don't have think tanks on like policy around tech. For example, this 4G is good for a lot of use cases. Then 5G is good for smaller number of use cases. And then 6G will be like, fewer people need 6G for example. Why can't we have sort of those kind of entities dictating those kind of like, okay, is this a wiser way to go about it? >> Lina Khan wants to. She wants to break up big tech- >> You're too young to remember but the IT used to have a show every four years in Geneva, there were standards around there. So I think there are bodies. I think the balance of power obviously has gone from the telecom to the west coast to the IT markets. And it's changing the balance about, it moves more quickly, right? Telecoms has never moved quickly enough. I think there is hope by the way, that telecoms now that we are moving to more softwarized environment, and God forbid, we're moving into CICD in the telecom world, right? Which is a massive change, but I think there's hopes for it to change. The mentality is changing, the culture is changing, but to change those old structured organizations from the British telecom or the France telecom into the modern world, it's a hell of a long journey. It's not an overnight journey at all. >> Well, of course the theme of the event is velocity. >> Yeah, I know that. >> And it's been interesting sitting here with the three of you talking about from a historic perspective, how slow and molasseslike telecom has been. They don't have a choice anymore. As consumers, we have this expectation we're going to get anything we want on our mobile device, 24 by seven. We don't care about how the sausage is made, we just want the end result. So do you really think, and we're only on day one guys... And Chris we'll start with you. Is the theme really velocity? Is it disruption? Are they able to move faster? >> Actually, I think invisibility is the real answer. (Lisa laughs) We want communication to be invisible, right? >> Absolutely. >> We want it to work. When we switch our phones on, we want it to work and we want to... Well, they're not even phones anymore, are they really? I mean that's the... So no, velocity, we've got... There is momentum in the industry, there's no doubt about that. The cloud guys coming in, making telecoms think about the way they run their own business, where they meet, that collision point on the edges you talked about Sarbjeet. We do have velocity, we've got momentum. There's so many interested parties. The way I think of this is that the telecom industry used to be inward looking, just design its own technology and then expect everyone else to dance to our tune. We're now flipping that 180 degrees and we are now having to work with all the different outside forces shaping us. Whether it's devices, whether it's smart cities, governments, the hosting guys, the Equinoxis, all these things. So everyone wants a piece of this telecom world so we've got to make ourselves more open. That's why you get in a more open environment. 
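To make the open-API thread above a little more concrete, here is a minimal sketch of what a developer-facing telco network API call could look like. The endpoint, request shape, and field names are illustrative assumptions only; they are not the actual GSMA Open Gateway specification or any operator's published API.

```typescript
// Hypothetical example: asking an operator's exposed network API to boost
// quality-of-service for one device's session (the kind of capability the
// Open Gateway discussion above points at). URLs, paths, and field names
// are assumptions for illustration only.
interface QosBoostRequest {
  deviceIp: string;        // device the application wants prioritized
  profile: "low-latency" | "high-throughput";
  durationSeconds: number; // how long the boost should last
}

async function requestQosBoost(apiBase: string, token: string, req: QosBoostRequest): Promise<string> {
  const resp = await fetch(`${apiBase}/qos-sessions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // operator-issued credential
    },
    body: JSON.stringify(req),
  });
  if (!resp.ok) {
    throw new Error(`QoS request failed: ${resp.status}`);
  }
  const session = (await resp.json()) as { sessionId: string };
  return session.sessionId; // caller can later extend or release the session
}

// Usage sketch: a video-calling backend asking for a short low-latency boost.
requestQosBoost("https://api.example-telco.com/network/v1", "demo-token", {
  deviceIp: "203.0.113.10",
  profile: "low-latency",
  durationSeconds: 600,
})
  .then((id) => console.log("QoS session created:", id))
  .catch((err) => console.error(err));
```

The HTTP mechanics are ordinary; the debate in the conversation above is really about who pays for calls like this, and whether eight standardized APIs is enough surface area to attract developers.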
>> But you did... I just want to bring back a point you made during COVID, which was when everybody switched to work from home, started using their landlines again, telcos had to respond and nothing broke. I mean, it was pretty amazing. >> Chris: It did a good job. >> It was kind of invisible. So, props to the telcos for making that happen. >> They did a great job. >> So it really did. Now, okay, what have you done for me lately? So now they've got to deal with the future and they're talking monetization. But to me, monetization is all about data and not necessarily just the network data. Yeah, they can sell that 'cause they own that, but what kind of incremental value are they going to create for the consumers that... >> Yeah, actually that's a problem. I think the problem is that they have been strangled by the regulation for a long time and they cannot look at their data. It's a lot more similar to the FinTech world, right? I used to work at Visa. And then Visa, we did a trillion dollars in transactions in '96. Like we moved so much money around, but we couldn't look at these things, right? So yeah, I think regulation is a problem that holds you back, it's the antithesis of velocity, it slows you down. >> But data means everything, doesn't it? I mean, it means everything and nothing. So I think the challenge here is what data do the telcos have that is useful, valuable to me, right? So in the home environment, the fact that my broadband provider says, oh, by the way, you've got 20 gadgets on that network and 20 on that one... That's great, tell me what's on there. I probably don't know what's taking all my valuable bandwidth up. So I think there's security wrapped around that, telling me the way I'm using it, whether I'm getting the best out of my service. >> You pay for that? >> No, I'm saying they don't do it yet. I think- >> But would you pay for that? >> I think I would, yeah. >> Would you pay a lot for that? I would expect it to be there as part of my dashboard for my monthly fee. They're already charging me enough. >> Well, that's fine, but you pay a lot more in North America than I do in Europe, right? >> Yeah, no, that's true. >> You're really overpaying over there, right? >> Way overpaying. >> So, actually everybody's looking at these devices, right? So this is a radio-operated device basically, right? And then why couldn't they benefit from this? This is like we need to double click on this like 10 times to find out why telcos failed to leverage this device, right? But I think the problem is their reliance on regulations and their being close to the national sort of governments and local bodies and authorities, right? And in some countries, these telcos are totally controlled in very authoritarian ways, right? It's not like open, like in the west, most of the west. Like the world is bigger than five, six countries and we know that, right? But we end up talking about the major economies most of the time. >> Dave: Always. >> Chris: We have a topic we want to hit on. >> We do have a topic. Our last topic, Chris, it's for you. You guys have done an amazing job for the last 25 minutes talking about the industry, where it's going, the evolution. But Chris, you've been registered blind throughout your career. You're a leading user of assistive technologies. Talk about diversity, equity, inclusion, accessibility, some of the things you're doing there. >> Well, we should have had 25 minutes on that and five minutes on- (all laugh) >> Lisa: You'll have to come back. >> Really interesting. 
So I've been looking at it. You're quite right, I've been using accessible technology on my iPhone and on my laptop for 10, 20 years now. It's amazing. And what I'm trying to get across to the industry is to think about inclusive design from day one. When you're designing an app or you're designing a service, make sure you... And telecom's a great example. In fact, there's quite a lot of sign language around here this week. If you look at all the events written, good to see that coming in. Obviously, no use to me whatsoever, but good for the hearing impaired, which by the way is the biggest category of disability in the world. Biggest chunk is hearing impaired, then vision impaired, and then cognitive and then physical. And therefore, whenever you're designing any service, my call to arms to people is think about how that's going to be used and how a blind person might use it or how a deaf person or someone with physical issues or any cognitive issues might use it. And a great example, the GSMA and I have been talking about the app they use for getting into the venue here. I downloaded it. I got the app downloaded and I'm calling my guys going, where's my badge? And he said, "it's top left." And because I work with a screen reader, they hadn't tagged it properly so I couldn't actually open my badge on my own. Now, they changed it overnight so it worked this morning, which is fantastic work by Trevor and the team. But it's those things that if you don't build it in from scratch, you really frustrate a whole group of users. And if you think about it, people with disabilities are excluded from so many services if they can't see the screen or they can't hear it. But it's also the elderly community who don't find it easy to get access to things. Smart speakers have been a real blessing in that respect 'cause you can now talk to that thing and it starts talking back to you. And then there's the people who can't afford it so we need to come down market. This event is about launching these thousand dollars plus devices. Come on, we need below a hundred dollars devices to get to the real mass market and get the next billion people in and then to educate people how to use it. And I think to go back to your previous point, I think governments are starting to realize how important this is about building the community within the countries. You've got some massive projects like NEOM in Saudi Arabia. If you have a look at that, if you get a chance, a fantastic development in the desert where they're building a new city from scratch and they're building it so anyone and everyone can get access to it. So in the past, it was all done very much by individual disability. So I used to use some very expensive, clunky blind tech stuff. I'm now using mostly mainstream. But my call to answer to say is, make sure when you develop an app, it's accessible, anyone can use it, you can talk to it, you can get whatever access you need and it will make all of our lives better. So as we age and hearing starts to go and sight starts to go and dexterity starts to go, then those things become very useful for everybody. >> That's a great point and what a great champion they have in you. Chris, Sarbjeet, Dave, thank you so much for kicking things off, analyzing day one keynote, the ecosystem day, talking about what velocity actually means, where we really are. We're going to have to have you guys back 'cause as you know, we can keep going, but we are out of time. But thank you. >> Pleasure. 
>> We had a very spirited, lively conversation. >> Thanks, Dave. >> Thank you very much. >> For our guests and for Dave Vellante, I'm Lisa Martin, you're watching theCUBE live in Barcelona, Spain at MWC '23. We'll be back after a short break. See you soon. (uplifting instrumental music)
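A short aside on the screen-reader problem Chris describes above (the event badge that was not tagged, so it could not be opened without sighted help): in a web or hybrid app this usually comes down to giving interactive elements an accessible name and role. The element IDs and label strings below are hypothetical; the pattern is the point, not any specific app.

```typescript
// Minimal sketch of tagging a custom badge control so assistive technology
// can find and announce it. IDs and strings are made up for illustration.
function makeBadgeAccessible(): void {
  const badge = document.getElementById("badge-icon");
  if (!badge) return;

  // Expose the element as a button with a human-readable name, so a screen
  // reader announces "Open my event badge, button" instead of skipping an
  // unlabeled image.
  badge.setAttribute("role", "button");
  badge.setAttribute("aria-label", "Open my event badge");
  badge.setAttribute("tabindex", "0"); // reachable by keyboard, not just touch

  // Keyboard users activate with Enter or Space, mirroring the click handler.
  badge.addEventListener("keydown", (event: KeyboardEvent) => {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault();
      badge.click();
    }
  });
}

makeBadgeAccessible();
```

Native mobile toolkits have direct equivalents (accessibility labels and traits), which supports Chris's broader point: this is cheap when designed in from day one and painful to retrofit overnight, as the event app team had to do.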

Published Date : Feb 27 2023


Amir Khan & Atif Khan, Alkira | Supercloud2


 

(lively music) >> Hello, everyone. Welcome back to the Supercloud presentation here. I'm theCUBE, I'm John Furrier, your host. What a great segment here. We're going to unpack the networking aspect of the cloud, how that translates into what Supercloud architecture and platform deployment scenarios look like. And demystify multi-cloud, hybridcloud. We've got two great experts. Amir Khan, the Co-Founder and CEO of Alkira, Atif Khan, Co-Founder and CTO of Alkira. These guys been around since 2018 with the startup, but before that story, history in the tech industry. I mean, routing early days, multiple waves, multiple cycles. >> Welcome three decades. >> Welcome to Supercloud. >> Thanks. >> Thanks for coming on. >> Thank you so much for having us. >> So, let's get your take on Supercloud because it's been one of those conversations that really galvanized the industry because it kind of highlights almost this next wave, this next side of the street that everyone's going to be on that's going to be successful. The laggards on the legacy seem to be stuck on the old model. SaaS is growing up, it's ISVs, it's ecosystems, hyperscale, full hybrid. And then multi-cloud around the corners cause all this confusion, everyone's hand waving. You know, this is a solution, that solution, where are we? What do you guys see as this supercloud dynamic? >> So where we start from is always focusing on the customer problem. And in 2018 when we identified the problem, we saw that there were multiple clouds with many diverse ways of doing things from the network perspective, and customers were struggling with that. So we delved deeper into that and looked at each one of the cloud architectures completely independent. And there was no common solution and customers were struggling with that from the perspective. They wanted to be in multiple clouds, either through mergers and acquisitions or running an application which may be more cost effective to run in something or maybe optimized for certain reasons to run in a different cloud. But from the networking perspective, everything needed to come together. So that's, we are starting to define it as a supercloud now, but basically, it's a common infrastructure across all clouds. And then integration of high lift services like, you know, security or IPAM services or many other types of services like inter-partner routing and stuff like that. So, Amir, you agree then that multi-cloud is simply a default result of having whatever outcomes, either M&A, some productivity software, maybe Azure. >> Yes. >> Amazon has this and then I've got on-premise application, so it's kinds mishmash. >> So, I would qualify it with hybrid multi-cloud because everything is going to be interconnected. >> John: Got it. >> Whether it's on-premise, remote users or clouds. >> But have CTO perspective, obviously, you got developers, multiple stacks, got AWS, Azure and GCP, other. Not everyone wants to kind of like go all in, but yet they don't want to hedge too much because it's a resource issue. And I got to learn this stack, I got to learn that stack. So then now, you have this default multi-cloud, hybrid multi-cloud, then it's like, okay, what do I do? How do you spread that around? Is it dangerous? What's the the approach technically? What's some of the challenges there? >> Yeah, certainly. John, first, thanks for having us here. 
So, before I get to that, I'll just add a little bit to what Amir was saying, like how we started, what we were seeing and how it, you know, correlates with the supercloud. So, as you know, before this company, Alkira, we were doing, we did the SD-WAN company, which was Viptela. So there, we started seeing when people started deploying SD-WAN at like a larger scale. We started like, you know, customers coming to us and saying they needed connectivity into the cloud from the SD-WAN. They wanted to extend the SD-WAN fabric to the cloud. So we came up with an architecture, which was like later we started calling them Cloud onRamps, where we built, you know, a transit VPC and put like the virtual instances of SD-WAN appliances extended from there to the cloud. But before we knew, like it started becoming very complicated for the customers because it wasn't just connectivity, it also required, you know, other use cases. You had to instantiate or bring in security appliances in there. You had to secure all of that stuff. There were requirements for, you know, different regions. So you had to bring up the same thing in different regions. Then multiple clouds, what did you do? You had to replicate the same thing in multiple clouds. And now if there was was requirement between clouds, how were you going to do it? You had to route traffic from somewhere, and come up with all those routing controls and stuff. So, it was very complicated. >> Like spaghetti code, but on network. >> The games begin, in fact, one of our customers called it spaghetti mess. And so, that's where like we thought about where was the industry going and which direction the industry was going into? And we came up with the Alkira where what we are doing is building a common infrastructure across multiple clouds, across in, you know, on-prem locations, be it data centers or physical sites, branches sites, et cetera, with integrated security and network networking services inside. And, you know, nowadays, networking is not only about connectivity, you have to secure everything. So, security has to be built in. Redundancy, high availability, disaster recovery. So all of that needs to be built in. So that's like, you know, kind of a definition of like what we thought at that time, what is turning into supercloud now. >> Yeah. It's interesting too, you mentioned, you know, VPCs is not, configuration of loans a hassle. Nevermind the manual mistakes could be made, but as you decide to do something you got to, "Oh, we got to get these other things." A lot of the hyper scales and a lot of the alpha cloud players now, and cloud native folks, they're kind of in that mode of, "Wow, look at what we've built." Now, they're got to maintain, how do I refresh it? Like, how do I keep the talent? So they got this similar chaotic environment where it's like, okay, now they're already already through, so I think they're going to be okay. But then some people want to bypass it completely. So there's a lot of customers that we see out there that fit the makeup of, I'm cloud first, I've lifted and shifted, I move some stuff to the cloud. But I want to bypass all that learnings from all the people that are gone through the past three years. Can I just skip that and go to a multi-cloud or coherent infrastructure? What do you think about that? What's your view? >> So yeah, so if you look at these enterprises, you know, many of them just to find like the talent, which for one cloud as far as the IT staff is concerned, it's hard enough. 
And now, when you have multiple clouds, it's hard to find people, the talent, you know, which has expertise across different clouds. So that's where we come into the picture. So our vision was always to simplify all of this stuff. And simplification, it cannot be just simplification because you cannot just automate the workflows of the cloud providers underneath. So you have to, you know, provide your full data plane on top of it, full control plane, management plane, policy and management on top of it. And coming back to like your question, so these nowadays, those people who are working on networking, you know, before it used to be like CLI. You used to learn about Cisco CLI or Juniper CLI, and you used to work on it. Nowadays, it's very different. So automation, programmability, all of that stuff is the key. So now, you know, Ops guys, the DevOps guys, so these are the people who are in high demand. >> So what do you think about the folks out there that are saying, okay, you got a lot of fragmentation. I got the stacks, I got a lot of stove pipes, if you will, out there on the stack. I got to learn this from Azure. Can you guys, with your product, abstract that away so developers don't need to know the ins and outs of the stacks, almost like a gateway, if you will, like in the old days? But like I'm a developer or a development team, why should I have to learn the management layer of Azure? >> That's exactly what we set out, you know, to solve. So it's, what we have built is a platform and the platform sits inside the cloud. And customers are able to build their own network or a virtual network on top using that platform. So the platform has its own data plane, own control plane and management plane with a policy layer on top of it. So now, it's the platform which is sitting in different clouds, but from a customer's point of view, it's one way of doing networking. One way of instantiating or bringing in services or security services in the middle. Whether those are our security services or whether those are like services from our partners, like Palo Alto or Checkpoint or Cisco. >> So you guys brought the SD-WAN mojo and refactored it for the cloud, it sounds like. >> No. >> No? (chuckles) >> We cannot say that. >> All right, explain. >> It's way more than that. >> I mean, SD-WAN was WAN. I mean, you're talking about wide area networks, talking about connectivity, so explain the difference. >> SD-WAN was primarily done for one major reason. MPLS was expensive, very strong SLAs, but very low speed. Internet, on the other hand, you sat at home and you could access your applications much faster. No SLA, very low cost, right? So we wanted to marry the two together so you could have a purely private infrastructure and a public infrastructure and secure both of them by creating a common secure fabric across all those environments. And then seamlessly tying it into your internal branch and data center and cloud network. So, it merely brought you to the edge of the cloud. It didn't do anything inside the cloud. Now, the major problem resides inside the clouds where you have to optimize the clouds themselves. Take a step back. How were the clouds built? Basically, the cloud providers went to the Ciscos and Junipers and the rest of the world, built the network in the data centers or across wide area infrastructure, and brought it all together and tried to create a virtualized layer on top of that. But there were many limitations of this underlying infrastructure that they had built. 
So number of routes per region, how inter region connectivity worked, or how many routes you could carry to the VPCs of V nets? That all those were becoming no common policy across, you know, these environments, no segmentation across these environments, right? So the networking constructs that the enterprise customers were used to as enterprise class carry class capabilities, they did not exist in the cloud. So what did the customer do? They ended up stitching it together all manually. And that's why Atif was alluding to earlier that it became a spaghetti mess for the customers. And then what happens is, as a result, day two operations, you know, troubleshooting, everything becomes a nightmare. So what do you do? You have to build an infrastructure inside the cloud. Cloud has enough raw capabilities to build the solutions inside there. Netflix's of the world. And many different companies have been born in the cloud and evolved from there. So why could we not take the raw capabilities of the clouds and build a network cloud or a supercloud on top of these clouds to optimize the whole infrastructure and seamlessly connecting it into the on-premise and remote user locations, right? So that's your, you know, hybrid multi-cloud solution. >> Well, great call out on the SD-WAN in common versus cloud. 'Cause I think this is important because you're building a network layer in the cloud that spans out so the customers don't have to get into the, there's a gap in the system that I'm used to, my operating environment, of having lockdown security and network. >> So yeah. So what you do is you use the raw capabilities like bandwidth or virtual machines, or you know, containers, or, you know, different types of serverless capabilities. And you bring it all together in a way to solve the networking problems, thereby creating a supercloud, which is an abstraction layer which hides all the complexity of the underlying clouds from the customer, right? And it provides a common infrastructure across all environments to that customer, right? That's the beauty of it. And it does it in a way that it looks like, if they have the networking knowledge, they can apply it to this new environment and carry it forward. One way of doing security across all clouds and hybrid environments. One way of doing routing. One way of doing large-scale network address translation. One way of doing IPAM services. So people are tired of doing individual things and individual clouds and on-premise locations, right? So now they're getting something common. >> You guys brought that, you brought all that to bear and flexible for the customer to essentially self-serve their network cloud. >> Yes, yeah. Is that the wave? >> And nowadays, from business perspective, agility is the key, right? You have to move at the pace of the business. If you don't, you are losing. >> So, would it be safe to say that you guys have a network supercloud? >> Absolutely, yeah. >> We, pretty much, yeah. Absolutely. >> What does that mean to our customer? What's in it for them? What's the benefit to the customer? I got a network supercloud, it connects, provides SLA, all the capabilities I need. What do they get? What's the end point for them? What's the end? >> Atif, maybe you can talk some examples. >> The IT infrastructure is all like distributed now, right? So you have applications running in data centers. You have applications running in one cloud. Other cloud, public clouds, enterprises are depending on so many SaaS applications. 
So now, these are, you can call these endpoints. So a supercloud or a network cloud, from our perspective, it's a cloud in the middle or a network in the middle, which provides connectivity from any endpoint to any endpoint. So, you are able to connect to the supercloud or network cloud in one way no matter where you are. So now, whichever cloud you are in, whichever cloud you need to connect to. And also, it's not just connecting to the cloud. So you need to do a lot of stuff, a lot of networking inside the cloud also. So now, as Amir was saying, every cloud has its own from a networking, you know, the concept perspective or the construct, they are different. There are limitations in there also. So this supercloud, which is sitting on top, basically, your platform is sitting into the cloud, but the supercloud is built on top of using your platform. So that abstracts all those complexities, all those limitations. So now your limitations are whatever the limitations of that platform are. So now your platform, that platform is in our control. So we can keep building it, we can keep scaling it horizontally. Because one of the things is that, you know, in this cloud era, one of the things is autoscaling these services. So why can't the network now autoscale also, just like your other services. >> Network autoscaling is a genius idea, and I think that's a killer. I want to ask the the follow on question because I think, first of all, I love what you guys are doing. So, I think it's a great example of this new innovation. It's not obvious until you see it, right? Geographical is huge. So, you know, single instance, global instances, multiple instances, you're seeing global. How do you guys look at that global equation? Because as companies expand their clouds into geos, and then ultimately, you know, it's obviously continent, region and locales. You're going to have geographic issues. So, this is an extension of your network cloud? >> Amir: It is the extension of the network cloud because if you look at this hyperscalers, they're sitting pretty much everywhere in the globe. So, wherever their regions are, the beauty of building a supercloud is that you can by definition, be available in those regions. It literally takes a day or two of testing for our stack to run in those regions, to make sure there are no nuances that we run into, you know, for that region. The moment we bring it up in that region, all customers can onboard into that solution. So literally, what used to take months or years to build a global infrastructure, now, you can configure it in 10 minutes basically, and bring it up in less than one hour. Since when did we see any solution- >> And by the way, >> that can come up with. >> when the edge comes out too, you're going to start to see more clouds get bolted on. >> Exactly. And you can expand to the edge of the network. That's why we call cloud the new edge, right? >> John: Yeah, it is. Now, I think you guys got a good solutions, network clouds, superclouds, good. So the question on the premise side, so I get the cloud play. It's very cool. You can expand out. It's a nice layer. I'm sure you manage the SLAs between latency and all kinds of things. Knowing when not to do things. Physics or physics. Okay. Now, you've got the on-premise. What's the on-premise equation look like? >> So on-premise, the kind of customers, we are working with large enterprises, mid-size enterprises. So they have on-prem networks, they have deployed, in many cases, they have deployed SD-WAN. 
In many cases, they have MPLS. They have data centers also. And a lot of these companies are, you know, moving the applications from the data center into the cloud. But we still have large enterprise- >> But for you guys, you can sit there too with non server or is it a box or what is it? >> It's a software stack, right? So, we are a software company. >> Okay, so no box. >> No box. >> Okay, got it. >> No box. >> It's even better. So, we can connect any, as I mentioned, any endpoint, whether it's data centers. So, what happens is usually these enterprises from the data centers- >> John: It's a cloud endpoint for you. >> Cloud endpoint for us. And they need highspeed connectivity into the cloud. And our network cloud is sitting inside the or supercloud is sitting inside the cloud. So we need highspeed connectivity from the data centers. This is like multi-gig type of connectivity. So we enable that connectivity as a service. And as Amir was saying, you are able to bring it up in minutes, pretty much. >> John: Well, you guys have a great handle on supercloud. I really appreciate you guys coming on. I have to ask you guys, since you have so much experience in the industry, multiple inflection points you've guys lived through and we're all old, and we can remember those glory days. What's the big deal going on right now? Because you can connect the dots and you can imagine, okay, like a Lambda function spinning up some connectivity. I need instant access to a new route, throw some, I need to send compute to an edge point for process data. A lot of these kind of ad hoc services are going to start flying around, which used to be manually configured as you guys remember. >> Amir: And that's been the problem, right? The shadow IT, that was the biggest problem in the enterprise environment. So that's what we are trying to get the customers away from. Cloud teams came in, individuals or small groups of people spun up instances in the cloud. It was completely disconnected from the on-premise environment or the existing IT environment that the customer had. So, how do you bring it together? And that's what we are trying to solve for, right? At a large scale, in a carrier cloud center (indistinct). >> What do you call that? Shift right or shift left? Shift left is in the cloud native world security. >> Amir: Yes. >> Networking and security, the two hottest areas. What are you shifting? Up or down? I mean, the network's moving up the stack. I mean, you're seeing the run times at Kubernetes later' >> Amir: Right, right. It's true we're end-to-end virtualization. So you have plumbing, which is the physical infrastructure. Then on top of that, now for the first time, you have true end-to-end virtualization, which the cloud-like constructs are providing to us. We tried to virtualize the routers, we try to virtualize instances at the server level. Now, we are bringing it all together in a truly end-to-end virtualized manner to connect any endpoint anywhere across the globe. Whether it's on-premise, home, multiple clouds, or SaaS type environments. >> Yeah. If you talk about the technical benefits beyond virtualizations, you kind of see in virtualization be abstracted away. So you got end-to-end virtualization, but you don't need to know virtualization to take advantage of it. >> Exactly. Exactly. >> What are some of the tech involved where, what's the trend around on top of virtual? What's the easy button for that? 
>> So there are many, many use cases from the customers and they're, you know, some of those use cases, they used to deliver out of their data centers before. So now, because you, know, it takes a long time to spend something up in the data center and stuff. So the trend is and what enterprises are looking for is agility. And to achieve that agility, they are moving those services or those use cases into the cloud. So another technical benefit of like something like a supercloud and what we are doing is we allow customers to, you know, move their services from existing data centers into the cloud as well. And I'll give you some examples. You know, these enterprises have, you know, tons of partners. They provide connectivity to their partners, to select resources. It used to happen inside the data center. You would bring in connectivity into the data center and apply like tons of ACLs and whatnot to make sure that you are able to only connect. And now those use cases are, they need to be enabled inside the cloud. And the customer's customers are also, it's not just coming from the on-prem, they're coming from the cloud as well. So, if they're coming from the cloud as well as from on-prem, so you need like an infrastructure like supercloud, which is sitting inside the cloud and is able to handle all these use cases. So all of these use cases have to be, so that requires like moving those services from the data center into the cloud or into the supercloud. So, they're, oh, as we started building this service over the last four years, we have come across so many use cases. And to deliver those use cases, you have to have a platform. So you have to have your own platform because otherwise you are depending on somebody else's, you know, capabilities. And every time their capabilities change, you have to change. >> John: I'm glad you brought up the platform 'cause I want to get your both reaction to this. So Bob Muglia just said on theCUBE here at Supercloud, that supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. So the question is, is supercloud a platform or an architecture in your view? >> That's an interesting view on things, you know? I mean, if you think of it, you have to design or architect a solution before we turn it into a platform. >> John: It's a trick question actually. >> So it's a, you know, so we look at it as that you have to have an architectural approach end to end, right? And then you build a solution based on that approach. So, I don't think that they are mutually exclusive. I think they go hand in hand. It's an architecture that you turn into a solution and provide that agility and high availability and disaster recovery capability that it built into that. >> It's interesting that these definitions might be actually redefined with this new configuration. >> Amir: Yes. >> Because architecture and platform used to mean something, like, aight here's a platform, you buy this platform. >> And then you architecture solution. >> Architect it via vendor. >> Right, right, right. >> Okay. And they have to deal with that architecture in the place of multiple superclouds. If you have too many stove pipes, then what's the purpose of supercloud? >> Right, right, right. And because, you know, historically, you built a router and you sold it to the customer. And the poor customer was supposed to install it all, you know, and interconnect all those things. 
And if you have a 40, 50,000 router network, which we saw in our lifetime, 'cause there used to be many more branches when we were growing up in the networking industry, right? You had to create hierarchy and all kinds of things to figure out how to solve that problem. We are no longer living in that world anymore. You cannot deploy individual virtual instances. And that's the approach a lot of people are taking, which is a pure overlay network. You cannot take that approach anymore. You have to evolve the architecture and then build the solution based on that architecture so that it becomes a platform which is readily available, highly scalable, and highly available. And at the same time, it's very, very easy to deploy. It's a SaaS type solution, right? >> So you're saying, do the architecture to get the solution for the platform that the customer has. >> Amir: Yes. >> They're not buying a platform, they end up with a platform- >> With the platform. >> as a result of Supercloud path. All right. So that's what's, so you mentioned, that's a great point. I want to double click on what you just said. 'Cause I like what you said. What's the deployment strategy in your mind for supercloud? I'm an architect. I'm at an enterprise in the Midwest. I'm an insurance company, got some cloud action going on. I'm mostly on-premise. I've got the mandate to transform the company. We have apps. We'll be fully transformed in five years. What's my strategy? What do I do? >> Amir: The resources. >> What's the deployment strategy? Single global instance, code in every region, on every cloud? >> It needs to be a solution which is available as a SaaS service, right? So from the customer's perspective, they are onboarding into the supercloud. And then the supercloud is allowing them to do whatever they used to do, you know, historically and in the new world, right? That needs to come together. And that's what we have built, is that we have brought everything together in a way that what used to take months or years is now taking an hour or two hours, and then people test it for a week or so and deploy it in production. >> I want to bring up something we were talking about before we were on camera about TCP/IP, the OSI model. That was a concept that destroyed the proprietary network operating systems of the minicomputers, which brought in an era of tech prosperity for generations. TCP/IP was kind of the magical moment that allowed for that kind of super networking connection. Internetworking is what it's called as a category. It feels like something's going on here with supercloud. The way you describe it, it feels like there's this unification idea. Like the reality is we've got multiple stuff sitting around by default, you either clean it up or get rid of it, right? Or it's almost, it's either a nuance, a new nuisance or chaos. >> Yeah. And we live in the new world now. We don't have the luxury of time. So we need to move as fast as possible to solve the business problems. And that's what we are running into. If we don't have automated solutions which scale, which solve our problems, then it's going to be a problem. And that's why SaaS is so important in today's world. Why should we have to deploy the network piecemeal? Why can't we have a solution? We solve our problem as we move forward and we accomplish what we need to accomplish and move forward. >> And we don't really need standards here, dude. It's not that we need a standards body if you have unification. 
>> So because things move so fast, there's no time to create a standards body. And that's why you see companies like ours popping up, which are trying to create a common infrastructure across all clouds. Otherwise if we vent the standardization path may take long. Eventually, we should be going in that direction. But we don't have the luxury of time. That's what I was trying to get to. >> Well, what's interesting is, is that to your point about standards and ratification, what ratifies a defacto anything? In the old days there was some technical bodies involved, but here, I think developers drive everything. So if you look at the developers and how they're voting with their code. They're instantly, organically defining everything as a collective intelligence. >> And just like you're putting out the paper and making it available, everybody's contributing to that. That's why you need to have APIs and terra form type constructs, which are available so that the customers can continue to improve upon that. And that's the Net DevOps, right? So that you need to have. >> What was once sacrilege, just sayin', in business school, back in the days when I got my business degree after my CS degree was, you know, no one wants to have a better mousetrap, a bad business model to have a better mouse trap. In this case, the better mouse trap, the better solution actually could be that thing. >> It is that thing. >> I mean, that can trigger, tips over the industry. >> And that that's where we are seeing our customers. You know, I mean, we have some publicly referenceable customers like Coke or Warner Music Group or, you know, multiple others and chart industries. The way we are solving the problem. They have some of the largest environments in the industry from the cloud perspective. And their whole network infrastructure is running on the Alkira infrastructure. And they're able to adopt new clouds within days rather than waiting for months to architect and then deploy and then figure out how to manage it and operate it. It's available as a service. >> John: And we've heard from your customer, Warner, they were just on the program. >> Amir: Yes. Okay, okay. >> So they're building a supercloud. So superclouds aren't just for tech companies. >> Amir: No. >> You guys build a supercloud for networking. >> Amir: It is. >> But people are building their own superclouds on top of all this new stuff. Talk about that dynamic. >> Healthcare providers, financials, high-tech companies, even startups. One of our startup customers, Tekion, right? They have these dealerships that they provide sales and support services to across the globe. And for them to be able to onboard those dealerships, it is 80% less time to production. That is real money, right? So, maybe Atif can give you a lot more examples of customers who are deploying. >> Talk about some of the customer activity. What are they like? Are they laggards, they innovators? Are they trying to hit the easy button? Are they coming in late or are you got some high customers? >> Actually most of our customers, all of our customers or customers in general. I don't think they have a choice but to move in this direction because, you know, the cloud has, like everything is quick now. So the cloud teams are moving faster in these enterprises. So now that they cannot afford the network nor to keep up pace with the cloud teams. 
So, they don't have a choice but to go with something similar where you can, you know, build your network on demand and bring up your network as quickly as possible to meet all those use cases. So, I'll give you an example. >> John: So the demand's high for what you guys do. >> Demand is very high because the cloud teams have- >> John: Yeah. They're going fast. >> They're going fast and there's no stopping. And then network teams, they have to keep up with them. And you cannot keep deploying, you know, networks the way you used to deploy back in the day. And as far as the use cases are concerned, there are so many use cases which our customers are using our platform for. One of the use cases, I'll give you an example of these financial customers. Some of the financial customers, they have their customers who they provide data to, like stock exchanges that provide market data information to their customers out of their data centers. But now, their customers are moving into the cloud as well. So they need to come in from the cloud. So when they're coming in from the cloud, you cannot be giving them data from your data center, because that takes time and you're hairpinning everything back. >> Moving data is like moving money, someone said. >> Exactly. >> Exactly. And the other thing is, you have to optimize your traffic flows in the cloud as well, because every time you leave the cloud, you get charged a lot. So, you don't want your traffic to leave the cloud unless it has to. So, you have to come up with or use a service which allows you to optimize all those traffic flows as well, you know? >> My final question to you guys, first of all, thanks for coming on the Supercloud program. Really appreciate it. Congratulations on your success. And you guys have a great positioning and I'm a big fan. And I have to ask, you guys are an agile, nimble startup, smart, on the cutting edge. The supercloud concept seems to resonate with people who are kind of on the front range of this major wave. While all the incumbents like Cisco, Microsoft, even AWS, they're like, I think they're looking at it, like what is that? I think it's coming up really fast, this trend. Because I know people talk about multi-cloud, I get that. But like, this whole supercloud is not just SaaS, there's more going on there. What do you think is going on between the folks who get it, who get the supercloud concept, and some who are scratching their heads, whether it's the Ciscos or someone, like, I don't get it. Why is supercloud important for the folks that aren't really seeing it? >> So first of all, I mean, the customers, what we saw about six months, 12 months ago, they were a little slower to adopt the supercloud kind of concept. And there were leading edge customers who were coming and adopting it. Now, all of a sudden, over the last six to nine months, we've seen a flurry of customers coming in and they are from all disciplines, a very diverse set of customers. And they're starting to see the value of that because of the practical implications of what they're doing. You know, these shadow IT type environments are no longer working and there's a lot of pressure from the management to move faster. And that's where they're coming in. And perhaps, Atif, you can give a few examples of that. >> Yeah. And I'll also just add to your point earlier about the network needing to be there, 'cause the cloud teams are like, let's go faster. And the network's always been slow, but now, it's been almost turbocharged.
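To put the hairpinning and egress point above in rough numbers, here is a small back-of-the-envelope calculation. The per-gigabyte rate and traffic volume are assumptions chosen only to show the shape of the math; actual egress pricing varies by provider, region, and volume tier, and the model simplifies the hairpinned path as crossing a paid egress boundary twice.

# Back-of-the-envelope egress comparison; all numbers are illustrative assumptions.
EGRESS_PER_GB = 0.09          # assumed blended $/GB for traffic leaving a cloud region
MONTHLY_TRAFFIC_GB = 50_000   # assumed monthly market-data volume served to cloud consumers

# Hairpinned path: cloud consumer -> back through the on-prem data center -> cloud again.
# Simplified here as paying the egress rate on two boundary crossings.
hairpin_cost = MONTHLY_TRAFFIC_GB * EGRESS_PER_GB * 2

# Cloud-optimized path: traffic stays inside (or exits once from) the cloud.
direct_cost = MONTHLY_TRAFFIC_GB * EGRESS_PER_GB

print(f"hairpinned path : ${hairpin_cost:,.0f} per month")
print(f"optimized path  : ${direct_cost:,.0f} per month")
print(f"rough savings   : ${hairpin_cost - direct_cost:,.0f} per month, before latency benefits")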
>> Atif: Yeah. Yeah, exactly. And as I said, like there was no choice here. You had to move in this industry. And the other thing I would add a little bit is, now if you look at all these enterprises, most of their traffic, even the traffic coming from the on-prem, is going to the cloud SaaS applications or public clouds. And it's more than 50% of traffic which is leaving, you know, what you used to call your network, or the private network. So now it's like, you know, before it used to just connect sites to data centers and sites together. Now, it's the cloud as well as the SaaS applications. So it's either internet bound or public cloud bound. So now you have to build a network quickly, which caters to all these use cases. And that's where like something- >> And you guys, your solution to me is you eliminate all that work for the customer. Now, they can treat the cloud like a bag of Legos and do their thing. Well, I oversimplify, but you know what I'm talking about. >> Atif: Right, exactly. >> And to answer your question earlier about the big companies coming in and, you know, why they're slow to adopt? And, you know, what normally happens is, when Cisco came up, right? There used to be 16 different protocol suites. And then we finally settled on TCP/IP, and DECnet or AppleTalk or XNS or, you know, you name it, right? Those companies did not adapt to the networking the way it was supposed to be done. And guess what happened, right? So if the companies in the networking space do not adopt this new concept or new way of doing things, I think some of them will become extinct over time. >> Well, I think the forcing function too is the cloud teams as well. So you got two evolutions. You got architectural relevance. That's real impact. >> It's very important. >> Cost, speed. >> And I look at it as a very similar disruption to what the Ciscos of the world, in the very early days, did to, you know, bring the networking out, right? And it became the internet. But now we are going through the cloud. It's the cloud era, right? How does the cloud evolve over the next 10, 15, 20 years? Everything is going to be offered as a service, right? So slowly data centers go away, the network becomes a plumbing thing. Very, you know, simple to deploy. And everything on top of that is virtualized in a cloud-like manner. >> And that makes the networks hardened and more secure. >> More secure. >> It's a great way to be secure. You remember the glory days, we'll go back 15 years. The Cisco conversation was, we got to move up the stack. All the managers would fight each other. Now, what does that actually mean? Stay where we are. Stay in your lane. This is kind of like the network's version of moving up the stack, because it's not so much up the stack, but the cloud is everywhere. It's almost horizontally scaled. >> It's extending into the on-premise. It is already moving towards the edge, right? So, you will see a lot- >> So, programmability is a big part of it. So you guys are hitting programmability, compatibility, getting people into an environment they're comfortable operating in. So the Ops people love it. >> Exactly. >> Spans the clouds to a level of SLA management. It might not be perfectly spanning applications, but you can actually know latencies between clouds, measure that. And then so you're basically managing your network now as the overall infrastructure. >> Right. And it needs to be a very intelligent infrastructure going forward, right?
Because customers do not want to wait to be able to troubleshoot. They don't want to have to wait to deploy something, right? So, there needs to be a level of automation. >> Okay. So the question for you both that we'll end on is, what is the enablement? Because you guys are a disruptive enabler, right? You create this fabric. You're going to enable companies to do stuff. What are some of the things that you see, and your customers might be seeing, as things that they're going to do as a result of having this enablement? So what are some of those things? >> Amir: Atif, perhaps you can talk through some of the customer experience on that. >> It's agility. And we are allowing these customers to move very, very quickly and build these networks which meet all these requirements inside the cloud. Because as Amir was saying, in the cloud era, networking is changing. And if you look at, you know, going back to your comment about the existing networking vendors, some of them still think that, you know, just connecting to the cloud using some concepts like Cloud OnRamp is cloud networking, but it's changing now. >> John: 'Cause there's apps that are depending upon it. >> Exactly. And it's all distributed. Like IT infrastructure, as I said earlier, is all distributed. And at the end of the day, you have to make sure that wherever your user is, wherever your app is, you are able to connect them securely. >> Historically, it used to be about building a router bigger and bigger and bigger and bigger, you know, and then interconnecting those routers. Now, it's all about horizontal scale. You don't need to build big, you need to scale it, right? And that's what cloud brings to the customer. >> It's a cultural change for Cisco and Juniper because they have to understand that they still could be in the game and still win. >> Exactly. >> The question I have for you, what are your customers telling you, what's some of the anecdotal, like, 'cause you guys have a good solution, is it, "Oh my god, you guys saved my butt"? What are some of the commentary that you hear from the customers in terms of praise and glory for your solution? >> Oh, some even say, when we do our demo and stuff, they say it's too hard to believe. >> Believe. >> Like, too hard. It's hard to believe, you know. >> "I don't believe you." They're skeptics. >> They say, "I don't believe you," because now you're able to bring up a global network within minutes, with networking services. Like, let's say you have APAC, you know, on-prem users, cloud also there, cloud here, users here. You can bring up a global network with full routed connectivity between all these endpoints, with security services. You can bring up like a firewall from a third party, or our services, in the middle. This is a matter of minutes now. And this is all high speed connectivity with SLAs. Imagine, like before, connecting, you know, Singapore to U.S. East or Hong Kong to Frankfurt, you know, if you were putting your infrastructure in colos like Equinix, you would have to go, you know, figure out like, how am I going to- >> Lease a line in, connect to it? Yeah. A lot of hassles. >> If you had to put like firewalls in the middle, segmentation, you had to, you know, isolate different entities. >> That's called heavy lifting. >> So what you're seeing is, you know, it's like a customer comes in, there's a disbelief, can you really do that? And then they try it out, they go, "Wow, this works." Right? It's deployed in a small environment.
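The "global network within minutes" workflow described above is, at its core, a short sequence of API calls: create a network, attach cloud regions and on-prem sites, and insert a security service in the traffic path. The sketch below models that sequence with a stand-in client class so it stays self-contained and runnable; the class, method names, and regions are illustrative assumptions, not a real vendor SDK.

# Stand-in model of the provisioning sequence described above; not a real SDK.
from dataclasses import dataclass, field

@dataclass
class NetworkAsAServiceClient:
    """Toy stand-in that just records the provisioning steps in order."""
    steps: list = field(default_factory=list)

    def create_network(self, name):
        self.steps.append(f"create network '{name}'")

    def attach_cloud(self, network, provider, region):
        self.steps.append(f"attach {provider}/{region} to '{network}'")

    def attach_site(self, network, site):
        self.steps.append(f"attach on-prem site '{site}' to '{network}'")

    def insert_service(self, network, service):
        self.steps.append(f"insert {service} into the '{network}' traffic path")

client = NetworkAsAServiceClient()
client.create_network("global-corp-net")
client.attach_cloud("global-corp-net", "aws", "us-east-1")            # assumed regions
client.attach_cloud("global-corp-net", "azure", "germanywestcentral")
client.attach_site("global-corp-net", "hong-kong-dc")
client.insert_service("global-corp-net", "third-party-firewall")

for step in client.steps:
    print(step)

Compared with leasing circuits into a colo and racking firewalls, the work collapses into the ordering of these calls, which is why the demo reaction quoted above is disbelief.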
And then all of a sudden they start taking off, right? And literally we have seen customers go from a few-thousand-dollars-a-month-or-a-year type of deployment to multi-million-dollars-a-year type deployments in a very, very short amount of time, in a few months. >> And you guys are pay as you go? >> Pay as you go. >> Pay-as-you-go usage, cloud-based compatibility. >> Exactly. And it's amazing once they get to deploy the solution. >> What's the variable on the cost? >> On the cost? >> Is it traffic, or is it... >> It's multiple different things. It's packaged into the overall solution. And as a matter of fact, we end up saving a lot of money for the customers. And not only in one way, in multiple different ways. And we do a complete TCO analysis for the customers. So it's bandwidth, it's number of connections, it's the amount of compute power that we are using. >> John: Similar things that they're used to. >> Just like the cloud constructs. Yeah. >> All right. Networking supercloud. Great. Congratulations. >> Thank you so much. >> Thanks for coming on Supercloud. >> Atif: Thank you. >> And looking forward to seeing more of the demand. Translates to instant networking. I'm sure it's going to be huge with the edge exploding. >> Oh yeah, yeah, yeah, yeah. >> Congratulations. >> Thank you so much. >> Thank you so much. >> Okay. So this is the Supercloud 2 event here in Palo Alto. I'm John Furrier. The network supercloud is here. Check out Alkira. I'm John Furrier, the host. Thanks for watching. (lively music)

Published Date : Feb 17 2023



Breaking Analysis: Enterprise Technology Predictions 2023


 

(upbeat music beginning) >> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is "Breaking Analysis" with Dave Vellante. >> Making predictions about the future of enterprise tech is more challenging if you strive to lay down forecasts that are measurable. In other words, if you make a prediction, you should be able to look back a year later and say, with some degree of certainty, whether the prediction came true or not, with evidence to back that up. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we aim to do just that, with predictions about the macro IT spending environment, cost optimization, security, lots to talk about there, generative AI, cloud, and of course supercloud, blockchain adoption, data platforms, including commentary on Databricks, Snowflake, and other key players, automation, events, and we may even have some bonus predictions around quantum computing, and perhaps some other areas. To make all this happen, we welcome back, for the third year in a row, my colleague and friend Eric Bradley from ETR. Eric, thanks for all you do for the community, and thanks for being part of this program again. >> I wouldn't miss it for the world. I always enjoy this one. Dave, good to see you. >> Yeah, so let me bring up this next slide and show you, actually come back to me if you would. I got to show the audience this. These are the inbounds that we got from PR firms starting in October around predictions. They know we do prediction posts. And so they'll send literally thousands and thousands of predictions from hundreds of experts in the industry, technologists, consultants, et cetera. And if you bring up the slide, I can show you sort of the pattern that developed here. 40% of these thousands of predictions were from cyber. You had AI and data. If you combine those, it's still not close to cyber. Cost optimization was a big thing. Of course, cloud, some on DevOps, and software. Digital... Digital transformation got, you know, some lip service, and SaaS. And then there was other, which is kind of around 2%. So quite remarkable, when you think about the focus on cyber, Eric. >> Yeah, there's two reasons why I think it makes sense, though. One, the cybersecurity companies have a lot of cash, so therefore the PR firms might be working a little bit harder for them than some of their other clients. (laughs) And then secondly, as you know, for multiple years now, when we do our macro survey, we ask, "What's your number one spending priority?" And again, it's security. It just isn't going anywhere. It just stays at the top. So I'm actually not that surprised by that little pie chart there, but I was shocked that SaaS was only 5%. You know, going back 10 years ago, that would've been the only thing anyone was talking about. >> Yeah. So true. All right, let's get into it. First prediction, we always start with kind of tech spending. Number one is tech spending increases between 4 and 5%. ETR has currently got it at 4.6% coming into 2023. This has been a consistently downward trend all year. We started, you know, much, much higher, as we've been reporting. Bottom line is the Fed is still in control. They're going to ease up on tightening, is the expectation; they're going to shoot for a soft landing. But you know, my feeling is this slingshot economy is going to continue, and it's going to continue to confound, whether it's supply chains or spending.
The interesting thing about the ETR data, Eric, and I want you to comment on this, the largest companies are the most aggressive to cut. They're laying off; smaller firms are spending faster. They're actually growing at a much larger, faster rate, as are companies in EMEA. And that's a surprise. That's outpacing the US and APAC. Chime in on this, Eric. >> Yeah, I was surprised on all of that. First, on the higher level spending, we are definitely seeing it coming down, but the interesting thing here is headlines are making it worse. The huge research shop recently said 0% growth. We're coming in at 4.6%. And just so everyone knows, this is not us guessing, we asked 1,525 IT decision-makers what their budget growth will be, and they came in at 4.6%. Now there's a huge disparity, as you mentioned. The Fortune 500, Global 2000, barely at 2% growth, but small, it's at 7%. So we're at a situation right now where the smaller companies are still playing a little bit of catch-up on digital transformation, and they're spending money. The largest companies that have the most to lose from a recession are being more trepidatious, obviously. So they're playing a "wait and see." And I hope we don't talk ourselves into a recession. Certainly the headlines and some of their research shops are helping it along. But another interesting comment here is, you know, energy and utilities used to be called an orphan and widow stock group, right? They are spending more than anyone, more than financials, insurance, more than retail, consumer. So right now it's being driven by mid, small, and energy and utilities. They're all spending like gangbusters, like nothing's happening. And it's the rest of everyone else that's being very cautious. >> Yeah, so very unpredictable right now. All right, let's go to number two. Cost optimization remains a major theme in 2023. We've been reporting on this. We've shown a chart here. What's the primary method that your organization plans to use? You asked this question of those individuals that cited that they were going to reduce their spend and- >> Mhm. >> consolidating redundant vendors, you know, still leads the way; you know, far behind, cloud optimization is second, but cloud continues to outpace legacy on-prem spending, no doubt. Somebody, it was, the guy's name was Alexander Feiglstorfer from Storyblok, sent in a prediction, said "All in one becomes extinct." Now, generally I would say I disagree with that because, you know, as we know over the years, suites tend to win out over, you know, individual, you know, point products. But I think what's going to happen is all in one is going to remain the norm for these larger companies that are cutting back. They want to consolidate redundant vendors, and the smaller companies are going to stick with that best of breed and be more aggressive and try to compete more effectively. What's your take on that? >> Yeah, I'm seeing much more consolidation in vendors, but also consolidation in functionality. We're seeing people building out new functionality, whether it's, we're going to talk about this later, so I don't want to steal too much of our thunder right now, but in data and security also, we're seeing a functionality creep. So I think there's further consolidation happening here. I think niche solutions are going to be less likely, and platform solutions are going to be more likely, in a spending environment where you want to reduce your vendors. You want to have one bill to pay, not 10.
Another thing on this slide, real quick if I can before I move on, is we had a bunch of people write in, and some of the answer options that aren't on this graph but did get cited a lot, unfortunately: the obvious reduction in staff, hiring freezes, and delaying hardware were three of the top write-ins. And another one was offshore outsourcing. So in addition to what we're seeing here, there were a lot of write-in options, and I just thought it would be important to state that, but essentially the cost optimization is by far the highest one, and it's growing. So it's actually increased in our citations over the last year. >> And yeah, specifically consolidating redundant vendors. And so I actually thank you for bringing that up, 'cause I had asked you, Eric, is there any evidence that repatriation is going on, and we don't see it in the numbers, we don't see it even in the other; there was, I think, very little or no mention of cloud repatriation, even though it might be happening in a smattering. >> Not a single mention, not one single mention. I went through it for you. Yep. Not one write-in. >> All right, let's move on. Number three, security leads M&A in 2023. Now you might say, "Oh, well that's a layup," but let me set this up, Eric, because I didn't really do a great job with the slide. I hid what you've done, because you basically took, this is from the emerging technology survey with 1,181 responses from November. And what we did is we took Palo Alto and looked at the overlap in Palo Alto Networks accounts with these vendors that were showing on this chart. And Eric, I'm going to ask you to explain why we put a circle around OneTrust, but let me just set it up, and then have you comment on the slide and give us more detail. We're seeing private company valuations are off, you know, 10 to 40%. We saw Snyk do a down round, but pretty good actually, only down 12%. We've seen much higher down rounds. Palo Alto Networks, we think, is going to get busy. Again, they're an inquisitive company, they've been sort of quiet lately, and we think CrowdStrike, Cisco, Microsoft, Zscaler, we're predicting all of those will make some acquisitions, and we're thinking that the targets are somewhere in this mess of security taxonomy. Other thing we're predicting: AI meets cyber big time in 2023, and we're probably going to see some acquisitions of those companies that are leaning into AI. We've seen some of that with Palo Alto. And then, you know, your comment to me, Eric, was "The RSA conference is going to be insane, hopping mad, crazy this April," (Eric laughing) but give us your take on this data, and why the red circle around OneTrust? Take us back to that slide if you would, Alex. >> Sure. There's a few things here. First, let me explain what we're looking at. So because we separate the public companies and the private companies into two separate surveys, this allows us the ability to cross-reference that data. So what we're doing here is, in our public survey, the TSIS, everyone who cited some spending with Palo Alto, meaning they're a Palo Alto customer, we then cross-reference that with the private tech companies. Who else are they spending with? So what you're seeing here is an overlap. These companies that we have circled are doing the best in Palo Alto's accounts. Now, Palo Alto went and bought Twistlock a few years ago, which this data slide predicted, to be quite honest. And so I don't know if they necessarily are going to go after Snyk. Snyk, sorry.
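The cross-referencing Eric describes, which accounts cite spending with Palo Alto in the public-company survey and which private vendors those same accounts also cite, reduces to a simple set intersection. A minimal sketch of that overlap calculation is below, using invented account IDs and vendor names purely to show the mechanics; it is not ETR's actual data or exact methodology.

# Toy overlap calculation in the spirit of the survey cross-reference described above.
# Account IDs and vendor citations are invented for illustration.
public_survey = {
    "Palo Alto Networks": {"acct01", "acct02", "acct03", "acct04", "acct05"},
}
private_survey = {
    "OneTrust":    {"acct01", "acct03", "acct05", "acct09"},
    "BeyondTrust": {"acct02", "acct03", "acct07"},
    "VendorX":     {"acct08", "acct09"},
}

palo_accounts = public_survey["Palo Alto Networks"]
for vendor, accounts in private_survey.items():
    shared = accounts & palo_accounts
    overlap_pct = 100 * len(shared) / len(palo_accounts)
    print(f"{vendor:12s} overlaps {overlap_pct:4.0f}% of Palo Alto accounts ({sorted(shared)})")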
They already have something in that space. What they do need, however, is more in the authentication space. So I'm looking at OneTrust, with a 45% overlap in their overall net sentiment. That is a company that's already existing in their accounts and could be very synergistic to them. BeyondTrust as well, authentication and identity. This is something that Palo needs to do to move more down that zero trust path. Now why did I pick Palo first? Because usually they're very inquisitive. They've been a little quiet lately. Secondly, if you look at the backdrop in the markets, the IPO freeze isn't going to last forever. Sooner or later, the IPO markets are going to open up, and some of these private companies are going to tap into public equity. In the meantime, however, cash funding on the private side is drying up. If they need another round, they're not going to get it, and they're certainly not going to get it at the valuations they were getting. So we're seeing valuations maybe come down to where they're a touch more attractive, and Palo knows this isn't going to last forever. Cisco knows that; CrowdStrike, Zscaler, all these companies that are trying to make a push to become that vendor that you're consolidating around, they have a chance now, they have a window where they need to go make some acquisitions. And that's why I believe, leading up to RSA, we're going to see some movement. I think it's going to be a really exciting time in security right now. >> Awesome. Thank you. Great explanation. All right, let's go on to the next one. Number four, it relates to security, so let's stay there. Zero trust moves from hype to reality in 2023. Now again, you might say, "Oh yeah, that's a layup." A lot of these inbounds that we got are very, you know, kind of self-serving, but we always try to put some meat on the bone. So first thing we do is we pull out some commentary from, Eric, your roundtable, your insights roundtable. And we have a CISO from a global hospitality firm who says, "For me that's the highest priority." He's talking about zero trust because it's the best ROI, it's the most forward-looking, and it enables a lot of the business transformation activities that we want to do. CISOs tell me that they actually can drive forward transformation projects that have zero trust, because they can accelerate them, because they don't have to go through the hurdle of, you know, making sure that it's secure. Second comment: zero trust closes that last mile where, once you're authenticated, they open up the resource to you in a zero trust way. That's from a CISO and a managing director of a cyber risk services enterprise. Your thoughts on this? >> I can be here all day, so I'm going to try to be quick on this one. This is not a fluff piece on this one. There's a couple of other reasons this is happening. One, the board finally gets it. Zero trust at first was just a marketing hype term. Now the board understands it, and that's why CISOs are able to push through it. And what they finally did was redefine what it means. Zero trust simply means moving away from hardware security, moving towards software-defined security, with authentication as its base. The board finally gets that, and now they understand that this is necessary and it's being moved forward. The other reason it's happening now is hybrid work is here to stay. We weren't really sure at first; large companies were still trying to push people back to the office, and it's going to happen.
The pendulum will swing back, but hybrid work's not going anywhere. Basically, on our own data, we're seeing that 69% of companies expect remote and hybrid to be permanent, with only 30% permanent in office. Zero trust works for a hybrid environment. So all of that is the reason why this is happening right now. And going back to our previous prediction, this is why we're picking Palo, this is why we're picking Zscaler to make these acquisitions. Palo Alto needs to be better on the authentication side, and so does Zscaler. They're both fantastic on zero trust network access, but they need the authentication, software-defined aspect, and that's why we think this is going to happen. One last thing, in that CISO roundtable, I also had somebody say, "Listen, Zscaler is incredible. They're doing incredibly well pervading the enterprise, but their pricing's getting a little high," and they actually think Palo Alto is well-suited to start taking some of that share, if Palo can make one move. >> Yeah, Palo Alto's consolidation story is very strong. Here's my question and challenge. You know me, I'm always hardcore about, okay, you've got to have evidence. I want to look back at these things a year from now and say, "Did we get it right? Yes or no?" If we got it wrong, we'll tell you we got it wrong. So how are we going to measure this? I'd say a couple things, and you can chime in. One is just the number of vendors talking about it. But the marketing always leads the reality. So the second part of that is we got to get evidence from the buying community. Can you help us with that? >> (laughs) Luckily, that's what I do. I have a data company that asks thousands of IT decision-makers what they're adopting and what they're increasing spend on, as well as what they're decreasing spend on and what they're replacing. So I have snapshots in time over the last 11 years where I can go ahead and compare and contrast whether this adoption is happening or not. So come back to me in 12 months and I'll let you know. >> Now, you know, I will. Okay, let's bring up the next one. Number five, generative AI hits where the Metaverse missed. Of course everybody's talking about ChatGPT; we just wrote last week in a Breaking Analysis with John Furrier and Sarjeet Johal our take on that. We think 2023 does mark a pivot point, as natural language processing really infiltrates enterprise tech, just as Amazon turned the data center into an API. We think going forward, you're going to be interacting with technology through natural language, through English commands or other, you know, foreign language commands, and investors are lining up, all the VCs are getting excited about creating something competitive to ChatGPT. According to (indistinct), a hundred million dollars gets you a seat at the table, gets you into the game. (laughing) That's before you have to start doing promotion. But he thinks that's what it takes to actually create a clone or something equivalent. We've seen stuff from, you know, the head of Facebook's, you know, AI saying, "Oh, it's really not that sophisticated, ChatGPT, it's kind of like IBM Watson, it's great engineering, but you know, we've got more advanced technology." We know Google's working on some really interesting stuff. But here's the thing. ETR just launched this survey, the February survey. It's in the field now. We circled OpenAI in this category. They weren't even in the survey, Eric, last quarter.
So 52% of the ETR survey respondents indicated a positive sentiment toward OpenAI. I added up all the sort of different bars; we could double click on that. And then I got this inbound from Scott Stevenson of Deepgram. He said, "AI is recession-proof." I don't know if that's the case, but it's a good quote. So bring this back up and take us through this. Explain this chart for us, if you would. >> First of all, I like Scott's quote better than the Facebook one. I think that's some sour grapes. Meta just spent an insane amount of money on the Metaverse and that's a dud. Microsoft just spent money on OpenAI and it is hot, undoubtedly hot. We've only been in the field with our current ETS survey for a week. So my caveat is it's preliminary data, but I don't care if it's preliminary data. (laughing) We're getting a sneak peek here at what is the number one net sentiment and mindshare leader in the entire machine-learning AI sector within a week. It's beating Data- >> 600. 600 responses in. >> It's beating Databricks. And we all know Databricks is a huge established enterprise company, not only in machine-learning AI, but it's in the top 10 in the entire survey. We have over 400 vendors in this survey. It's number eight overall, already. In a week. This is not hype. This is real. And I could go on the NLP stuff for a while. Not only are we seeing it here in OpenAI and machine-learning and AI, but we're seeing NLP in security. It's huge in email security. It's completely transforming that area. It's one of the reasons I thought Palo might take Abnormal out. They're doing such a great job with NLP on this email side, and also in the data prep tools. NLP is going to take out data prep tools. If we have time, I'll discuss that later. But yeah, this is, to me this is a no-brainer, and we're already seeing it in the data. >> Yeah, John Furrier called, you know, the ChatGPT introduction, he said it reminded him of the Netscape moment, when we all first saw Netscape Navigator and went, "Wow, it really could be transformative." All right, number six, the cloud expands to supercloud as edge computing accelerates, and CloudFlare is a big winner in 2023. We've reported obviously on cloud, multi-cloud, supercloud and CloudFlare, basically saying what multi-cloud should have been. We pulled this quote from Atif Khan, who is the founder and CTO of Alkira, thanks, one of the inbounds, thank you. "In 2023, highly distributed IT environments will become more the norm as organizations increasingly deploy hybrid cloud, multi-cloud and edge settings..." Eric, from one of your roundtables: "If my sources from edge computing are coming from the cloud, that means I have my workloads running in the cloud. There is no one better than CloudFlare." That's a senior director of IT architecture at a huge financial firm. And then your analysis shows CloudFlare really growing in pervasion, that sort of market presence in the dataset, dramatically, to near 20%, leading; I think you had told me that they're even ahead of Google Cloud in terms of momentum right now. >> That was probably the biggest shock to me in our January 2023 TSIS, which covers the public companies in the cloud computing sector. CloudFlare has now overtaken GCP in overall spending, and I was shocked by that. It's already extremely pervasive in networking, of course, for the edge networking side, and also in security.
This is the number one leader in SASE, web application firewall, DDoS, bot protection. By your definition of supercloud, which we just did a couple of weeks ago, and I really enjoyed that by the way, Dave, I think CloudFlare is the one that fits your definition best, because it's bringing all of these aspects together, and most importantly, it's cloud agnostic. It does not need to rely on Azure or AWS to do this. It has its own cloud. So I just think, when we look at your definition of supercloud, CloudFlare is the poster child. >> You know, what's interesting about that too, is a lot of people are poo-pooing CloudFlare, "Ah, it's, you know, really kind of not that sophisticated." "You don't have as many tools," but to your point, you can have those tools in the cloud. CloudFlare's doing serverless on steroids, trying to keep things really simple, doing a phenomenal job at, you know, various locations around the world. And they're definitely one to watch. Somebody put them on my radar (laughing) a while ago and said, "Dave, you got to do a breaking analysis on CloudFlare." And so I want to thank that person. I can't really name them, 'cause they work inside of a giant hyperscaler. But- (Eric laughing) (Dave chuckling) >> Real quickly, if I can, from a competitive perspective too, who else is there? They've already taken share from Akamai, and Fastly is really their only other direct comp, and they're not there. And these guys are in pole position and they're the only game in town right now. I just, I don't see it slowing down. >> I thought one of your comments from your roundtable I was reading, one of the folks said, you know, CloudFlare, if my workloads are in the cloud, they are, you know, dominant; they said not as strong with on-prem. And so Akamai is doing better there. I'm like, "Okay, where would you want to be?" (laughing) >> Yeah, which one of those two would you rather be? >> Right? Anyway, all right, let's move on. Number seven, blockchain continues to look for a home in the enterprise, but devs will slowly begin to adopt in 2023. You know, blockchains have got a lot of buzz; obviously crypto is, you know, the killer app for blockchain. A senior IT architect in financial services, from one of your insight roundtables, said, quote, "For enterprises to adopt a new technology, there have to be proven turnkey solutions. My experience in talking with my peers is, blockchain is still an open-source component where you have to build around it." Now I want to thank Ravi Mayuram, who's the CTO of Couchbase, who sent in, you know, one of the predictions. He said, "DevOps will adopt blockchain, specifically Ethereum." And he referenced actually in his email to me Solidity, which is the programming language for Ethereum: it "will be in every DevOps pro's playbook, mirroring the boom in machine-learning. Newer programming languages like Solidity will enter the toolkits of devs." His point there, you know, Solidity, for those of you who don't know: you know, Bitcoin is not programmable. Solidity, you know, came out and that was their whole shtick, and they've been improving that, and so forth. But, Eric, it's true, it really hasn't found its home despite, you know, the potential for smart contracts. IBM's pushing it, VMware has had announcements, and others, but it really hasn't found its way in the enterprise yet. >> Yeah, and I got to be honest, I don't think it's going to, either.
So when we did our top trends series, this was basically chosen as an anti-prediction, I would guess, that it just continues to not gain hold. And the reason why was that first comment, right? It's very much a niche solution that requires a ton of custom work around it. You can't just plug and play it. And at the end of the day, let's be very real what this technology is: it's a database ledger, and we already have database ledgers in the enterprise. So why is this a priority to move to a different database ledger? It's going to be very niche cases. I like the CTO comment from Couchbase about it being adopted by DevOps. I agree with that, but it has to be DevOps in a very specific use case, and a very sophisticated use case in financial services, most likely. And that's not across the entire enterprise. So I just think it's still going to struggle to get its foothold for a little bit longer, if ever. >> Great, thanks. Okay, let's move on. Number eight, AWS, Databricks, Google, and Snowflake lead the data charge, with Microsoft keeping it simple. So let's unpack this a little bit. This is the shared accounts peer position. I pulled data platforms in for analytics, machine-learning and AI, and database, so I could grab all these accounts, or these vendors, and see how they compare in those three sectors: analytics, machine-learning, and database. Snowflake and Databricks, you know, they're on a crash course, as you and I have talked about. They're battling to be the single source of truth in analytics. There's going to be a big focus, they've already started, and it's going to be accelerated in 2023, on open formats. Iceberg, Python, you know, they're all the rage. We heard about Iceberg at Snowflake Summit, last summer or last June. Not a lot of people had heard of it, but of course the Databricks crowd knows it well. A lot of other open source tooling. There's a company called DBT Labs, which you're going to talk about in a minute. George Gilbert put them on our radar. We just had Tristan Handy, the CEO of DBT Labs, on at Supercloud last week. They are a new disruptor in data. They're essentially API-ifying, if you will, KPIs inside the data warehouse and dramatically simplifying that whole data pipeline. So really, you know, the ETL guys should be shaking in their boots with them. Coming back to the slide. Google really remains focused on BigQuery adoption. Customers have complained to me that they would like to use Snowflake with Google's AI tools, but they're being forced to go to BigQuery. I got to ask Google about that. AWS continues to stitch together its bespoke data stores; it's gone down that "right tool for the right job" path. David Floyer two years ago said, "AWS absolutely is going to have to solve that problem." We saw them start to do it at Reinvent, bringing together NoETL between Aurora and Redshift, and really trying to simplify those worlds. There's going to be more of that. And then Microsoft, they're just making it cheap and easy to use their stuff, you know, despite some of the complaints that we hear in the community, you know, about things like Cosmos. But Eric, your take? >> Yeah, my concern here is that Snowflake and Databricks are fighting each other, and it's allowing AWS and Microsoft to kind of catch up against them, and I don't know if that's the right move for either of those two companies individually. Azure and AWS are building out functionality. Are they as good? No, they're not.
The other thing to remember too is that AWS and Azure get paid anyway, because both Databricks and Snowflake run on top of 'em. So (laughing) they're basically collecting their toll, while these two fight it out with each other, and they build out functionality. I think they need to stop focusing on each other a little bit, and think about the overall strategy. Now for Databricks, we know they came out first as a machine-learning AI tool. They were known better for that spot, and now they're really trying to play catch-up on that data storage compute spot, and inversely for Snowflake, they were killing it with the compute separation from storage, and now they're trying to get into the ML/AI spot. I actually wouldn't be surprised to see them make some sort of acquisition. Frank Slootman has been a little bit quiet, in my opinion, there. The other thing to mention is your comment about DBT Labs. If we look at our emerging technology survey, last survey when this came out, DBT Labs, number one leader in that data integration space, I'm going to just pull it up real quickly. It looks like they had a 33% overall net sentiment to lead data analytics integration. So they are clearly growing; it's the fourth straight survey that they've grown. The other name we're seeing there a little bit is Cribl, but DBT Labs is by far the number one player in this space. >> All right. Okay, cool. Moving on, let's go to number nine. Automation makes a resurgence in 2023. We're showing, again, data. The x axis is overlap, or presence in the dataset, and the vertical axis is shared net score. Net score is a measure of spending momentum. As always, you've seen UiPath and Microsoft Power Automate up and to the right; that red line, that 40% line, is generally considered elevated. UiPath is really separating, creating some distance from Automation Anywhere; they, you know, in previous quarters they were much closer. Microsoft Power Automate came on the scene in a big way; they loom large with this "good enough" approach. I will say this, somebody sent me the results of a (indistinct) survey, which showed UiPath actually had more mentions than Power Automate, which was surprising, but I think that's not been the case in the ETR data set. We're definitely seeing a shift from back office to front office kind of workloads. Having said that, software testing is emerging as a mainstream use case, we're seeing ML and AI become embedded in end-to-end automations, and low-code is serving the line of business. And so this, we think, is going to increasingly have appeal to organizations in the coming year, who want to automate as much as possible and not necessarily, we've seen a lot of layoffs in tech, and people... You're going to have to fill the gaps with automation. That's a trend that's going to continue. >> Yep, agreed. At first, that comment about Microsoft Power Automate having fewer citations than UiPath, that's shocking to me. I'm looking at my chart right here, where Microsoft Power Automate was cited by over 60% of our entire survey takers, and UiPath at around 38%. Now don't get me wrong, 38% pervasion's fantastic, but you know you're not going to beat an entrenched Microsoft. So I don't really know where that comment came from. So UiPath, looking at it alone, it's doing incredibly well. It had a huge rebound in its net score this last survey. It had dropped going through the back half of 2022, but we saw a big spike in the last one. So it's got a net score of over 55%.
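Since net score is doing a lot of work in this discussion as the spending-momentum measure, here is a small sketch of how a net-score-style metric can be computed from survey responses. The response categories and the simple "percent positive minus percent negative" weighting are assumptions for illustration; ETR's published methodology should be treated as the authoritative definition, and the data below is invented.

# Illustrative net-score-style calculation; category names and counts are invented.
from collections import Counter

# One response per surveyed account for a given vendor.
responses = (
    ["adopting new"] * 12 + ["increasing spend"] * 48 +
    ["flat spend"] * 30 + ["decreasing spend"] * 7 + ["replacing"] * 3
)

counts = Counter(responses)
total = sum(counts.values())
positive = counts["adopting new"] + counts["increasing spend"]
negative = counts["decreasing spend"] + counts["replacing"]

net_score = 100 * (positive - negative) / total  # percent positive minus percent negative
print(f"net score: {net_score:.1f}% from {total} citations")
# Pervasion (market presence) would be these citations divided by all survey respondents.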
A lot of people citing adoption and increasing. So that's really what you want to see for a name like this. The problem is just that Microsoft is doing its playbook. At the end of the day, I'm going to do a POC; why am I going to pay more for UiPath, or even take on another separate bill, when we know everyone's consolidating vendors, if my license already includes Microsoft Power Automate? It might not be perfect, it might not be as good, but what I'm hearing all the time is it's good enough, and I really don't want another invoice. >> Right. So how does UiPath, you know, and Automation Anywhere, how do they compete with that? Well, the way they compete with it is they got to have a better product. They got to have a product that's 10 times better. You know, they- >> Right. >> they're not going to compete based on being the lowest cost, Microsoft's got that locked up, or being the easiest to use; you know, Microsoft basically gives it away for free, and that's their playbook. So that's, you know, up to UiPath. UiPath brought on Rob Enslin, I've interviewed him. A very, very capable individual, who is now Co-CEO. So he's kind of bringing that adult supervision in, and really tightening up the go-to-market. So, you know, we know this company has been a rocket ship, and so getting some control on that and really getting focused like a laser, you know, could be good things ahead there for that company. Okay. >> One of the problems, if I could real quick, Dave, is what the use cases are. When we first came out with RPA, everyone was super excited about like, "No, UiPath is going to be great for super powerful projects, use cases." That's not what RPA is being used for. As you mentioned, it's being used for mundane tasks, so it's not automating complex things, which I think UiPath was built for. So if you were going to get UiPath, and choose that over Microsoft, it's going to be 'cause you're doing it for a more powerful use case, where it is better. But the problem is that's not where the enterprise is using it. Enterprises are using this for basic, rote tasks, and simply, Microsoft Power Automate can do that. >> Yeah, it's interesting. I've had people on theCUBE that are both Microsoft Power Automate customers and UiPath customers, and I've asked them, "Well you know, how do you differentiate between the two?" And they've said to me, "Look, our users and personal productivity users, they like Power Automate, they can use it themselves, and you know, it doesn't take a lot of, you know, support on our end." The flip side is you could do that with UiPath, but like you said, there's more of a focus now on end-to-end enterprise automation and building out those capabilities. So it's increasingly a value play, and that's going to be obviously the challenge going forward. Okay, my last one, and then I think you've got some bonus ones. Number 10, hybrid events are the new category. Look, if I can get a thousand inbounds that are largely self-serving, I can do my own here, 'cause we're in the events business. (Eric chuckling) Here's the prediction though, and this is a trend we're seeing: the number of physical events is going to dramatically increase. That might surprise people, but most of the big giant events are going to get smaller. The exception is AWS with Reinvent; I think Snowflake's going to continue to grow.
So there are examples of physical events that are growing, but generally, most of the big ones are getting smaller, and there's going to be many more smaller, intimate regional events and road shows. These micro-events, they're going to be stitched together. Digital is becoming a first-class citizen, so people really got to get their digital acts together, and brands are prioritizing earned media, and they're beginning to build their own news networks, going direct to their customers. And so that's a trend we see, and I, you know, we're right in the middle of it, Eric, so you know we're going to, you mentioned RSA, I think that's perhaps going to be one of those crazy ones that continues to grow. It shrank, and then it, you know, 'cause last year- >> Yeah, it did shrink. >> Right, it was the last one before the pandemic, and then they sort of made another run at it last year. It was smaller but it was very vibrant, and I think this year's going to be huge. Mobile World Congress is another one; we're going to be there end of Feb. That's obviously a big, big show, but in general, the brands and the technology vendors, even Oracle, are going to scale down. I don't know about Salesforce. We'll see. You had a couple of bonus predictions. Quantum and maybe some others? Bring us home. >> Yeah, sure. I got a few more. I think we touched upon one, but I definitely think the data prep tools are facing extinction, unfortunately, you know; the Talends, Informatica, are some of those names. The problem there is that the BI tools are kind of including data prep in them already. You know, an example of that is Tableau Prep Builder, and then in addition, advanced NLP is being worked in as well. ThoughtSpot, Tellius, both often say that as their selling point. Tableau has Ask Data, Qlik has Insight Bot, so you don't have to really be intelligent on data prep anymore. A regular business user can just self-query, using either the search bar, or even just speaking in what it needs, and these tools are kind of doing the data prep for it. I don't think that's, you know, an out-in-left-field type of prediction, but the time is nigh. The other one I would also state is that I think knowledge graphs are going to break through this year. Neo4j in our survey is growing in pervasion and mindshare. So more and more people are citing it, AWS Neptune's getting its act together, and we're seeing that spending intentions are growing there. TigerGraph is also growing in our survey sample. I just think that the time is now for knowledge graphs to break through, and if I had to do one more, I'd say real-time streaming analytics moves from the very, very rich big enterprises downstream, so more people are actually going to be moving towards real-time streaming, again, because the data prep tools and the data pipelines have gotten easier to use, and I think the ROI on real-time streaming is obviously there. So those are three that didn't make the cut, but I thought deserved an honorable mention. >> Yeah, I'm glad you did. Several weeks ago, we did an analyst prediction roundtable, if you will, a CUBE session power panel with a number of data analysts, and that, you know, streaming, real-time streaming was top of mind. So glad you brought that up. Eric, as always, thank you very much. I appreciate the time you put in beforehand. I know it's been crazy, because you guys are wrapping up, you know, the last quarter survey in- >> Been a nuts three weeks for us. (laughing) >> job.
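Stepping back to the knowledge graph prediction above for a moment: the appeal is that multi-hop relationship questions, which get awkward as chains of joins in plain tables, are natural in a graph store. The tiny example below uses the open-source networkx library and invented entities purely to illustrate that kind of traversal; it is not tied to Neo4j, Neptune, or TigerGraph specifically.

# Tiny knowledge-graph sketch with invented entities; requires `pip install networkx`.
import networkx as nx

kg = nx.DiGraph()
# Subject-predicate-object style facts stored as labeled edges.
kg.add_edge("Acme Corp", "VendorDB", predicate="uses")
kg.add_edge("VendorDB", "us-east region", predicate="runs_on")
kg.add_edge("Acme Corp", "Globex", predicate="subsidiary_of")
kg.add_edge("Globex", "StreamKit", predicate="uses")

# Dump the stored facts.
for subj, obj, data in kg.edges(data=True):
    print(subj, data["predicate"], obj)

# Multi-hop question: what is reachable from Acme Corp within two relationships?
reachable = nx.single_source_shortest_path_length(kg, "Acme Corp", cutoff=2)
for node, hops in sorted(reachable.items(), key=lambda kv: kv[1]):
    if hops:
        print(f"{node} ({hops} hop{'s' if hops > 1 else ''} from Acme Corp)")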
I love the fact that you're doing, you know, the ETS survey now. I think it's quarterly now, right? Is that right? >> Yep. >> Yep. So that's phenomenal. >> Four times a year. I'll be happy to jump on with you when we get that done. I know you were really impressed with that last time. >> It's unbelievable. This is so much data at ETR. Okay. Hey, that's a wrap. Thanks again. >> Take care, Dave. Good seeing you. >> All right, many thanks to our team here. Alex Myerson is on production; he manages the podcasts for us. Ken Schiffman as well is a critical component of our East Coast studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hoof is our editor-in-chief. He's at siliconangle.com. He just does great editing for us. Thank you all. Remember, all these episodes are available as podcasts, wherever you listen; the podcast is doing great. Just search "Breaking Analysis podcast." Really appreciate you guys listening. I publish each week on wikibon.com and siliconangle.com, or you can email me directly if you want to get in touch, david.vellante@siliconangle.com. That's how I got all these. I really appreciate it. I went through every single one with a yellow highlighter. It took some time, (laughing) but I appreciate it. You can DM me at @dvellante, or comment on our LinkedIn post, and please check out etr.ai. Its data is amazing. Best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music beginning) (upbeat music ending)

Published Date : Jan 29 2023


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Alex MyersonPERSON

0.99+

EricPERSON

0.99+

Eric BradleyPERSON

0.99+

CiscoORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

Rob HoofPERSON

0.99+

AmazonORGANIZATION

0.99+

OracleORGANIZATION

0.99+

Dave VellantePERSON

0.99+

10QUANTITY

0.99+

Ravi MayuramPERSON

0.99+

Cheryl KnightPERSON

0.99+

George GilbertPERSON

0.99+

Ken SchiffmanPERSON

0.99+

AWSORGANIZATION

0.99+

Tristan HandyPERSON

0.99+

DavePERSON

0.99+

Atif KahnPERSON

0.99+

NovemberDATE

0.99+

Frank SlootmanPERSON

0.99+

APACORGANIZATION

0.99+

ZscalerORGANIZATION

0.99+

PaloORGANIZATION

0.99+

David FoyerPERSON

0.99+

FebruaryDATE

0.99+

January 2023DATE

0.99+

DBT LabsORGANIZATION

0.99+

OctoberDATE

0.99+

Rob EnsslinPERSON

0.99+

Scott StevensonPERSON

0.99+

John FurrierPERSON

0.99+

69%QUANTITY

0.99+

GoogleORGANIZATION

0.99+

CrowdStrikeORGANIZATION

0.99+

4.6%QUANTITY

0.99+

10 timesQUANTITY

0.99+

2023DATE

0.99+

ScottPERSON

0.99+

1,181 responsesQUANTITY

0.99+

Palo AltoORGANIZATION

0.99+

third yearQUANTITY

0.99+

BostonLOCATION

0.99+

AlexPERSON

0.99+

thousandsQUANTITY

0.99+

OneTrustORGANIZATION

0.99+

45%QUANTITY

0.99+

33%QUANTITY

0.99+

DatabricksORGANIZATION

0.99+

two reasonsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

last yearDATE

0.99+

BeyondTrustORGANIZATION

0.99+

7%QUANTITY

0.99+

IBMORGANIZATION

0.99+

Jesse Cugliotta & Nicholas Taylor | The Future of Cloud & Data in Healthcare


 

(upbeat music) >> Welcome back to Supercloud 2. This is Dave Vellante. We're here exploring the intersection of data and analytics in the future of cloud and data. In this segment, we're going to look deeper into the life sciences business with Jesse Cugliotta, who leads the Healthcare and Life Sciences industry practice at Snowflake. And Nicholas "Nick" Taylor, who's the executive director of Informatics at Ionis Pharmaceuticals. Gentlemen, thanks for coming on theCUBE and participating in the program. Really appreciate it. >> Thank you for having us- >> Thanks for having me. >> You're very welcome. Okay, we're going to really try to look at data sharing as a use case and try to understand what's happening in the healthcare industry generally and specifically, how Nick thinks about sharing data in a governed fashion, and whether tapping the capabilities of multiple clouds is advantageous long term or presents more challenges than the effort is worth. And to start, Jesse, you lead this industry practice for Snowflake and it's a challenging and vibrant area. It's one that's hyper-focused on data privacy. So the first question is, you know, there was a time when healthcare and other regulated industries wouldn't go near the cloud. What are you seeing today in the industry around cloud adoption and specifically multi-cloud adoption? >> Yeah, for years I've heard that healthcare and life sciences has been cloud averse, but in spite of all of that, if you look at a lot of aspects of this industry today, they've been running in the cloud for over 10 years now. Particularly when you look at CRM technologies or HR or HCM, even clinical technologies like EDC or eTMF. And it's interesting that you mentioned multi-cloud as well because this has always been an underlying reality, especially within life sciences. This industry grows through acquisition, where companies are looking to boost their future development pipeline either by buying up smaller biotechs that may have a late or a mid-stage promising candidate. And what typically happens is the larger pharma could then use their commercial muscle and their regulatory experience to move it to approvals and into the market. And I think the last few decades of cheap capital certainly accelerated that trend over the last couple of years. But this typically means that these new combined institutions may have technologies that are running on multiple clouds or multiple cloud strategies in various different regions, to your point. And what we've often found is that they're not planning to standardize everything onto a single cloud provider. They're often looking for technologies that embrace this multi-cloud approach and work seamlessly across them. And I think this is a big reason why we, here at Snowflake, have seen such strong momentum and growth across this industry, because healthcare and life sciences has actually been one of our fastest growing sectors over the last couple of years. And a big part of that is in fact that we run on not only all three major cloud providers, but individual accounts within each and any one of them have the ability to communicate and interoperate with one another, like a globally interconnected database. >> Great, thank you for that setup. And so Nick, tell us more about your role and Ionis Pharma please. >> Sure. So I've been at Ionis for around five years now. You know, when I joined, the IT department was pretty small. There wasn't a lot of warehousing, there wasn't a lot of kind of big data there.
We saw an opportunity with Snowflake pretty early on as a provider that would be a lot of benefit for us, you know, 'cause we're small, wanted something that was fairly hands off. You know, I remember the days where you had to get a lot of DBAs in to fine tune your databases, make sure everything was running really, really well. The notion that there's, you know, no indexes to tune, right? There's very few knobs and dials, you can turn on Snowflake. That was appealing that, you know, it just kind of worked. So we found a use case to bring the platform in. We basically used it as a logging replacement as a Splunk kind of replacement with a platform called Elysium Analytics as a way to just get it in the door and give us the opportunity to solve a real world use case, but also to help us start to experiment using Snowflake as a platform. It took us a while to A, get the funding to bring it in, but B, build the momentum behind it. But, you know, as we experimented we added more data in there, we ran a few more experiments, we piloted in few more applications, we really saw the power of the platform and now, we are becoming a commercial organization. And with that comes a lot of major datasets. And so, you know, we really see Snowflake as being a very important part of our ecology going forward to help us build out our infrastructure. >> Okay, and you are running, your group runs on Azure, it's kind of mono cloud, single cloud, but others within Ionis are using other clouds, but you're not currently, you know, collaborating in terms of data sharing. And I wonder if you could talk about how your data needs have evolved over the past decade. I know you came from another highly regulated industry in financial services. So what's changed? You sort of touched on this before, you had these, you know, very specialized individuals who were, you know, DBAs, and, you know, could tune databases and the like, so that's evolved, but how has generally your needs evolved? Just kind of make an observation over the last, you know, five or seven years. What have you seen? >> Well, we, I wasn't in a group that did a lot of warehousing. It was more like online trade capture, but, you know, it was very much on-prem. You know, being in the cloud is very much a dirty word back then. I know that's changed since I've left. But in, you know, we had major, major teams of everyone who could do everything, right. As I mentioned in the pharma organization, there's a lot fewer of us. So the data needs there are very different, right? It's, we have a lot of SaaS applications. One of the difficulties with bringing a lot of SaaS applications on board is obviously data integration. So making sure the data is the same between them. But one of the big problems is joining the data across those SaaS applications. So one of the benefits, one of the things that we use Snowflake for is to basically take data out of these SaaS applications and load them into a warehouse so we can do those joins. So we use technologies like Boomi, we use technologies like Fivetran, like DBT to bring this data all into one place and start to kind of join that basically, allow us to do, run experiments, do analysis, basically take better, find better use for our data that was siloed in the past. You mentioned- >> Yeah. And just to add on to Nick's point there. >> Go ahead. 
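As an aside for readers who want to picture the pattern Nick describes, landing extracts from each SaaS application in the warehouse and joining them there, here is a minimal sketch using the Snowflake Python connector. The account, warehouse, schemas, tables, and the join key are all hypothetical stand-ins; in practice the join logic would more likely live in a dbt model than an ad hoc script.

```python
# A hypothetical sketch of joining two SaaS extracts after a Fivetran-style
# load into Snowflake. All names below are illustrative, not Ionis's schema.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="analyst",              # hypothetical user
    password="***",
    warehouse="ANALYTICS_WH",
    database="RAW",
)

JOIN_SQL = """
SELECT c.account_id,
       c.account_name,
       COUNT(t.ticket_id) AS open_tickets
FROM   raw.crm.accounts       AS c           -- landed by one connector
LEFT JOIN raw.support.tickets AS t           -- landed by another connector
       ON  t.account_email = c.primary_email -- the cross-SaaS join key
       AND t.status = 'open'
GROUP BY c.account_id, c.account_name
ORDER BY open_tickets DESC
"""

cur = conn.cursor()
try:
    cur.execute(JOIN_SQL)
    for account_id, account_name, open_tickets in cur.fetchall():
        print(account_id, account_name, open_tickets)
finally:
    cur.close()
    conn.close()
```

The design point is simply that once both extracts sit in the same warehouse, a cross-application question becomes one SQL join instead of a file-matching exercise.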
>> That's actually something very common that we're seeing across the industry is because a lot of these SaaS applications that you mentioned, Nick, they're with from vendors that are trying to build their own ecosystem in walled garden. And by definition, many of them do not want to integrate with one another. So from a, you know, from a data platform vendor's perspective, we see this as a huge opportunity to help organizations like Ionis and others kind of deal with the challenges that Nick is speaking about because if the individual platform vendors are never going to make that part of their strategy, we see it as a great way to add additional value to these customers. >> Well, this data sharing thing is interesting. There's a lot of walled gardens out there. Oracle is a walled garden, AWS in many ways is a walled garden. You know, Microsoft has its walled garden. You could argue Snowflake is a walled garden. But the, what we're seeing and the whole reason behind the notion of super-cloud is we're creating an abstraction layer where you actually, in this case for this use case, can share data in a governed manner. Let's forget about the cross-cloud for a moment. I'll come back to that, but I wonder, Nick, if you could talk about how you are sharing data, again, Snowflake sort of, it's, I look at Snowflake like the app store, Apple, we're going to control everything, we're going to guarantee with data clean rooms and governance and the standards that we've created within that platform, we're going to make sure that it's safe for you to share data in this highly regulated industry. Are you doing that today? And take us through, you know, the considerations that you have in that regard. >> So it's kind of early days for us in Snowflake in general, but certainly in data sharing, we have a couple of examples. So data marketplace, you know, that's a great invention. It's, I've been a small IT shop again, right? The fact that we are able to just bring down terabyte size datasets straight into our Snowflake and run analytics directly on that is huge, right? The fact that we don't have to FTP these massive files around run jobs that may break, being able to just have that on tap is huge for us. We've recently been talking to one of our CRO feeds- CRO organizations about getting their data feeds in. Historically, this clinical trial data that comes in on an FTP file, we have to process it, take it through the platforms, put it into the warehouse. But one of the CROs that we talked to recently when we were reinvestigate in what data opportunities they have, they were a Snowflake customer and we are, I think, the first production customer they have, have taken that feed. So they're basically exposing their tables of data that historically came in these FTP files directly into our Snowflake instance now. We haven't taken advantage of that. It only actually flipped the switch about three or four weeks ago. But that's pretty big for us again, right? We don't have to worry about maintaining those jobs that take those files in. We don't have to worry about the jobs that take those and shove them on the warehouse. We now have a feed that's directly there that we can use a tool like DBT to push through directly into our model. And then the third avenue that's came up, actually fairly recently as well was genetics data. So genetics data that's highly, highly regulated. We had to be very careful with that. 
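For context on the CRO feed Nick mentions, a provider-side share in Snowflake amounts to a handful of grants rather than a file transfer, and the consumer then mounts the share as a read-only database. The sketch below is a hypothetical illustration with invented database, schema, table, share, and account names; it is not the actual CRO setup.

```python
# Hypothetical sketch of a Snowflake direct share (provider and consumer
# sides). Object and account names are invented for illustration.
import snowflake.connector

PROVIDER_SIDE = [
    "CREATE SHARE IF NOT EXISTS clinical_share",
    "GRANT USAGE ON DATABASE clinical TO SHARE clinical_share",
    "GRANT USAGE ON SCHEMA clinical.trial_feed TO SHARE clinical_share",
    "GRANT SELECT ON TABLE clinical.trial_feed.site_visits TO SHARE clinical_share",
    "ALTER SHARE clinical_share ADD ACCOUNTS = partner_org.partner_account",
]

CONSUMER_SIDE = [
    # Run in the consumer's own account: mount the share as a read-only database.
    "CREATE DATABASE clinical_feed FROM SHARE provider_account.clinical_share",
]

def run_statements(conn, statements):
    cur = conn.cursor()
    try:
        for stmt in statements:
            cur.execute(stmt)
    finally:
        cur.close()

if __name__ == "__main__":
    provider = snowflake.connector.connect(
        account="provider_account", user="admin", password="***"  # hypothetical
    )
    run_statements(provider, PROVIDER_SIDE)
    provider.close()
```

Because the consumer queries the provider's tables in place, there is no extract job to maintain and no stale copy, which is the point Jesse expands on next.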
And we had a conversation with Snowflake about the data clean rooms practice, and we see that as a pretty interesting opportunity. We have one organization running genetic analysis that's able to send us those genetic datasets, but then there's another organization that actually has the, in quotes, "metadata" around that, so age, ethnicity, location, et cetera. And being able to join those two datasets through some kind of mechanism would be really beneficial to the organization. Being able to build a data clean room so we can put that genetic data in a secure place, anonymize it, and then share the amalgamated data back out in a way that's able to be joined to the anonymized metadata, that could be pretty huge for us as well. >> Okay, so this is interesting. So you talk about FTP, which was the common way to share data. And so you basically, it's so, I got it, now you take it and do whatever you want with it. Now we're talking, Jesse, about sharing the same copy of live data. How common is that use case in your industry? >> It's become very common over the last couple of years. And I think a big part of it is having the right technology to do it effectively. You know, as Nick mentioned, historically, this was done by people sending files around. And the challenge with that approach, of course, while there are multiple challenges, one, every time you send a file around you're, by definition, creating a copy of the data, because you have to pull it out of your system of record, put it into a file, put it on some server where somebody else picks it up. And by definition at that point you've lost governance. So this creates challenges and general hesitation to doing so. It's not that it hasn't happened, but the other challenge with it is that the data's no longer real time. You know, you're working with a copy of data that was only as fresh as the time when it was actually extracted. And that creates limitations in terms of how effective this can be. What we're starting to see now with some of our customers is live sharing of information. And there's two aspects of that that are important. One is that you're not actually physically creating the copy and sending it to someone else, you're actually exposing it from where it exists and allowing another consumer to interact with it from their own account, which could be in another region or even running in another cloud. So this concept of super-cloud or cross-cloud is becoming realized here. But the other important aspect of it is that when that other entity is querying your data, they're seeing it in a real time state. And this is particularly important when you think about use cases like supply chain planning, where you're leveraging data across various different enterprises. If I'm a manufacturer or if I'm a contract manufacturer and I can see the actual inventory positions of my clients, of my distributors, of the levels of consumption at the pharmacy or the hospital, that gives me a lot of indication as to how my demand profile is changing over time versus working with a static picture that may have been from three weeks ago. And this has become incredibly important as supply chains are becoming more constrained and the ability to plan accurately has never been more important. >> Yeah. So the race is on to solve these problems. So we started with, hey, okay, cloud, we're going to simplify database, we're going to put it in the cloud, give virtually infinite resources, separate compute from storage.
Okay, check, we got that. Now we've moved into sort of data clean rooms and governance and you've got an ecosystem that's forming around this to make it safer to share data. And then, you know, nirvana, at least near term nirvana is we're going to build data applications and we're going to be able to share live data and then you start to get into monetization. Do you see, Nick, in the near future where I know you've got relationships with, for instance, big pharma like AstraZeneca, do you see a situation where you start sharing data with them? Is that in the near term? Is that more long term? What are the considerations in that regard? >> I mean, it's something we've been thinking about. We haven't actually addressed that yet. Yeah, I could see situations where, you know, some of these big relationships where we do need to share a lot of data, it would be very nice to be able to just flick a switch and share our data assets across to those organizations. But, you know, that's a ways off for us now. We're mainly looking at bringing data in at the moment. >> One of the things that we've seen in financial services in particular, and Jesse, I'd love to get your thoughts on this, is companies like Goldman or Capital One or Nasdaq taking their stack, their software, their tooling actually putting it on the cloud and facing it to their customers and selling that as a new monetization vector as part of their digital or business transformation. Are you seeing that Jesse at all in healthcare or is it happening today or do you see a day when that happens or is healthier or just too scary to do that? >> No, we're seeing the early stages of this as well. And I think it's for some of the reasons we talked about earlier. You know, it's a much more secure way to work with a colleague if you don't have to copy your data and potentially expose it. And some of the reasons that people have historically copied that data is that they needed to leverage some sort of algorithm or application that a third party was providing. So maybe someone was predicting the ideal location and run a clinical trial for this particular rare disease category where there are only so many patients around the world that may actually be candidates for this disease. So you have to pick the ideal location. Well, sending the dataset to do so, you know, would involve a fairly complicated process similar to what Nick was mentioning earlier. If the company who was providing the logic or the algorithm to determine that location could bring that algorithm to you and you run it against your own data, that's a much more ideal and a much safer and more secure way for this industry to actually start to work with some of these partners and vendors. And that's one of the things that we're looking to enable going into this year is that, you know, the whole concept should be bring the logic to your data versus your data to the logic and the underlying sharing mechanisms that we've spoken about are actually what are powering that today. >> And so thank you for that, Jesse. >> Yes, Dave. >> And so Nick- Go ahead please. >> Yeah, if I could add, yeah, if I could add to that, that's something certainly we've been thinking about. In fact, we'd started talking to Snowflake about that a couple of years ago. We saw the power there again of the platform to be able to say, well, could we, we were thinking in more of a data share, but could we share our data out to say an AI/ML vendor, have them do the analytics and then share the data, the results back to us. 
Now, you know, there's more powerful mechanisms to do that within the Snowflake ecosystem now, but you know, we probably wouldn't need to have onsite AI/ML people, right? Some of that stuff's very sophisticated, expensive resources, hard to find, you know, it's much better for us to find a company that would be able to build those analytics, maintain those analytics for us. And you know, we saw an opportunity to do that a couple years ago and we're kind of excited about the opportunity there that we can just basically do it with a no op, right? We share the data route, we have the analytics done, we get the result back and it's just fairly seamless. >> I mean, I could have a whole another Cube session on this, guys, but I mean, I just did a a session with Andy Thurai, a Constellation research about how difficult it's been for organization to get ROI because they don't have the expertise in house so they want to either outsource it or rely on vendor R&D companies to inject that AI and machine intelligence directly into applications. My follow-up question to you Nick is, when you think about, 'cause Jesse was talking about, you know, let the data basically stay where it is and you know bring the compute to that data. If that data lives on different clouds, and maybe it's not your group, but maybe it's other parts of Ionis or maybe it's your partners like AstraZeneca, or you know, the AI/ML partners and they're potentially on other clouds or that data is on other clouds. Do you see that, again, coming back to super-cloud, do you see it as an advantage to be able to have a consistent experience across those clouds? Or is that just kind of get in the way and make things more complex? What's your take on that, Nick? >> Well, from the vendors, so from the client side, it's kind of seamless with Snowflake for us. So we know for a fact that one of the datasets we have at the moment, Compile, which is a, the large multi terabyte dataset I was talking about. They're on AWS on the East Coast and we are on Azure on the West Coast. And they had to do a few tweaks in the background to make sure the data was pushed over from, but from my point of view, the data just exists, right? So for me, I think it's hugely beneficial that Snowflake supports this kind of infrastructure, right? We don't have to jump through hoops to like, okay, well, we'll download it here and then re-upload it here. They already have the mechanism in the background to do these multi-cloud shares. So it's not important for us internally at the moment. I could see potentially at some point where we start linking across different groups in the organization that do have maybe Amazon or Google Cloud, but certainly within our providers. We know for a fact that they're on different services at the moment and it just works. >> Yeah, and we learned from Benoit Dageville, who came into the studio on August 9th with first Supercloud in 2022 that Snowflake uses a single global instance across regions and across clouds, yeah, whether or not you can query across you know, big regions, it just depends, right? It depends on latency. You might have to make a copy or maybe do some tweaks in the background. But guys, we got to jump, I really appreciate your time. Really thoughtful discussion on the future of data and cloud, specifically within healthcare and pharma. Thank you for your time. >> Thanks- >> Thanks for having us. >> All right, this is Dave Vellante for theCUBE team and my co-host, John Furrier. 
Keep it right there for more action at Supercloud 2. (upbeat music)
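As a footnote to the clean room discussion in this segment, the join Nick describes, genetic results from one party matched to demographic metadata from another without exposing raw identifiers, can be pictured as both sides agreeing on a keyed hash of the patient identifier. The toy sketch below uses invented data, field names, and a shared salt standing in for whatever the governed clean room environment would actually enforce; it illustrates the idea rather than Snowflake's clean room feature itself.

```python
# Toy illustration of a clean-room-style join: both parties pseudonymize the
# patient identifier the same way, so records can be matched without either
# side sharing the raw ID. All data, names, and the salt are invented.
import hashlib

SHARED_SALT = b"agreed-inside-the-clean-room"   # hypothetical shared secret

def pseudonymize(patient_id: str) -> str:
    return hashlib.sha256(SHARED_SALT + patient_id.encode("utf-8")).hexdigest()

# Party A: genetic analysis results, keyed by pseudonym only.
genetic_results = {
    pseudonymize("PATIENT-001"): {"variant": "SOD1 p.A5V", "read_depth": 42},
    pseudonymize("PATIENT-002"): {"variant": "C9orf72 expansion", "read_depth": 37},
}

# Party B: demographic "metadata", keyed the same way.
demographics = {
    pseudonymize("PATIENT-001"): {"age_band": "40-49", "region": "US-West"},
    pseudonymize("PATIENT-002"): {"age_band": "60-69", "region": "EU"},
}

# The join happens on the pseudonym, never on the raw identifier.
for key, genetics in genetic_results.items():
    meta = demographics.get(key)
    if meta:
        print({**genetics, **meta})
```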

Published Date : Jan 3 2023

Veronika Durgin, Saks | The Future of Cloud & Data


 

(upbeat music) >> Welcome back to Supercloud 2, an open collaborative where we explore the future of cloud and data. Now, you might recall last August at the inaugural Supercloud event we validated the technical feasibility and tried to further define the essential technical characteristics, and of course the deployment models of so-called supercloud. That is, sets of services that leverage the underlying primitives of hyperscale clouds, but are creating new value on top of those clouds for organizations at scale. So we're talking about capabilities that fundamentally weren't practical or even possible prior to the ascendancy of the public clouds. And so today at Supercloud 2, we're digging further into the topic with input from real-world practitioners. And we're exploring the intersection of data and cloud, And importantly, the realities and challenges of deploying technology for a new business capability. I'm pleased to have with me in our studios, west of Boston, Veronika Durgin, who's the head of data at Saks. Veronika, welcome. Great to see you. Thanks for coming on. >> Thank you so much. Thank you for having me. So excited to be here. >> And so we have to say upfront, you're here, these are your opinions. You're not representing Saks in any way. So we appreciate you sharing your depth of knowledge with us. >> Thank you, Dave. Yeah, I've been doing data for a while. I try not to say how long anymore. It's been a while. But yeah, thank you for having me. >> Yeah, you're welcome. I mean, one of the highlights of this past year for me was hanging out at the airport with you after the Snowflake Summit. And we were just chatting about sort of data mesh, and you were saying, "Yeah, but." There was a yeah, but. You were saying there's some practical realities of actually implementing these things. So I want to get into some of that. And I guess starting from a perspective of how data has changed, you've seen a lot of the waves. I mean, even if we go back to pre-Hadoop, you know, that would shove everything into an Oracle database, or, you know, Hadoop was going to save our data lives. And the cloud came along and, you know, that was kind of a disruptive force. And, you know, now we see things like, whether it's Snowflake or Databricks or these other platforms on top of the clouds. How have you observed the change in data and the evolution over time? >> Yeah, so I started as a DBA in the data center, kind of like, you know, growing up trying to manage whatever, you know, physical limitations a server could give us. So we had to be very careful of what we put in our database because we were limited. We, you know, purchased that piece of hardware, and we had to use it for the next, I don't know, three to five years. So it was only, you know, we focused on only the most important critical things. We couldn't keep too much data. We had to be super efficient. We couldn't add additional functionality. And then Hadoop came along, which is like, great, we can dump all the data there, but then we couldn't get data out of it. So it was like, okay, great. Doesn't help either. And then the cloud came along, which was incredible. I was probably the most excited person. I'm lying, but I was super excited because I no longer had to worry about what I can actually put in my database. Now I have that, you know, scalability and flexibility with the cloud. So okay, great, that data's there, and I can also easily get it out of it, which is really incredible. 
>> Well, but so, I'm inferring from what you're saying with Hadoop, it was like, okay, no schema on write. And then you got to try to make sense out of it. But so what changed with the cloud? What was different? >> So I'll tell a funny story. I actually successfully avoided Hadoop. The only time- >> Congratulations. >> (laughs) I know, I'm like super proud of it. I don't know how that happened, but the only time I worked for a company that had Hadoop, all I remember is that they were running jobs that were taking over 24 hours to get data out of it. And they were realizing that, you know, dumping data without any structure into this massive thing that required, you know, really skilled engineers wasn't really helpful. So what changed, and I'm kind of thinking of like, kind of like how Snowflake started, right? They were marketing themselves as a data warehouse. For me, moving from SQL Server to Snowflake was a non-event. It was comfortable, I knew what it was, I knew how to get data out of it. And I think that's the important part, right? Cloud, this like, kind of like, vague, high-level thing, magical, but the reality is cloud is the same as what we had on prem. So it's comfortable there. It's not scary. You don't need super new additional skills to use it. >> But you're saying what's different is the scale. So you can throw resources at it. You don't have to worry about depreciating your hardware over three to five years. Hey, I have an asset that I have to take advantage of. Is that the big difference? >> Absolutely. Actually, from kind of like operational perspective, which it's funny. Like, I don't have to worry about it. I use what I need when I need it. And not to take this completely in the opposite direction, people stop thinking about using things in a very smart way, right? You like, scale and you walk away. And then, you know, the cool thing about cloud is it's scalable, but you also should not use it when you don't need it. >> So what about this idea of multicloud. You know, supercloud sort of tries to go beyond multicloud. it's like multicloud by accident. And now, you know, whether it's M&A or, you know, some Skunkworks is do, hey, I like Google's tools, so I'm going to use Google. And then people like you are called on to, hey, how do we clean up this mess? And you know, you and I, at the airport, we were talking about data mesh. And I love the concept. Like, doesn't matter if it's a data lake or a data warehouse or a data hub or an S3 bucket. It's just a node on the mesh. But then, of course, you've got to govern it. You've got to give people self-serve. But this multicloud is a reality. So from your perspective, from a practitioner's perspective, what are the advantages of multicloud? We talk about the disadvantages all the time. Kind of get that, but what are the advantages? >> So I think the first thing when I think multicloud, I actually think high-availability disaster recovery. And maybe it's just how I grew up in the data center, right? We were always worried that if something happened in one area, we want to make sure that we can bring business up very quickly. So to me that's kind of like where multicloud comes to mind because, you know, you put your data, your applications, let's pick on AWS for a second and, you know, US East in AWS, which is the busiest kind of like area that they have. If it goes down, for my business to continue, I would probably want to move it to, say, Azure, hypothetically speaking, again, or Google, whatever that is. 
So to me, and probably again based on my background, disaster recovery high availability comes to mind as multicloud first, but now the other part of it is that there are, you know, companies and tools and applications that are being built in, you know, pick your cloud. How do we talk to each other? And more importantly, how do we data share? You know, I work with data. You know, this is what I do. So if, you know, I want to get data from a company that's using, say, Google, how do we share it in a smooth way where it doesn't have to be this crazy, I don't know, SFTP file moving. So that's where I think supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network that we can easily share data with each other? >> So you kind of answered my next question, is do you see use cases going beyond H? I mean, the HADR was, remember, that was the original cloud use case. That and bursting, you know, for, you know, Thanksgiving or, you know, for Black Friday. So you see an opportunity to go beyond that with practical use cases. >> Absolutely. I think, you know, we're getting to a world where every company is a data company. We all collect a lot of data. We want to use it for whatever that is. It doesn't necessarily mean sell it, but use it to our competitive advantage. So how do we do it in a very smooth, easy way, which opens additional opportunities for companies? >> You mentioned data sharing. And that's obviously, you know, I met you at Snowflake Summit. That's a big thing of Snowflake's. And of course, you've got Databricks trying to do similar things with open technology. What do you see as the trade-offs there? Because Snowflake, you got to come into their party, you're in their world, and you're kind of locked into that world. Now they're trying to open up. You know, and of course, Databricks, they don't know our world is wide open. Well, we know what that means, you know. The governance. And so now you're seeing, you saw Amazon come out with data clean rooms, which was, you know, that was a good idea that Snowflake had several years before. It's good. It's good validation. So how do you think about the trade-offs between kind of openness and freedom versus control? Is the latter just far more important? >> I'll tell you it depends, right? It's kind of like- >> Could be insulting to that. >> Yeah, I know. It depends because I don't know the answer. It depends, I think, because on the use case and application, ultimately every company wants to make money. That's the beauty of our like, capitalistic economy, right? We're driven 'cause we want to make money. But from the use, you know, how do I sell a product to somebody who's in Google if I am in AWS, right? It's like, we're limiting ourselves if we just do one cloud. But again, it's difficult because at the same time, every cloud provider wants for you to be locked in their cloud, which is why probably, you know, whoever has now data sharing because they want you to stay within their ecosystem. But then again, like, companies are limited. You know, there are applications that are starting to be built on top of clouds. How do we ensure that, you know, I can use that application regardless what cloud, you know, my company is using or I just happen to like. >> You know, and it's true they want you to stay in their ecosystem 'cause they'll make more money. But as well, you think about Apple, right? Does Apple do it 'cause they can make more money? Yes, but it's also they have more control, right? 
Am I correct that technically it's going to be easier to govern that data if it's all the sort of same standard, right? >> Absolutely. 100%. I didn't answer that question. You have to govern and you have to control. And honestly, it's like it's not like a nice-to-have anymore. There are compliances. There are legal compliances around data. Everybody at some point wants to ensure that, you know, and as a person, quite honestly, you know, not to be, you know, I don't like when my data's used when I don't know how. Like, it's a little creepy, right? So we have to come up with standards around that. But then I also go back in the day. EDI, right? Electronic data interchange. That was figured out. There was standards. Companies were sending data to each other. It was pretty standard. So I don't know. Like, we'll get there. >> Yeah, so I was going to ask you, do you see a day where open standards actually emerge to enable that? And then isn't that the great disruptor to sort of kind of the proprietary stack? >> I think so. I think for us to smoothly exchange data across, you know, various systems, various applications, we'll have to agree to have standards. >> From a developer perspective, you know, back to the sort of supercloud concept, one of the the components of the essential characteristics is you've got this PaaS layer that provides consistency across clouds, and it has unique attributes specific to the purpose of that supercloud. So in the instance of Snowflake, it's data sharing. In the case of, you know, VMware, it might be, you know, infrastructure or self-serve infrastructure that's consistent. From a developer perspective, what do you hear from developers in terms of what they want? Are we close to getting that across clouds? >> I think developers always want freedom and ability to engineer. And oftentimes it's not, (laughs) you know, just as an engineer, I always want to build something, and it's not always for the, to use a specific, you know, it's something I want to do versus what is actually applicable. I think we'll land there, but not because we are, you know, out of the kindness of our own hearts. I think as a necessity we will have to agree to standards, and that that'll like, move the needle. Yeah. >> What are the limitations that you see of cloud and this notion of, you know, even cross cloud, right? I mean, this one cloud can't do it all. You know, but what do you see as the limitations of clouds? >> I mean, it's funny, I always think, you know, again, kind of probably my background, I grew up in the data center. We were physically limited by space, right? That there's like, you can only put, you know, so many servers in the rack and, you know, so many racks in the data center, and then you run out space. Earth has a limited space, right? And we have so many data centers, and everybody's collecting a lot of data that we actually want to use. We're not just collecting for the sake of collecting it anymore. We truly can't take advantage of it because servers have enough power, right, to crank through it. We will run enough space. So how do we balance that? How do we balance that data across all the various data centers? And I know I'm like, kind of maybe talking crazy, but until we figure out how to build a data center on the Moon, right, like, we will have to figure out how to take advantage of all the compute capacity that we have across the world. >> And where does latency fit in? I mean, is it as much of a problem as people sort of think it is? Maybe it depends too. 
It depends on the use case. But do multiple clouds help solve that problem? Because, you know, even AWS, an $80 billion company, they're huge, but they're not everywhere. You know, they're doing Local Zones, they're doing Outposts, which is, you know, less functional than their full cloud. So maybe I would choose to go to another cloud. And if I could have that common experience, that's an advantage, isn't it? >> 100%, absolutely. And potentially there's some maybe pricing tiers, right? So we're talking about latency. And again, it depends on your situation. You know, if you have some sort of medical equipment that is very latency sensitive, you want to make sure that data lives there. But versus, you know, I browse on a website. If the website takes a second versus two seconds to load, do I care? Not exactly. Like, I don't notice that. So we can reshuffle that in a smart way. And I keep thinking of Waze. If we had Waze for data, where it's kind of like, oh, you are stuck in traffic, go this way. You know, reshuffle you through that data center. You know, maybe your data will live there. So I think it's totally possible. I know, it's a little crazy. >> No, I like it, though. But remember when you first found Waze, you're like, "Oh, this is awesome." And then now it's like- >> And it's like crowdsourcing, right? Like, it's smart. Like, okay, maybe, you know, going to pick on US East for Amazon for a little bit, their oldest, but also busiest data center that, you know, periodically goes down. >> But then you lose your competitive advantage 'cause now it's like traffic socialism. >> Yeah, I know. >> Right? It happened the other day where everybody's going this one way, and there's all the Wazers taking it. >> And also again, compliance, right? Every country is going down the path of where, you know, data needs to reside within that country. So it's not as, like, socialist or democratic as we wish for it to be. >> Well, that's a great point. I mean, when you just think about the clouds, the limitation, now you go out to the edge. I mean, everybody talks about the edge in IoT. Do you actually think that there's like a whole new stovepipe that's going to get created? And does that concern you, or do you think it actually is going to be, you know, connective tissue with all these clouds?
We're already seeing it, but how prevalent do you think it will be that companies, you've seen some of it in financial services, but companies begin to now take their own data, their own tooling, their own software, which they've developed internally, and point that to the outside world? Kind of do what AWS did. You know, working backwards from the customer and saying, "Hey, we did this for ourselves. We can now do this for the rest of the world." Do you see that as a real trend, or is that Dave's pie in the sky? >> I think it's a real trend. Every company's trying to reinvent themselves and come up with new products. And every company is a data company. Every company collects data, and they're trying to figure out what to do with it. And again, it's not necessarily to sell it. Like, you don't have to sell data to monetize it. You can use it with your partners. You can exchange data. You know, you can create products. Capital One I think created a product for Snowflake pricing. I don't recall, but it just, you know, they built it for themselves, and they decided to kind of like, monetize on it. And I'm absolutely 100% on board with that. I think it's an amazing idea. >> Yeah, Goldman is another example. Nasdaq is basically taking their exchange stack and selling it around the world. And the cloud is available to do that. You don't have to build your own data center. >> Absolutely. Or for good, right? Like, we're talking about, again, we live in a capitalist country, but use data for good. We're collecting data. We're, you know, analyzing it, we're aggregating it. How can we use it for greater good for the planet? >> Veronika, thanks so much for coming to our Marlborough studios. Always a pleasure talking to you. >> Thank you so much for having me. >> You're really welcome. All right, stay tuned for more great content. From Supercloud 2, this is Dave Vellante. We'll be right back. (upbeat music)

Published Date : Dec 27 2022

John Purcell, DoiT International & Danislav Penev, INFINOX Global | AWS re:Invent 2022


 

>> Hello friends and welcome back to fabulous Las Vegas, Nevada, where we are live from the show floor at AWS re:Invent. My name is Savannah Peterson, joined by my fabulous co-host John Furrier. John, how was your lunch? >> My lunch was great. Wasn't very complex like it is today, so it was very easy. >> Appropriate for the conversation we're about >> To have. Great, great guests coming up, CUBE alumni, and great question around complexity and how is wellbeing, teams be good? >> Yes. And, and on that note, let's welcome John from DoiT as well as Danny from INFINOX. I swear I'll be able to say that right by the end of this. Thank you guys so much for being here. How's the show going for you? >> Excellent so far. It's been a great, a great event. You know, back to pre-COVID days. >> You're still smiling day three. That's an awesome sign. John, what about you? >> Fantastic. It's, it's been busier than ever. >> That's exciting. I, I think we certainly feel that way here on theCUBE. We're doing dozens of videos, it's absolutely awesome. Just in case, so we can dig in a little deeper throughout the rest of the segment, just in case the audience isn't familiar, let's get them acquainted with your companies. Let's start with DoiT, John. >> Yeah, thanks Savannah. So DoiT is a global technology company and we're partnering with the leading cloud providers around the world and digital native companies to provide value and solve complexity. John, to your, to your introductory point, with all of the complexities associated with operating in the cloud, scaling a business in the cloud, a lot of companies are just looking to sort of have somebody else take care of that problem for them, or have somebody they can call when they run into, you know, into problems scaling. And so with a combination of tech, advanced technology, some of the best cloud experts in the world and unlimited tech support, we're offloading a lot of those problems for our customers and we're doing that on a global basis. So it's, it's an exciting time. >> I can imagine pretty much everyone here on the show floor is dealing with that challenge of complexity. So a couple customers for you in the house. What about you, Danny? >> I, I come from a company which operates in the financial industry market. So we're essentially a global broker, a financial trading broker. What this means, for those people who don't really understand, is essentially we allow clients to be able to trade digitally and speculate with different pricing, pricing tools online. We offer different products for different types of clients. We have institutional clients, we've got our affiliates, partners programs, and we've got retail clients, and this is where AWS and DoiT come in handy, allowing us to offer our products digitally across the globe. And one of the key values for us here is that we can actually offer a product in regions where other people don't. So for example, we don't compete in North America, we don't compete in EME in Europe, but we just do it in AWS to solve our complex challenges in regions that naturally, depending on where they're based, have issues, and that's how we deliver our product. >> And which regions, Latin >> America, Latin, the entire African subcontinent, Middle East, Southeast Asia. The culture is just, the demographic is different. And what you used to have here is not exactly what you have over there. And obviously that brings a lot of challenges with onboarding and clients, deposits, trading activities, CDN latency, all of >> That stuff.
It's interesting how each region's different in their, their posture with the cloud. Some roll their own, some go out of the box. So again, this brings up this theme this year, guys, which is about end to end, seeing purpose-built, specialty solutions. A lot of solutions going end to end with data kind of makes it more complicated. So again, we've got more complexity coming, but the great thing about the cloud is you can abstract that away. So we are seeing this as a big opportunity for partners to innovate. You're seeing a lot of joint engineering, a lot more complexities coming still, but still end to end is the end game, so to speak. >> Absolutely, John. I mean, one, one of the sort of ways we describe what we try to do for our customers like INFINOX is to be your co-pilot in the cloud, which essentially means, you know, >> What an apt analogy. >> I think so, yeah, >> Well, well >> Done there. I think it works, Savannah. Yeah, so, so as I mentioned, the majority or almost all of our customers are pretty sophisticated, tech savvy companies. So they don't, you know, they know for most, for the most part what they're trying to achieve. They're approaching scale, they're at scale, or they're through that scale point, and they, they just wanna have somebody they can call, right? They need technology to help abstract away the complex problem. So they're not doing so much manual cloud operational work, or sometimes they just need help picking the next tech right to solve the end to end use case that that they're, that they're dealing with >> In business. And Danny, you're rolling out solutions so you're on, you're on the front lines, you gotta make it easier. You didn't want to get in the weeds on something that should be taken care of. >> Correct. I mean, one of the reasons we chose DoiT is, you need to, in order to involve DoiT, you need to know your problems, understand your challenges, and do like a self review. And you have to be already halfway through the cloud journey. You need to know your problems, what you want to achieve, where you want to end up, a roadmap for the next five years, what you want to achieve. Are we fixing or developing and building? And then involve those guys to come and help you, because they cannot just come with a magic wand and fix all your problems. You need to do that yourself. It's not like starting the journey by yourself. >> Yeah. One thing that's not played up in this event, I will say they may, I don't, they missed, maybe Werner will hit it tomorrow, but I think they kind of missed it a little bit. But developer productivity's been a big issue. We've seen that this year. One of the big themes on theCUBE is developer productivity, more velocity on the development side to keep pace with what solutions are rolling out to the customers. And the other one is the skills gap. And people have old skills, like we see VMware being bought by Broadcom for instance, got a lot of IT operators at VMware, they gotta go cloud somewhere. So you got new talent, existing talent, skill gaps, people are comfortable, yet the new stuff's there, developers gotta be more productive. How do you guys see that? Cuz that's gonna be how that plays now, it's gonna impact the channel, the partnership relationship, your ability to deliver. >> What's your reaction to that first? Well, I think we obviously have a tech savvy team. We've got developers, we've got dev, we've got infrastructure guys, but we've only got so much resource that we can afford.
And essentially by involving DoiT, I've doubled our staff. So we've got tech savvy senior solution architects who come in to do the sexy stuff, actually develop and design a new, better offering, a better product that makes us competitive. And this is where we involved them; essentially we use the DoiT staff as staff on demand, which is really an army of qualified people. We can actually cherry pick who we want for the call to do X, Y, and Z. And they're there to, to support you. We just have to ask for help. And this is how we fill our gap, whether from technical skills or budget constraints within, you know, within recruitment. >> And I think, I think what, what Danny is touching on, John, what you mentioned, is really the, the sort of the core founding principle of the company, right? It's hard enough for companies like INFINOX to hire staff that can help them build their business and deliver the value proposition that they're, that they see, right? And so our reason for existence is to sort of take care of the rest, right? We can help, you know, operate your cloud, show you the most effective way to do that. Whether they're FinOps problems, whether they're DevOps problems, whether they're DevSecOps problems, all of these sort of classic operational problems that get in the way of the core business mission. You're not in the business of running the cloud, you're in the business of delivering customer value. We can help you, you know, manage your cloud >> And it's your job to do it. >> It is to do it >> Can, couldn't raise this upon there. How long have y'all been working together? >> I would say 15 months. We took, we took a bit of a conservative approach. We hoped for the best, prepared for the worst. So I didn't trust DoiT right away. I gave them one account, start with DEF U A C, because you cannot, you just have to learn the journey yourself. So I think I would, my advice for clients is give it the six months. Once you establish a relationship, build a relationship, give them one by one, start slowly. You actually understand by yourself the skills, the capacity that they have. And also, for me, the consultants are really important. And after that it just opens up and we are now involving them. We've got a new project, we've got a problem statement, the first thing we do, we don't Google it, we just say, DoiT. Log a ticket, we got the team. You're >> A verb. >> Yeah. So >> In this case we have >> The puns are endless here on theCUBE in general. But with something like that, it's great. >> I gotta ask you a question cuz this is interesting, John. You know, we talked last year on theCUBE and, and again this is an example of how innovation's playing out. If you look at the announcements Adam Selipsky did, and then Swami, he had 13 or so announcements. I won't say it's getting boring, but when you hear boring, boring is good. When you start getting into these, these gaps in the platforms as it grows. I won't say they were boring cause that really wasn't boring. I like the data >> Itself. It's all fascinating, John, >> But it, but it's a lot of gap filling, you know, 50 connectors you got, you know, yeah. All glue layers being built in, AI's critical. The multicloud is there. What's the innovation? You got a lot of gaps being filled, boring is good. Like Kubernetes, we say boring means it's being invisible. That means it's going away. What's the exciting things from your perspective in cloud here? >> Well, I think, I mean, boring is an interesting word to use cuz a company with the heritage of AWS is constantly evolving.
I mean, at the core of that company's culture is innovation, technology, development and innovation. And they're building for builders as, as you know, just as well as I do. Yeah. And so, but what we find across our customer base is that companies that are scaling or at scale are using maybe a smaller set of those services, but they're really leveraging them in interesting ways. And there is a very long tail of deeper, more sophisticated fit for purpose, more specific services. And Adam announced, you know, who knows him another 20 or 30 services and it's happening year after year after year. And I think one of the things that, that Danny might attest to is, I, I spoke about the reason we exist and the reason we form the company is we hold it very, a very critical part of our mission is to stay abreast of all of those developments as they emerge so that Danny and and his crew don't have to, right? And so when they have a, a, a question about SageMaker or they have a question about sort of the new big data service that Adam has announced, we take it very seriously. Our job is to be able to answer that question quickly and >>Accurately. And I notice your shirt, if you could just give a little shirt there, ops, cloud ops, DevOps do it. The intersection of the finance, the tuning is now we're hearing a lot of price performance, cost recovery, not cost recovery, but cost management. Yeah. Optimizing. So we're seeing building scale, but now, now tuning almost a craft, the craft of the cloud is here. What's your reaction to that? It, >>It absolutely is. And this is a story as old as the cloud, honestly. And companies, you know, they'll, they'll, companies tend to follow the same sort of maturity journey when they first start, whether they're migrating to the cloud or they were born in the cloud as most of our customers are. There's a, there's a, there's an, there's an access to visibility and understanding and optimization to tuning a craft to use your term. And, and cost management truly is a 10 year old problem that is as prevalent and relevant today as it was, you know, 10 years ago. And there's a lot of talk about the economics associated with the cloud and it's not, certainly not always cheaper to run. In fact, it rarely is cheaper to run your business from any of the public cloud providers. The key is to do it and right size it and make sure it's operating in accordance and alignment with your business, right? It's okay for cloud process to go up so long as your top line is also >>Selling your proportion. You spend more cloud to save cloud. That's it's >>Penny wise, pound full. It's always a little bit, always a little bit of a, of a >>Dilemma on, on the cost saving. We didn't want to just save money. If you want to save money, just shut down your services, right? So it's about making money. So this is where do it comes, like we actually start making, okay, we spend a bit more now, but in about six months time I will be making more money. And we've just did that. We roll out the new application for all the new product offering host to AWS fully with the guys support, a lot of long, boring, boring, boring calls, but they're productive because we actually now have a better product, competitive, it's tailored for our clients, it's cost effective. And we are actually making money >>When something's invisible. It's working, you know, talking about it means it's, it's, it's operational. 
>>It's exactly, it's, >>Well to that point, John, one of the things we're most proud of in, you know, know this year was, was the launch of our product we called Flex Save, which essentially does exactly what you've described. It's, it's looking for automation and, and, and, and automatic ways of, yes. Saving money, but offering the opportunities to, to to improve the economics associated with your cloud infrastructure. >>Yeah. And improving the efficiency across the board. A hundred percent. It, it's, oh, it's awesome. Let's, and, and it's, it's my understanding there's some reporting and insights that you're able to then translate through from do it to your CTO and across the company. Denny, what's that like? What do you get to see working >>With them? Well, the problem is, like the CTO asked me to do all of that. It is funny he thinks that he's doing it, but essentially they have a excellent portal that basically looks up all of our instances on the one place. You got like good analytics on your cost, cost, anomalies, budget, costal location. But I didn't want to do that either. So what I have done is taken the next step. I actually sold this to the, to my company completely. So my finance teams goes there, they do it themselves, they log in, check, check, all the billing, the costal location. I actually has zero iteration with them if I don't hear anything from them, which is one of the benefits. But also there is lot of other products like the Flexe is virtually like you just click a finger and you start saving money just like that. Easy >>Is that easy button we've been talking about on >>The show? Yeah, exactly, exactly how it is. But there is obviously outside of the cost management, you actually can look at what is the resource you using do actually need it, how often you use it, think about the long term goal, what you're trying to achieve, and use the analytics to, and actually I have to say the analytics much better than AWS in, in, in, in cmp. It's, it's just more user friendly, more interactive as opposed to, you know, building the one in aws. >>It's good business model. Make things easy for your customers. Easy, simple >>To use. >>It's gotta be nice to hear John. >>Well, so first of all, thank you daddy. >>We, we work, but in all seriousness, you know, we, we work, Danny mentioned the trust word earlier. This is at the core of if we don't, if we're not able to build trust with our clients, our business is dead. It, it just doesn't exist. It can't scale. In fact, it'll go the opposite direction. And so we're, we work very, very hard to earn that trust and we're willing to start small to Danny's example, start small and grow. And that's why we're very, one of the things we're most proud of is, is how few customers tend to leave us year over year. We have customers that have been with us for 10 years. >>You know, Andy, Jesse always has, I just saw an interview, he was on the New York Times event in New York today as a CEO of Amazon. But he's always said in these build out phases, you gotta work backwards from the customer and innovate on behalf of the customer. Cause that's the answer that will always be a good answer for the outcome versus optimizing for just profit, you know what I'm saying? Or other things. So we're still in build out mode, >>You know, as a, as a, as a core fundamental sort of product concept. If you're not solving important problems for our customer, what are you, why, why are you investing? It just >>Doesn't make it. This is the beauty we do it. 
We actually, they wait for you to come to do the next step. They don't sell me anything. They don't bug me with emails. They're ready. When you're ready to make that journey, you just log a ticket and then come and help you. And this is the beauty. You just, it's just not your, your journey. >>I love it. That's a, that's a beautiful note to lead us to our new tradition on the cube. We have a little bit of a challenge for the both of you. We're looking for your 32nd Instagram real thought leadership sizzle anecdote. Either one of you wanna go first. John looks a little nauseous. Danny, you wanna give it a go? >>Well, we've got a few expressions, but we don't Google it. We just do it. And the key take, that's what we do now at, at, and also what we do is actually using their stuff as an influence employees richly. Like that's what we do. >>Well done, well done. Didn't even need the 30 seconds. Fantastic work, Danny. I love that. All right, John, now you do have to go. Okay, >>I'll goodness. You know, I'll, I'll, I'll, I'll I'll go back to what I mentioned earlier, if that's okay. I think we, you know, we exist as a company to sort of help our customers get back to focusing on why they started the business in the first place, which is innovating and delivering value to customers. And we'll help you take care of the rest. It's as simple as that. Awesome. >>Well done. You absolutely nailed it. I wanna just acknowledge your fan club over there watching. Hello everyone from the doit team. Good job team. I love, it's very cute when guests show up with an entourage to the cube. We like to see it. You obviously deserve the entourage. You're, you're both wonderful. Thanks again for being here on the show with Oh yeah, go ahead >>John. Well, I would just like to thank Danny for, for agreeing to >>Discern, thankfully >>Great to spend time with you. Absolutely. Let's do it. >>Thank you. Yeah, >>Yeah. Fantastic gentlemen. Well thank you all for tuning into this wonderful start to the afternoon here from AWS Reinvent. We are in Las Vegas, Nevada with John Furier. My name's Savannah Peterson, you're watching The Cube, the leader in high tech coverage.
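The self-service cost visibility Danny describes, with finance logging in to check billing, cost allocation, and anomalies on their own, has a rough analogue in the raw AWS Cost Explorer API. DoiT's own console is a separate product, so the snippet below is only an illustrative sketch of the pattern; the "team" cost-allocation tag is an assumption of mine, not something named in the interview.

```python
# Hypothetical sketch: month-to-date spend grouped by a cost-allocation tag,
# the kind of self-service report described above. Assumes AWS credentials are
# configured and that resources carry a "team" tag (an illustrative assumption).
import datetime

import boto3

ce = boto3.client("ce")  # AWS Cost Explorer API

today = datetime.date.today()
start = today.replace(day=1)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        tag_value = group["Keys"][0]                      # e.g. "team$trading-platform"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{tag_value}: ${float(amount):,.2f}")
```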

Published Date : Nov 30 2022


Karthik Narain and Tanuja Randery | AWS Executive Summit 2022


 

(relaxing intro music) >> Welcome back to theCUBE's Coverage here live at reinvent 2022. We're here at the Executive Summit upstairs with the Accenture Set three sets broadcasting live four days with theCUBE. I'm John Furrier your host, with two great guests, cube alumnis, back Tanuja Randery, managing director Amazon web service for Europe middle East and Africa, known as EMEA. Welcome back to the Cube. >> Thank you. >> Great to see you. And Karthik Narain, who's the Accenture first cloud lead. Great to see you back again. >> Thank you. >> Thanks for coming back on. All right, so business transformation is all about digital transformation taken to its conclusion. When companies transform, they are now a digital business. Technologies powering value proposition, data security all in the keynotes higher level service at industry specific solutions. The dynamics of the industry are changing radically in front of our eyes for for the better. Karthik, what's your position on this as Accenture looks at this, we've covered all your successes during the pandemic with AWS. What, what do you guys see out there now as this next layer of power dynamics in the industry take place? >> I think cloud is getting interesting and I think there's a general trend towards specialization that's happening in the world of cloud. And cloud is also moving from a general purpose technology backbone to providing specific industry capabilities for every customer within various industries. But the industry cloud is not a new term. It has been used in the past and it's been used in the past in various degrees, whether that's building horizontal solutions, certain specialized SaaS software or providing capabilities that are horizontal for certain industries. But we see the evolution of industry cloud a little differently and a lot more dynamic, which is we see this as a marketplace where ecosystem of capabilities are going to come together to interact with a common data platform data backbone, data model with workflows that'll come together and integrate all of this stuff and help clients reinvent their industry with newer capabilities, but at the same time use the power of democratized innovation that's already there within that industry. So that's the kind of change we are seeing where customers in their strategy are going to implement industry cloud as one of the tenants as they go through their strategy. >> Yeah, and I see in my notes, fit for purposes is a buzzword people are talking about right size in the cloud and then just building on that. And what's interesting, Tanuja I want to get your thoughts because in the US we're one country, so yeah, integrating is kind of within services. You have purview over countries and these regions it's global impact. This is now a global environment. So it's not just the US North America, it's Latin America it's EMEA, this is another variable in the cross connecting of these fit for purpose. What's your view of the these industry specific solutions? >> Yeah, no and thanks Karthik 'cause I'm a hundred percent aligned. You know, I mean, you know this better than me, John, but 90% of workloads have not yet moved to the cloud. And the only way that we think that's going to happen is by bringing together business and IT. So what does that mean? It means starting with business use cases whether that's digital banking or smart connected factories or frankly if it's predictive maintenance or connected beds. 
But how do we take those use cases leverage them to really drive outcomes with the technology behind them? I think that's the key unlock that we have to get to. And very specifically, and Adam talked about this a lot today, but data, data is the single unifier for all of business and IT coming together to drive value, right? However, the issue is there's a ton of it, (John Furrier chuckling) right? In fact, fun fact if you put all the data that's going to be created over the next five years, which is more than the last 30 years, on a one terabyte little floppy, disk drive, remember those? Well that's going to be 15 round trips to the moon (John Furrier chuckling) and back. That's how much data it is. So our perspective is you got to unify, single data lake, you got to modernize with AI and ML, and then you're going to have to drive innovation on that. Now, I'll give you one tiny example if I may which I love Ryanair, big airline, 150 million passengers. They are also the largest supplier of ham and cheese sandwiches in the air. And catering at that scale is really difficult, right? If you have too much food wastage, sustainability issues, too little customers are really unhappy. So we work with them leveraging AWS cloud and AI ML to build a panini predictor. And in essence, it's taking the data they've got, data we've got, and actually giving them the opportunity to have just the right number of paninis. >> I love the lock and and the key is data to unlock the value. We heard that in the keynote. Karthik, you guys have been working together with AWS and a lot of successes. We've covered some of those on the cube. As you look at these industry solutions they're not the obvious big problems. They're like businesses, you know it could be the pizza shop it could be the dentist office, it could be any business any industry specific carries over. What is the key to unlock it? Is it the data? Is it the solution? What's that key? >> I think, you know the easier answer is all of the about, but like Tanuja said it all starts by bringing the data together and this is a funny thing. It's not creating new data. This data is there within enterprises. Our clients have these data the industries have the data, but for ages these data has been trapped in functional silos and organizations have been doing analytics within those functions. It's about bringing the data together whether that's a single data warehouse or a data mesh. Those are architectural considerations. But it's about bringing cross-functional data together as step one. Step two, is about utilizing the power of cloud for democratized innovation. It's no longer about one company trying to reinvent the wheel, or create a a new wheel within their enterprise. It's about looking around through the power of cloud marketplace to see if there's a solution that is already existing can we use that? Or if I've created something within my company can I use that as a service for others to use? So, the number one thing is using the power of democratized innovation. Second thing is how do you standardize and digitize functions that does not need to be reinvented every single time so that, you know, your organization can do it or you could use that or take that from elsewhere. And the third element is using the power of the platform economy or platforms to find new avenues of revenue opportunity, customer engagement and experiences. 
So these are all the things that differentiates organization, but all of this is underpinned by a unified data model that helps, you know, use all the (indistinct) there. >> Tanuja, you have mentioned earlier that not everyone has their journey of the cloud looks the same and certainly in the US and EMEA you have different countries and different areas. >> Yep. >> Their journeys are different. Some want speed and fees, some will roll their own. I mean data brick CEO, when I interviewed them that last week, they started database on a credit card swiped it and they didn't want any support. Amazon's knocking on their door saying, "you want support?" "No, we got it covered." Obviously they're from Berkeley and they're nerds, and they're cool. They can roll their own, but not everyone can. >> Yeah. >> And so you have a mix of customer profiles. How do you view that and what's your strategy? How do you get them over productive seeing that business value? What's that transformation look like? >> Yeah, John, you're absolutely right. So you've got those who are born in cloud, they're very savvy, they know exactly what they need. However, what I do find increasingly, even with these digital native customers, is they're also starting to talk business use cases. So they're talking about, "okay how do I take my platform and build a whole bunch of new services on top of that platform?" So, we still have to work with them on this business use case dimension for the next curve of growth that they want to drive. Currently with the global macroeconomic factors obviously they're also very concerned about profitability and costs. So that's one model. In the enterprise space, you have differences. >> Yeah. >> Right, You have the sort of very, very, very savvy enterprises, right? Who know exactly what they're looking for. But for them then it's about how do I lean into sustainability? In fact, we did a survey, and 77% of users that we surveyed said that they could accelerate their sustainably goals by using cloud. So in many cases they haven't cracked that and we can help them do that. So it's really about horses for courses there. And then, then with some other companies, they've done a lot of the basic infrastructure modernization. However, what they haven't been able to yet do is figure out how they're going to actually become a tech company. So I keep getting asked, can I become a tech company? How do I do that? Right? And then finally there are companies which don't have the skills. So if I go to the SMB segment, they don't always have the skills or the resources. And there using scalable market platforms like AWS marketplace, >> Yeah. >> Allows them to get access to solutions without having to have all the capabilities. So it really is- >> This is where partner network really kind of comes in. >> Absolutely. >> Huge value. Having that channel of solution providers I use that term specifically 'cause you're providing the solution for those folks. >> Yeah. Exact- >> And then the folks at the enterprise, we had a quote on the analyst segment earlier on our Cube, "spend more, save more." >> Yeah. >> That's the cloud equations, >> Yeah. because you're going to get it on sustainability you're going to save it on, you're going to save on cost recovery for revenue, time to revenue. So the cloud is the answer for a lot of enterprises out of the recession. >> Absolutely, and in fact, we need to lean in now you heard Adam say this, right? 
I mean the cost savings potential alone from on-prem to cloud is between 40 and 60 percent. Just that. But I don't think that's it John. >> The bell tightening he said is reigning some right size. Okay, but then also do more, he didn't say that, but analysts are generally saying, if you spend right on the cloud, you'll save more. That's a general thesis. >> Yeah. >> Do you agree with that? >> I absolutely think so. And by the way, usage is, people use it differently as they get smarter. We're constantly working with our customers by the way though, to continuously cost optimize. So you heard about our Graviton3 instances for example. We're using that to constantly optimize, but at the same time, what are the workloads that you haven't yet brought over to the cloud? (John Furrier chuckling) And so supply chain is a great idea. Our health cloud initiative. So we worked with Accenture on the Accenture Health Insights platform, which runs on AWS as an example or the Goldman Sachs one last year, if you remember. >> I do >> The financial cloud. So those, those are some of the things that I think make it easier for people to consume cloud and reimagine their businesses. >> It's funny, I was talking with Adam and we had a little debate about what an ISV is and I talked to the CEO of Mongo. They don't see themselves on the ISV. As they grew up on the cloud, they become platforms, they have their own ISVs and data bricks and Snowflake and others are developing that dynamic. But there's still ISVs out there. So there's a dynamic of growth going on and the need for partners and our belief is that the ecosystem is going to start doubling in size we believe, because of the demand for purpose built or so out of the box. I hate to use that word "out of the box", but you know turnkey solutions that you can buy another one if it breaks. But use the building blocks if you want to build the foundation. That is more durable, more customizable. Do that if you can. >> Well, >> but- >> we've got a phenomenal, >> shall we talk about this? >> Yeah, go get into- >> So, we've built a five year vision together, Accenture and us. which is called Velocity and you'll be much better in describing it, but I'll give you the simple version of Velocity which is taking AWS powered industry solutions and bringing it to market faster, more repeatable and at lower cost. And so think about vertical solutions sitting on a horizontal accelerator platform able to be deployed making transformation less complex. >> Yeah. >> Karthik, weight in on this, because I've talked to you about this before. We've said years ago the horizontal scalability of the cloud's a beautiful thing but verticals where the ML works great too. Now you got ML in all aspects of it. Horizontal verticals here now. >> Yeah, Yeah, absolutely. Again, the power of this kind of platform that we are launching, by the way we're launching tomorrow we are very excited about it, is, create a platform- >> What are you launching tomorrow? Hold on, I got news out there. What's launching? >> We are going to launch a giant platform, which will help clients accelerate their journey to industry cloud. So that's going to happen tomorrow. So what this platform would provide is that this is going to provide the horizontal capabilities that will help clients bootstrap their launch into cloud. And once they get into cloud, they would be able to build industry solutions on this. 
The way I imagine this is create the chassis that you need for your industry and then add the cartridges, industry cartridges, which are going to be solutions that are going to be built on top of it. And we are going to do this across various industries starting from, you know, healthcare, life sciences to energy to, you know, public services and so on and so forth >> You're going to create a channel machine. A channel creation machine, you're going to allow people to build their own solutions on top of that platform. And that's launching tomorrow. Make sure we get the news on that. >> Exactly. And- >> Ah, No, >> Sorry, and we genuinely believe the power of industry cloud, if you think about it in the past to create a solution one had to be an ISV to create a solution. What cloud is providing for industry today in the concept of industry clouds, this, industry companies are creating industry solution. The best example is, along with, you know, AWS and Accenture, Ecopetrol, which is a leader in the energy industry, has created a platform, you know called Water Intelligence and Management platform. And through this platform, they are attacking the audacious goal of water sustainability, which is going to be a huge problem for humanity that everybody needs to solve. As part of this platform, the goal is to reduce, you know, fresh water usage by 66% or zero, you know, you know, impact to, you know, groundwater is going to be the goal or ambition of Ecopetrol. So all of this is possible because industry players want to jump to the bandwagon because they have all the toolkit of of the cloud that's available with which they could build a software platform with which they can power their entire industry. >> And make money and have a good business. You guys are doing great. Final word, partnership. Where's it go next? You're doing great. Put a plugin for the Accenture AWS partnership. >> Well, I mean we have a phenomenal relationship and partnership, which is amazing. We really believe in the power of three which is the GSI, the ISV, and us together. And I have to go back to the thing I keep focused on 90% of workloads not in cloud. I think together we can enable those companies to come into the cloud. Very importantly, start to innovate launch new products and refuel the economy. So I think- >> We'll have to check on that >> Very, very optimistic. >> We'll have to check on that number. >> That seems a little- >> You got to check on that number. >> 90 seems a little bit amazing. >> 90% of workloads. >> That sounds, maybe, I'd be surprised. Maybe a little bit lower than that. Maybe. We'll see. >> We got to start turning it. >> It's still a lot. >> (laughs) It's still a lot. >> A lot more. Still first, still early days. Thanks so much for the conversation Karthik great to see you again Tanuja, thanks for your time. >> Thank you, John. >> Congratulations, on your success. Okay, this is theCube up here in the executive summit. You're watching theCube, the leader in high tech coverage, we'll be right back with more coverage here, and the Accenture set after the short break. (calm outro music)

Published Date : Nov 30 2022


Ian Colle, AWS | SuperComputing 22


 

(lively music) >> Good morning. Welcome back to theCUBE's coverage at Supercomputing Conference 2022, live here in Dallas. I'm Dave Nicholson with my co-host Paul Gillin. So far so good, Paul? It's been a fascinating morning Three days in, and a fascinating guest, Ian from AWS. Welcome. >> Thanks, Dave. >> What are we going to talk about? Batch computing, HPC. >> We've got a lot, let's get started. Let's dive right in. >> Yeah, we've got a lot to talk about. I mean, first thing is we recently announced our batch support for EKS. EKS is our Kubernetes, managed Kubernetes offering at AWS. And so batch computing is still a large portion of HPC workloads. While the interactive component is growing, the vast majority of systems are just kind of fire and forget, and we want to run thousands and thousands of nodes in parallel. We want to scale out those workloads. And what's unique about our AWS batch offering, is that we can dynamically scale, based upon the queue depth. And so customers can go from seemingly nothing up to thousands of nodes, and while they're executing their work they're only paying for the instances while they're working. And then as the queue depth starts to drop and the number of jobs waiting in the queue starts to drop, then we start to dynamically scale down those resources. And so it's extremely powerful. We see lots of distributed machine learning, autonomous vehicle simulation, and traditional HPC workloads taking advantage of AWS Batch. >> So when you have a Kubernetes cluster does it have to be located in the same region as the HPC cluster that's going to be doing the batch processing, or does the nature of batch processing mean, in theory, you can move something from here to somewhere relatively far away to do the batch processing? How does that work? 'Cause look, we're walking around here and people are talking about lengths of cables in order to improve performance. So what does that look like when you peel back the cover and you look at it physically, not just logically, AWS is everywhere, but physically, what does that look like? >> Oh, physically, for us, it depends on what the customer's looking for. We have workflows that are all entirely within a single region. And so where they could have a portion of say the traditional HPC workflow, is within that region as well as the batch, and they're saving off the results, say to a shared storage file system like our Amazon FSx for Lustre, or maybe aging that back to an S3 object storage for a little lower cost storage solution. Or you can have customers that have a kind of a multi-region orchestration layer to where they say, "You know what? "I've got a portion of my workflow that occurs "over on the other side of the country "and I replicate my data between the East Coast "and the West Coast just based upon business needs. "And I want to have that available to customers over there. "And so I'll do a portion of it in the East Coast "a portion of it in the West Coast." Or you can think of that even globally. It really depends upon the customer's architecture. >> So is the intersection of Kubernetes with HPC, is this relatively new? I know you're saying you're, you're announcing it. >> It really is. I think we've seen a growing perspective. I mean, Kubernetes has been a long time kind of eating everything, right, in the enterprise space? 
And now a lot of CIOs in the industrial space are saying, "Why am I using one orchestration layer "to manage my HPC infrastructure and another one "to manage my enterprise infrastructure?" And so there's a growing appreciation that, you know what, why don't we just consolidate on one? And so that's where we've seen a growth of Kubernetes infrastructure and our own managed Kubernetes EKS on AWS. >> Last month you announced a general availability of Trainium, of a chip that's optimized for AI training. Talk about what's special about that chip or what is is customized to the training workloads. >> Yeah, what's unique about the Trainium, is you'll you'll see 40% price performance over any other GPU available in the AWS cloud. And so we've really geared it to be that most price performance of options for our customers. And that's what we like about the silicon team, that we're part of that Annaperna acquisition, is because it really has enabled us to have this differentiation and to not just be innovating at the software level but the entire stack. That Annaperna Labs team develops our network cards, they develop our ARM cards, they developed this Trainium chip. And so that silicon innovation has become a core part of our differentiator from other vendors. And what Trainium allows you to do is perform similar workloads, just at a lower price performance. >> And you also have a chip several years older, called Inferentia- >> Um-hmm. >> Which is for inferencing. What is the difference between, I mean, when would a customer use one versus the other? How would you move the workload? >> What we've seen is customers traditionally have looked for a certain class of machine, more of a compute type that is not as accelerated or as heavy as you would need for Trainium for their inference portion of their workload. So when they do that training they want the really beefy machines that can grind through a lot of data. But when you're doing the inference, it's a little lighter weight. And so it's a different class of machine. And so that's why we've got those two different product lines with the Inferentia being there to support those inference portions of their workflow and the Trainium to be that kind of heavy duty training work. >> And then you advise them on how to migrate their workloads from one to the other? And once the model is trained would they switch to an Inferentia-based instance? >> Definitely, definitely. We help them work through what does that design of that workflow look like? And some customers are very comfortable doing self-service and just kind of building it on their own. Other customers look for a more professional services engagement to say like, "Hey, can you come in and help me work "through how I might modify my workflow to "take full advantage of these resources?" >> The HPC world has been somewhat slower than commercial computing to migrate to the cloud because- >> You're very polite. (panelists all laughing) >> Latency issues, they want to control the workload, they want to, I mean there are even issues with moving large amounts of data back and forth. What do you say to them? I mean what's the argument for ditching the on-prem supercomputer and going all-in on AWS? >> Well, I mean, to be fair, I started at AWS five years ago. And I can tell you when I showed up at Supercomputing, even though I'd been part of this community for many years, they said, "What is AWS doing at Supercomputing?" I know you care, wait, it's Amazon Web Services. 
You care about the web, can you actually handle supercomputing workloads? Now the thing that very few people appreciated is that yes, we could. Even at that time in 2017, we had customers that were performing HPC workloads. Now that being said, there were some real limitations on what we could perform. And over those past five years, as we've grown as a company, we've started to really eliminate those frictions for customers to migrate their HPC workloads to the AWS cloud. When I started in 2017, we didn't have our elastic fabric adapter, our low-latency interconnect. So customers were stuck with standard TCP/IP. So for their highly demanding open MPI workloads, we just didn't have the latencies to support them. So the jobs didn't run as efficiently as they could. We didn't have Amazon FSx for Lustre, our managed lustre offering for high performant, POSIX-compliant file system, which is kind of the key to a large portion of HPC workloads is you have to have a high-performance file system. We didn't even, I mean, we had about 25 gigs of networking when I started. Now you look at, with our accelerated instances, we've got 400 gigs of networking. So we've really continued to grow across that spectrum and to eliminate a lot of those really, frictions to adoption. I mean, one of the key ones, we had a open source toolkit that was jointly developed by Intel and AWS called CFN Cluster that customers were using to even instantiate their clusters. So, and now we've migrated that all the way to a fully functional supported service at AWS called AWS Parallel Cluster. And so you've seen over those past five years we have had to develop, we've had to grow, we've had to earn the trust of these customers and say come run your workloads on us and we will demonstrate that we can meet your demanding requirements. And at the same time, there's been, I'd say, more of a cultural acceptance. People have gone away from the, again, five years ago, to what are you doing walking around the show, to say, "Okay, I'm not sure I get it. "I need to look at it. "I, okay, I, now, oh, it needs to be a part "of my architecture but the standard questions, "is it secure? "Is it price performant? "How does it compare to my on-prem?" And really culturally, a lot of it is, just getting IT administrators used to, we're not eliminating a whole field, right? We're just upskilling the people that used to rack and stack actual hardware, to now you're learning AWS services and how to operate within that environment. And it's still key to have those people that are really supporting these infrastructures. And so I'd say it's a little bit of a combination of cultural shift over the past five years, to see that cloud is a super important part of HPC workloads, and part of it's been us meeting the the market segment of where we needed to with innovating both at the hardware level and at the software level, which we're going to continue to do. >> You do have an on-prem story though. I mean, you have outposts. We don't hear a lot of talk about outposts lately, but these innovations, like Inferentia, like Trainium, like the networking innovation you're talking about, are these going to make their way into outposts as well? Will that essentially become this supercomputing solution for customers who want to stay on-prem? >> Well, we'll see what the future lies, but we believe that we've got the, as you noted, we've got the hardware, we've got the network, we've got the storage. 
All those put together gives you a a high-performance computer, right? And whether you want it to be redundant in your local data center or you want it to be accessible via APIs from the AWS cloud, we want to provide that service to you. >> So to be clear, that's not that's not available now, but that is something that could be made available? >> Outposts are available right now, that have this the services that you need. >> All these capabilities? >> Often a move to cloud, an impetus behind it comes from the highest levels in an organization. They're looking at the difference between OpEx versus CapEx. CapEx for a large HPC environment, can be very, very, very high. Are these HPC clusters consumed as an operational expense? Are you essentially renting time, and then a fundamental question, are these multi-tenant environments? Or when you're referring to batches being run in HPC, are these dedicated HPC environments for customers who are running batches against them? When you think about batches, you think of, there are times when batches are being run and there are times when they're not being run. So that would sort of conjure, in the imagination, multi-tenancy, what does that look like? >> Definitely, and that's been, let me start with your second part first is- >> Yeah. That's been a a core area within AWS is we do not see as, okay we're going to, we're going to carve out this super computer and then we're going to allocate that to you. We are going to dynamically allocate multi-tenant resources to you to perform the workloads you need. And especially with the batch environment, we're going to spin up containers on those, and then as the workloads complete we're going to turn those resources over to where they can be utilized by other customers. And so that's where the batch computing component really is powerful, because as you say, you're releasing resources from workloads that you're done with. I can use those for another portion of the workflow for other work. >> Okay, so it makes a huge difference, yeah. >> You mentioned, that five years ago, people couldn't quite believe that AWS was at this conference. Now you've got a booth right out in the center of the action. What kind of questions are you getting? What are people telling you? >> Well, I love being on the show floor. This is like my favorite part is talking to customers and hearing one, what do they love, what do they want more of? Two, what do they wish we were doing that we're not currently doing? And three, what are the friction points that are still exist that, like, how can I make their lives easier? And what we're hearing is, "Can you help me migrate my workloads to the cloud? "Can you give me the information that I need, "both from a price for performance, "for an operational support model, "and really help me be an internal advocate "within my environment to explain "how my resources can be operated proficiently "within the AWS cloud." And a lot of times it's, let's just take your application a subset of your applications and let's benchmark 'em. And really that, AWS, one of the key things is we are a data-driven environment. And so when you take that data and you can help a customer say like, "Let's just not look at hypothetical, "at synthetic benchmarks, let's take "actually the LS-DYNA code that you're running, perhaps. "Let's take the OpenFOAM code that you're running, "that you're running currently "in your on-premises workloads, "and let's run it on AWS cloud "and let's see how it performs." 
And then we can take that back to your to the decision makers and say, okay, here's the price for performance on AWS, here's what we're currently doing on-premises, how do we think about that? And then that also ties into your earlier question about CapEx versus OpEx. We have models where actual, you can capitalize a longer-term purchase at AWS. So it doesn't have to be, I mean, depending upon the accounting models you want to use, we do have a majority of customers that will stay with that OpEx model, and they like that flexibility of saying, "Okay, spend as you go." We need to have true ups, and make sure that they have insight into what they're doing. I think one of the boogeyman is that, oh, I'm going to spend all my money and I'm not going to know what's available. And so we want to provide the, the cost visibility, the cost controls, to where you feel like, as an HPC administrator you have insight into what your customers are doing and that you have control over that. And so once you kind of take away some of those fears and and give them the information that they need, what you start to see too is, you know what, we really didn't have a lot of those cost visibility and controls with our on-premises hardware. And we've had some customers tell us we had one portion of the workload where this work center was spending thousands of dollars a day. And we went back to them and said, "Hey, we started to show this, "what you were spending on-premises." They went, "Oh, I didn't realize that." And so I think that's part of a cultural thing that, at an HPC, the question was, well on-premises is free. How do you compete with free? And so we need to really change that culturally, to where people see there is no free lunch. You're paying for the resources whether it's on-premises or in the cloud. >> Data scientists don't worry about budgets. >> Wait, on-premises is free? Paul mentioned something that reminded me, you said you were here in 2017, people said AWS, web, what are you even doing here? Now in 2022, you're talking in terms of migrating to cloud. Paul mentioned outposts, let's say that a customer says, "Hey, I'd like you to put "in a thousand-node cluster in this data center "that I happen to own, but from my perspective, "I want to interact with it just like it's "in your data center." In other words, the location doesn't matter. My experience is identical to interacting with AWS in an AWS data center, in a CoLo that works with AWS, but instead it's my physical data center. When we're tracking the percentage of IT that's that is on-prem versus off-prem. What is that? Is that, what I just described, is that cloud? And in five years are you no longer going to be talking about migrating to cloud because people go, "What do you mean migrating to cloud? "What do you even talking about? "What difference does it make?" It's either something that AWS is offering or it's something that someone else is offering. Do you think we'll be at that point in five years, where in this world of virtualization and abstraction, you talked about Kubernetes, we should be there already, thinking in terms of it doesn't matter as long as it meets latency and sovereignty requirements. So that, your prediction, we're all about insights and supercomputing- >> My prediction- >> In five years, will you still be talking about migrating to cloud or will that be something from the past? >> In five years, I still think there will be a component. 
I think the majority of the assumption will be that things are cloud-native and you start in the cloud and that there are perhaps, an aspect of that, that will be interacting with some sort of an edge device or some sort of an on-premises device. And we hear more and more customers that are saying, "Okay, I can see the future, "I can see that I'm shrinking my footprint." And, you can see them still saying, "I'm not sure how small that beachhead will be, "but right now I want to at least say "that I'm going to operate in that hybrid environment." And so I'd say, again, the pace of this community, I'd say five years we're still going to be talking about migrations, but I'd say the vast majority will be a cloud-native, cloud-first environment. And how do you classify that? That outpost sitting in someone's data center? I'd say we'd still, at least I'll leave that up to the analysts, but I think it would probably come down as cloud spend. >> Great place to end. Ian, you and I now officially have a bet. In five years we're going to come back. My contention is, no we're not going to be talking about it anymore. >> Okay. >> And kids in college are going to be like, "What do you mean cloud, it's all IT, it's all IT." And they won't remember this whole phase of moving to cloud and back and forth. With that, join us in five years to see the result of this mega-bet between Ian and Dave. I'm Dave Nicholson with theCUBE, here at Supercomputing Conference 2022, day three of our coverage with my co-host Paul Gillin. Thanks again for joining us. Stay tuned, after this short break, we'll be back with more action. (lively music)
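The dynamic scaling Ian describes at the top of this conversation, a managed AWS Batch environment that grows from zero as jobs land in the queue and shrinks back when the queue drains, can be sketched roughly with the boto3 SDK. This is a minimal illustration, not Ian's actual setup: every name, ARN, subnet, and instance choice below is a placeholder, and the Batch-on-EKS support he mentions layers an EKS cluster reference onto the same compute-environment pattern.

```python
# Hypothetical sketch of the AWS Batch pattern described above: a managed compute
# environment that scales from 0 vCPUs with queue depth, a queue attached to it,
# and a large array job submitted against it. All names and ARNs are placeholders.
import boto3

batch = boto3.client("batch")

# Managed compute environment: Batch launches instances only while jobs are queued,
# because minvCpus=0 lets it scale all the way back down when the queue drains.
batch.create_compute_environment(
    computeEnvironmentName="hpc-batch-ce",
    type="MANAGED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 4096,              # ceiling for "thousands of nodes"
        "instanceTypes": ["c6i", "c7g"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::111122223333:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::111122223333:role/AWSBatchServiceRole",
)

# Job queue whose depth drives the scaling decisions.
# (In practice, wait for the compute environment to reach VALID before attaching it.)
batch.create_job_queue(
    jobQueueName="hpc-queue",
    priority=1,
    computeEnvironmentOrder=[{"order": 1, "computeEnvironment": "hpc-batch-ce"}],
)

# Fire-and-forget: an array job fans out into thousands of parallel tasks.
batch.submit_job(
    jobName="monte-carlo-sweep",
    jobQueue="hpc-queue",
    jobDefinition="hpc-job-def",       # assumed to be registered separately
    arrayProperties={"size": 5000},
)
```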

Published Date : Nov 17 2022


Anais Dotis Georgiou, InfluxData | Evolving InfluxDB into the Smart Data Platform


 

>>Okay, we're back. I'm Dave Vellante with theCUBE and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis Georgiou is here. She's a developer advocate for InfluxData, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, it's gonna store files in object storage. So you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to have bulk data export and import, super useful. Also, broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. Adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. So while Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++.
>>Yeah, and the more I learned about the new engine and the platform, IOx, et cetera, you know, you see things like, even today you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain: what is Arrow, and what does it bring to InfluxDB? 
>>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to kind of illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, a timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. 
>>So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will neighbor each other, and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find, say, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage. 
>>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come from.
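To make the row-versus-column contrast concrete, here is a small, self-contained Rust sketch. It is an illustration with invented names, not IOx code, showing the same temperature readings stored both ways, plus a naive run-length encoding to show why repeated neighboring values compress so cheaply:

```rust
// Row-oriented: one struct per reading (array of structs).
struct Reading {
    sensor: &'static str,
    temp_c: f64,
    ts: i64,
}

// Column-oriented: one vector per field (struct of arrays),
// which is the shape Apache Arrow keeps data in memory.
struct ReadingColumns {
    sensor: Vec<&'static str>,
    temp_c: Vec<f64>,
    ts: Vec<i64>,
}

// Naive run-length encoding: (value, repeat count) pairs.
fn run_length_encode(col: &[f64]) -> Vec<(f64, usize)> {
    let mut runs: Vec<(f64, usize)> = Vec::new();
    for &v in col {
        match runs.last_mut() {
            Some((last, n)) if *last == v => *n += 1,
            _ => runs.push((v, 1)),
        }
    }
    runs
}

fn main() {
    let rows = vec![
        Reading { sensor: "room", temp_c: 21.5, ts: 1 },
        Reading { sensor: "room", temp_c: 21.5, ts: 2 },
        Reading { sensor: "room", temp_c: 21.5, ts: 3 },
        Reading { sensor: "stove", temp_c: 180.0, ts: 3 },
    ];

    // Pivot the rows into columns.
    let cols = ReadingColumns {
        sensor: rows.iter().map(|r| r.sensor).collect(),
        temp_c: rows.iter().map(|r| r.temp_c).collect(),
        ts: rows.iter().map(|r| r.ts).collect(),
    };

    // A min/max scan only has to touch the one column it cares about...
    let max = cols.temp_c.iter().cloned().fold(f64::MIN, f64::max);

    // ...and the regulated room temperature collapses into a single run.
    let runs = run_length_encode(&cols.temp_c);
    println!("max = {max}, encoded runs = {runs:?}");
}
```

Arrow's real arrays add validity bitmaps, dictionary encoding, and more on top of this idea, but the struct-of-arrays shape is the core of it.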
>>Okay. So you've basically described a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle a columnar format, versus what you're talking about, which is really kind of native. Is the format not as effective because it's largely a bolt-on? Can you elucidate on that front? 
>>Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. 
>>Yeah. Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? 
>>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query process and transformation of that data. It also has a pandas API, so that you could take advantage of pandas data frames as well and all of the machine learning tools associated with pandas. 
>>Okay. You're also leveraging Parquet in the platform, of course. We heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? 
>>Sure. So Parquet is the column-oriented durable file format. It's important because it'll enable bulk import and bulk export. It has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefits of Parquet. 
>>Got it. Very popular. So what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? 
>>Sure. So InfluxDB first has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community.
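To ground the DataFusion and Parquet discussion, here is a short, hedged Rust sketch of querying a Parquet file through DataFusion's SQL interface. It is an illustration written against the open source datafusion crate's recent API (SessionContext, register_parquet), not against IOx internals; the file name and column names are invented, and it assumes the datafusion and tokio crates as dependencies:

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // DataFusion plans and executes queries over Arrow record batches.
    let ctx = SessionContext::new();

    // Register a (hypothetical) Parquet file of temperature readings as a table.
    ctx.register_parquet("temps", "temps.parquet", ParquetReadOptions::default())
        .await?;

    // Run an analytics-style query; only the referenced columns are scanned.
    let df = ctx
        .sql("SELECT sensor, MIN(temp_c) AS lo, MAX(temp_c) AS hi FROM temps GROUP BY sensor")
        .await?;

    // Results come back as Arrow record batches; print them as a table.
    df.show().await?;
    Ok(())
}
```

The same SessionContext also exposes a DataFrame API, which is presumably the kind of interface the pandas-style bindings she mentions build on.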
>>Yeah, got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. 
>>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard work, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours. They are on every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the InfluxDB IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. 
>>Yeah, that's awesome. You guys have a really rich community where you can collaborate with your peers and solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. 
>>Thank you. I really appreciate it. 
>>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yokum. He's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.

Published Date : Nov 8 2022

SUMMARY :

Dave Valante talks with Influx Data developer advocate Anais Dotis Georgiou about the open source technologies behind InfluxDB IOx: why the engine is written in Rust for performance and memory safety, how Apache Arrow's columnar in-memory format enables cheap compression and fast scans for high-cardinality workloads, and how DataFusion and Parquet round out query execution and durable storage. She also covers Influx Data's upstream contributions to the Apache ecosystem and how to join the community tech talks and office hours.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Tim YokumPERSON

0.99+

Jeff FrickPERSON

0.99+

BrianPERSON

0.99+

AnnaPERSON

0.99+

James BellengerPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Dave ValantePERSON

0.99+

JamesPERSON

0.99+

AmazonORGANIZATION

0.99+

three monthsQUANTITY

0.99+

16 timesQUANTITY

0.99+

GoogleORGANIZATION

0.99+

PythonTITLE

0.99+

mobile.twitter.comOTHER

0.99+

Influx DataORGANIZATION

0.99+

iOSTITLE

0.99+

TwitterORGANIZATION

0.99+

30,000 feetQUANTITY

0.99+

Russ FoundationORGANIZATION

0.99+

ScalaTITLE

0.99+

Twitter LiteTITLE

0.99+

two rowsQUANTITY

0.99+

200 megabyteQUANTITY

0.99+

NodeTITLE

0.99+

Three months agoDATE

0.99+

one applicationQUANTITY

0.99+

both placesQUANTITY

0.99+

each rowQUANTITY

0.99+

Par KTITLE

0.99+

Anais Dotis GeorgiouPERSON

0.99+

one languageQUANTITY

0.98+

first oneQUANTITY

0.98+

15 engineersQUANTITY

0.98+

Anna East Otis GeorgioPERSON

0.98+

bothQUANTITY

0.98+

one secondQUANTITY

0.98+

25 engineersQUANTITY

0.98+

About 800 peopleQUANTITY

0.98+

sqlTITLE

0.98+

Node Summit 2017EVENT

0.98+

two temperature valuesQUANTITY

0.98+

one timesQUANTITY

0.98+

c plus plusTITLE

0.97+

RustTITLE

0.96+

SQLTITLE

0.96+

todayDATE

0.96+

InfluxORGANIZATION

0.95+

under 600 kilobytesQUANTITY

0.95+

firstQUANTITY

0.95+

c plus plusTITLE

0.95+

ApacheORGANIZATION

0.95+

par KTITLE

0.94+

ReactTITLE

0.94+

RussORGANIZATION

0.94+

About three months agoDATE

0.93+

8:30 AM Pacific timeDATE

0.93+

twitter.comOTHER

0.93+

last decadeDATE

0.93+

NodeORGANIZATION

0.92+

HadoopTITLE

0.9+

InfluxDataORGANIZATION

0.89+

c c plus plusTITLE

0.89+

CubeORGANIZATION

0.89+

each columnQUANTITY

0.88+

InfluxDBTITLE

0.86+

Influx DBTITLE

0.86+

MozillaORGANIZATION

0.86+

DB IOxTITLE

0.85+

Brian Gilmore, Influx Data | Evolving InfluxDB into the Smart Data Platform


 

>>This past May, theCUBE, in collaboration with Influx Data, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database, for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory those time slices could be taken, you know, every hour, every minute, every second, down to the millisecond, and how the world was moving toward realtime or near-realtime data analysis to support physical infrastructure like sensors and other devices and IoT equipment. Time series databases have had to evolve to efficiently support realtime data in emerging use cases in IoT and beyond. 
>>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and produced by theCUBE. My name is Dave Valante and I'll be your host today. Now, in this program, we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands and data, specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and specific tools. And in this program, you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which is powering a new engine for InfluxDB. 
>>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data, compressing the historical time slices, if you will, from, for example, minutes down to milliseconds, and at the same time enabling real-time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anais Dotis Georgiou, who is a developer advocate at Influx Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at Influx Data. Brian, welcome to the program. Thanks for coming on. 
>>Thanks Dave. Great to be here. I appreciate the time. 
>>Hey, explain why InfluxDB, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? 
>>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market.
I think, you know, if we think about what our customers are coming to us with now, related to requests like SQL query support, things like that, we have to figure out a way to execute those for them in a way that will scale long term. And then we also wanna make sure we're innovating, we're staying ahead of the market as well and anticipating those future needs. So, you know, this is really a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that leverage this new engine, but initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the workload scale, the massive data volumes and things like that. 
>>Yeah, and we're gonna get into that today, and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? 
>>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and sort of shifting that technology, especially the open source code base, to a service basis where we were hosting it through multiple cloud providers, that was a long journey, I guess. You know, phase one was, we wanted to host enterprise for our customers, so we created a service where we just managed and ran our enterprise product for them. Phase two of this cloud effort was to optimize for multi-tenant, multi-cloud, to be able to host it in a truly SaaS manner where we could use, you know, some type of customer activity or consumption as the pricing vector. And that was sort of the birth of the real first InfluxDB Cloud, you know, which has been really successful. 
>>We've seen, I think, like 60,000 people sign up, and we've got tons and tons of both enterprises as well as new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using it on a daily basis. And having that sort of big pool of very diverse customers to chat with as they're using the product, as they're giving us feedback, et cetera, has pointed us in a really good direction in terms of making sure we're continuously improving that, and then also making these big leaps as we're doing with this new engine. 
>>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is, and what does it take to make that shift from, you know, time series specialist to real time analytics and being able to support both? 
>>Yeah, I mean, it's much more of an evolution, I think, than a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead.
I mean, I think when it comes to metrics, especially sensor data and app and infrastructure metrics, if we're being honest, I think our user base is well aware that the way we were architected was much more towards those backwards-looking, historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves, what can we do to better handle those queries from a performance and a time-to-response standpoint, and can we get that to the point where the result sets are coming back so quickly from the time of query that we can limit that window down to minutes and then seconds? 
>>And now with this new engine, we're really starting to talk about a query window that could be returning results in, you know, milliseconds of time since the data hit the ingest queue. And that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying yes to the customer on all of the real time queries, the multiple language query support, you know, it was hard, but we're now at a spot where we can start introducing that to a limited number of customers, strategic customers and strategic availability zones to start, but, you know, everybody over time. 
>>So you're basically going from what happened, and you can still do that obviously, to what's happening now, in the moment? 
>>Yeah, yeah. I mean, if you think about time, it's always sort of past, right? I mean, in the moment right now, whether you're talking about a millisecond ago or a minute ago, you know, that's pretty much right now, I think, for most people, especially in these use cases where you have other components of latency induced by the underlying data collection, the architecture, the infrastructure, the devices, and, you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. 
>>I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. 
>>Yeah, I mean, operational real time is different, you know, and that's one of the things that really triggered us to know that we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from aerospace and defense, we've got companies monitoring satellites, we've got tons of industrial users using us as a process historian on the plant floor, you know, and if we can satisfy their demands for a real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their historians and databases. 
>>Is this available, these innovations, to InfluxDB Cloud customers only? Who can access this capability? 
>>Yeah. I mean, commercially and today, yes.
You know, I think we want to emphasize that, for now, our goal is to get our latest and greatest and our best to everybody over time, of course. You know, one of the things we had to do here was double down on our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub and, you know, can inspect it and even can try to implement or execute some of it themselves in their own infrastructure. You know, we're committed to bringing our latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, how the system itself is performing. 
>>And so just, you know, being careful, maybe a little cautious in terms of how big we go with this right away. It both limits, you know, the risk of any issues that can come with new software rollouts, and we haven't seen anything so far, but it also gives us the opportunity to have meaningful conversations with a small group of users who are using the products. Once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. 
>>Yeah, that makes a lot of sense. And you can do some experimentation, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? 
>>Well, I mean, I think foundationally we built the new core on Rust. You know, this is a newly very popular systems language; it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well if it does find error conditions. I mean, we've loved working with Go, and, you know, a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine, you know, for that power, performance, and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our time-structured merge trees, this is a big break from that, you know, Arrow on the in-memory side and then Parquet on the on-disk side. It allows us to present, you know, a unified set of APIs for those really fast real time inquiries that we talked about, as well as for very large historical sort of bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem popping up around Parquet in terms of the machine learning community. You know, and getting that all to work, we had to glue it together with Arrow Flight. That's sort of what we're using as our RPC component. You know, it handles the orchestration and the transportation of the columnar data. Now we're moving to a true columnar database model for this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization and deserialization, and, you know, to that again, blurring that line between real time and historical data. It's, you know, highly optimized for both streaming micro-batch and then batches, but true streaming as well.
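For readers who want to see what "Arrow on the in-memory side and Parquet on the on-disk side" looks like in practice, here is a short, hedged Rust sketch using the open source arrow and parquet crates, not IOx's internal code paths; the schema, values, and file name are invented for illustration:

```rust
use std::fs::File;
use std::sync::Arc;

use arrow::array::{ArrayRef, Float64Array, StringArray, TimestampNanosecondArray};
use arrow::datatypes::{DataType, Field, Schema, TimeUnit};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // In-memory side: an Arrow record batch, i.e. a bundle of typed columns.
    let schema = Arc::new(Schema::new(vec![
        Field::new("sensor", DataType::Utf8, false),
        Field::new("temp_c", DataType::Float64, false),
        Field::new("time", DataType::Timestamp(TimeUnit::Nanosecond, None), false),
    ]));
    let columns: Vec<ArrayRef> = vec![
        Arc::new(StringArray::from(vec!["room", "room", "stove"])),
        Arc::new(Float64Array::from(vec![21.5, 21.5, 180.0])),
        Arc::new(TimestampNanosecondArray::from(vec![1, 2, 3])),
    ];
    let batch = RecordBatch::try_new(schema.clone(), columns)?;

    // On-disk side: persist the same columns as a Parquet file.
    let file = File::create("temps.parquet")?;
    let mut writer = ArrowWriter::try_new(file, schema, None)?;
    writer.write(&batch)?;
    writer.close()?;
    Ok(())
}
```

Arrow Flight, the RPC layer Brian mentions, moves these same record batches over the network in their columnar form rather than re-serializing them row by row.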
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is, you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like, take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and, you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because, you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally, I would just say please, like watch in ice in Tim's sessions, Like these are two of our best and brightest. They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time, really hot area. As Brian said in a moment, I'll be right back with Anna East Dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't want to miss this.

Published Date : Nov 8 2022

SUMMARY :

Dave Valante opens theCUBE's program on evolving InfluxDB into a smart data platform and talks with Brian Gilmore, director of IoT and emerging technologies at Influx Data, about why InfluxDB is getting a new engine: staying ahead of customer demands like SQL query support, the journey from on-premises software to the multi-tenant InfluxDB Cloud, and blurring the line between historical and real-time queries. Brian outlines the architectural foundations, a core written in Rust with Apache Arrow in memory, Parquet on disk, and Arrow Flight for transport, and points viewers to the early access and beta programs.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Brian GilmorePERSON

0.99+

Tim YokumPERSON

0.99+

DavePERSON

0.99+

Dave ValantePERSON

0.99+

BrianPERSON

0.99+

TimPERSON

0.99+

60,000 peopleQUANTITY

0.99+

InfluxORGANIZATION

0.99+

todayDATE

0.99+

BryanPERSON

0.99+

twoQUANTITY

0.99+

twiceQUANTITY

0.99+

bothQUANTITY

0.99+

firstQUANTITY

0.99+

three years agoDATE

0.99+

Influx DBTITLE

0.99+

Influx DataORGANIZATION

0.99+

tomorrowDATE

0.98+

ApacheORGANIZATION

0.98+

Anna East Dos GeorgioPERSON

0.98+

IOTORGANIZATION

0.97+

oneQUANTITY

0.97+

In Flux DataORGANIZATION

0.96+

InfluxTITLE

0.95+

The CubeORGANIZATION

0.95+

tonsQUANTITY

0.95+

CubeORGANIZATION

0.94+

RustTITLE

0.93+

both enterprisesQUANTITY

0.92+

iot TTITLE

0.91+

secondQUANTITY

0.89+

GoTITLE

0.88+

two thumbsQUANTITY

0.87+

Anna EastPERSON

0.87+

ParqueTITLE

0.85+

a minute agoDATE

0.84+

Influx StateORGANIZATION

0.83+

Dos GeorgioORGANIZATION

0.8+

influx dataORGANIZATION

0.8+

Apache ArrowORGANIZATION

0.76+

GitHubORGANIZATION

0.75+

BryanLOCATION

0.74+

phase oneQUANTITY

0.71+

past MayDATE

0.69+

GoORGANIZATION

0.64+

number twoQUANTITY

0.64+

millisecond agoDATE

0.61+

InfluxDBTITLE

0.6+

TimeTITLE

0.55+

industrialQUANTITY

0.54+

phase twoQUANTITY

0.54+

ParqueCOMMERCIAL_ITEM

0.53+

coupleQUANTITY

0.5+

timeTITLE

0.5+

thingsQUANTITY

0.49+

TSIORGANIZATION

0.4+

ArrowTITLE

0.38+

PARQUEOTHER

0.3+

Evolving InfluxDB into the Smart Data Platform


 

>>Time series data is everywhere. The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data. Multiple layers of redundancy ensure you don't lose any data, and access controls ensure that only the people who should see your data can see it. 
>>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud.
Also broader ecosystem compatibility where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like sql, Python, and maybe even pandas in the future. >>Okay, so lot there. Now we talked to Brian about how you're using Rust and which is not a new programming language and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weights behind it. It's really, the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with rust, but why rust as an alternative to say c plus plus for example? >>Sure, that's a great question. So Russ was chosen because of his exceptional performance and reliability. So while Russ is synt tactically similar to c plus plus and it has similar performance, it also compiles to a native code like c plus plus. But unlike c plus plus, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And rust achieves this memory safety due to its like innovative type system. Additionally, it doesn't allow for dangling pointers. And dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like c plus plus. So Russ like helps meet that requirement of having no limits on ality, for example, because it's, we're also using the Russ implementation of Apache Arrow and this control over memory and also Russ Russ's packaging system called crates IO offers everything that you need out of the box to have features like AY and a weight to fix race conditions, to protection against buffering overflows and to ensure thread safe async cashing structures as well. So essentially it's just like has all the control, all the fine grain control, you need to take advantage of memory and all your resources as well as possible so that you can handle those really, really high ity use cases. >>Yeah, and the more I learn about the, the new engine and, and the platform IOCs et cetera, you know, you, you see things like, you know, the old days not even to even today you do a lot of garbage collection in these, in these systems and there's an inverse, you know, impact relative to performance. So it looks like you really, you know, the community is modernizing the platform, but I wanna talk about Apache Arrow for a moment. It it's designed to address the constraints that are associated with analyzing large data sets. We, we know that, but please explain why, what, what is Arrow and and what does it bring to Influx db? >>Sure, yeah. So Arrow is a, a framework for defining in memory calmer data. And so much of the efficiency and performance of IOx comes from taking advantage of calmer data structures. And I will, if you don't mind, take a moment to kind of of illustrate why column or data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, timestamp value, maybe some other tag values that describe what room and what house, et cetera we're getting this data from. 
And so you can picture this table where we have like two rows with the two temperature values for both our room and the stove. Well usually our room temperature is regulated so those values don't change very often. >>So when you have calm oriented st calm oriented storage, essentially you take each row, each column and group it together. And so if that's the case and you're just taking temperature values from the room and a lot of those temperature values are the same, then you'll, you might be able to imagine how equal values will then enable each other and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables for faster scan rates. So if you wanna define like the men and max value of the temperature in the room across a thousand different points, you only have to get those a thousand different points in order to answer that question and you have those immediately available to you. But let's contrast this with a row oriented storage solution instead so that we can understand better the benefits of calmer oriented storage. >>So if you had a row oriented storage, you'd first have to look at every field like the temperature in, in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is. And every timestamp you'd then have to pluck out that one temperature value that you want at that one time stamp and do that for every single row. So you're scanning across a ton more data and that's why Rowe Oriented doesn't provide the same efficiency as calmer and Apache Arrow is in memory calmer data, commoner data fit framework. So that's where a lot of the advantages come >>From. Okay. So you basically described like a traditional database, a row approach, but I've seen like a lot of traditional database say, okay, now we've got, we can handle colo format versus what you're talking about is really, you know, kind of native i, is it not as effective? Is the, is the foreman not as effective because it's largely a, a bolt on? Can you, can you like elucidate on that front? >>Yeah, it's, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are, that's pretty much the main reasons why, why RO row oriented storage isn't as efficient as calm, calmer oriented storage. Yeah. >>Got it. So let's talk about Arrow Data Fusion. What is data fusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework and it uses Arrow as it's in memory format. So the way that it helps in influx DB IOCs is that okay, it's great if you can write unlimited amount of cardinality into influx Cbis, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So Data fusion helps enable the, the query process and transformation of that data. It also has a PANDAS API so that you could take advantage of PANDAS data frames as well and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Par K in the platform cause we heard a lot about Par K in the middle of the last decade cuz as a storage format to improve on Hadoop column stores. What are you doing with Parque and why is it important? >>Sure. So parque is the column oriented durable file format. 
>>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps in InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB IOx, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a Pandas API, so that you can take advantage of Pandas data frames and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Parquet in the platform. We heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet, and why is it important? >>Sure. So Parquet is the column-oriented, durable file format. It's important because it enables bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so those are essentially the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So Influx Data has contributed a lot of different things to the Apache ecosystem. For example, they contribute an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you perpetuate that cycle of improvement, and the more we invest in our own project as well. So it's that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You've got that virtuous cycle going, what people call the flywheel. Give us your last thoughts, and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage you, if you are interested in learning more about the technologies Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just want to learn more, to go to the monthly tech talks and community office hours. They are held every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the #influxdb_iox channel specifically to learn how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I want to answer your questions. So if there's a particular technology or stack that you want to dive deeper into, and you want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome.
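As a rough illustration of the Parquet points above, the hedged sketch below writes the same invented sensor data to CSV and to Parquet with pandas and compares file sizes. The file names and data are made up; the size ratio you see depends on the data and the compression codec rather than matching the 16x figure exactly, and writing Parquet from pandas assumes a Parquet engine such as pyarrow is installed.

import os
import numpy as np
import pandas as pd

# Invented sensor data purely for illustration.
df = pd.DataFrame({
    "time": pd.date_range("2022-01-01", periods=100_000, freq="s"),
    "room": ["kitchen"] * 100_000,
    "temp_c": np.random.default_rng(0).normal(21.5, 0.1, 100_000),
})

df.to_csv("temps.csv", index=False)
df.to_parquet("temps.parquet")   # columnar, compressed, durable file format

print(os.path.getsize("temps.csv"), os.path.getsize("temps.parquet"))
# Parquet is typically several times smaller than the equivalent CSV.

round_trip = pd.read_parquet("temps.parquet")   # straight back into pandas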
Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for Influx Data, and we're going to talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't want to miss this. >>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and InfluxDB has good support. My name's Alex Nauda. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to it. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general-purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we've found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed; it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to host off the cloud or in a private cloud if that's preferred by a customer. Influx Data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have Influx Data by our side.
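Alex describes an SLO as essentially a bundle of time series. As a generic, hedged illustration of that idea (not Nobl9's actual implementation or data model), the sketch below derives an availability SLI and error-budget consumption from an invented per-second success/failure series using pandas.

import numpy as np
import pandas as pd

# Invented request outcomes: 1 = success, 0 = failure, one sample per second.
rng = np.random.default_rng(1)
idx = pd.date_range("2022-01-01", periods=3600, freq="s")
ok = pd.Series(rng.random(3600) > 0.002, index=idx).astype(int)

objective = 0.999                          # a 99.9% availability SLO
sli = ok.rolling("30min").mean()           # rolling success ratio: an SLI time series
error_budget_used = (1 - sli) / (1 - objective)

print(sli.iloc[-1], error_budget_used.iloc[-1])

Both the SLI and the error-budget series are themselves time series derived from the raw data, which is why a time series database is a natural place to keep them.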
>>Okay, we're back with Tim Yoakum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software on theCUBE for more than a decade, and we've watched the innovation from the big data ecosystem. The cloud has been built out on open source: mobile, social platforms, key databases, and of course InfluxDB, and Influx Data has been a big consumer of and contributor to open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, at Influx we really thrive at the intersection of commercial services and open source software. OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants, and like you mentioned, even better, we contribute a lot back to the projects that we use, as well as to our own product, InfluxDB. >>You know, but I've got to ask you, Tim, because one of the challenges that we've seen, in particular in the heyday of Hadoop, is that the innovations come so fast and furious, and as a software company you've got to place bets, you've got to commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge? >>Oh, it moves fast. Yeah, but that's a benefit, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. What we tend to do is fail fast and fail often. We try a lot of things. You know, you look at Kubernetes, for example; that ecosystem is driven by thousands of intelligent developers, engineers, and builders who are adding value every day. So we have to really keep up with that. And as the stack changes, we try different technologies, we try different methods, and at the end of the day we come up with a better platform as a result of the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. The data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts, with the most significant adoption and velocity, particularly along with cloud. Kubernetes is just, you know, still up and to the right consistently, even with the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, and Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever. Is that correct? >>Yeah, so we've basically built more or less platform engineering; this is the new hot phrase. Kubernetes has made a lot of things easy for us, because we've built a platform that our developers can lean on, and they only have to learn one way of deploying and managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud.
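Tim's point is that one code path manages the same workloads across AWS, Azure, and Google Cloud. As a hedged sketch of that idea (not Influx Data's actual tooling), the snippet below uses the official Kubernetes Python client with hypothetical kubeconfig context names, one per provider, and walks each cluster through an identical API call.

from kubernetes import client, config

# Hypothetical kubeconfig context names, one per cloud provider.
CONTEXTS = ["aws-us-east-1", "azure-westeurope", "gcp-us-central1"]

def deployments_per_cluster(namespace: str = "default") -> dict:
    """List deployments in each cluster through one identical code path."""
    results = {}
    for ctx in CONTEXTS:
        api_client = config.new_client_from_config(context=ctx)
        apps = client.AppsV1Api(api_client=api_client)
        items = apps.list_namespaced_deployment(namespace).items
        results[ctx] = [d.metadata.name for d in items]
    return results

if __name__ == "__main__":
    print(deployments_per_cluster())

Because the Kubernetes API is the same everywhere, the per-cloud differences collapse into configuration, which is the consistency Tim is describing.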
>>Yeah, and I know I'm taking a little bit of a tangent, but I'll call it a PaaS layer, if I can use that term. Are there specific attributes to InfluxDB there, or is it kind of just a generally off-the-shelf PaaS? Is there any purpose-built capability that is value-add, or is it pretty much generic? >>So we really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services for, for instance Postgres databases for metadata; perhaps we'll get that off of our plate and let someone else run it. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we can manage as an SRE group, as an ops team, with very few people, really, and we can stamp out clusters across multiple regions in no time. >>So sometimes you build, sometimes you buy. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, so what we're doing is what everybody else does: we're looking for trade-offs that make sense. You know, we really want to protect our customers' data. So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course for customers you don't even see it. But we don't want to try to reinvent the wheel. Like I mentioned with SQL data stores for metadata, let's build on top of what these three large cloud providers have already perfected, and we can then focus on our platform engineering and have our developers focus on the Influx Data software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're going to get out of all these innovations we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really going to be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored, so there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high-cardinality time series platform. We manage it, and really, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time, and it's that continuous deployment that allows us to keep testing things in flight, rolling out changes, new features, better ways of doing deployments, safer ways of doing deployments. All of that happens behind the scenes. And like we mentioned earlier, Kubernetes allows us to get that done; we couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so, in the end, we want you to focus on getting actual insights from your data instead of running infrastructure; you know, let us do that for you. >>And that makes sense, but are the innovations we're talking about in the evolution of InfluxDB a natural evolution for existing customers? I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing. IoT, and industrial IoT especially: people want to just shove tons of data out there and be able to do queries immediately, and they don't want to manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines and downsample that data, then send the rest of that data off to Influx Cloud, where the heavy processing takes place.
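As a hedged, generic sketch of the edge downsampling pattern Tim just described (the production-line field names, rates, and aggregation windows below are invented, and a real deployment might do this with InfluxDB tasks rather than plain Python), here is the basic shape of reducing high-rate readings at the edge before shipping them upstream.

import numpy as np
import pandas as pd

# Invented high-rate readings from one production line: 10 Hz for one hour.
idx = pd.date_range("2022-01-01", periods=36_000, freq="100ms")
raw = pd.DataFrame(
    {"vibration": np.random.default_rng(2).normal(0.5, 0.05, 36_000)},
    index=idx,
)

# Downsample at the edge: keep 1-minute aggregates, ship only those to the cloud.
downsampled = raw.resample("1min").agg(["mean", "min", "max"])
print(len(raw), "->", len(downsampled), "rows to send upstream")   # 36,000 -> 60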
So really, us being in all the different clouds, iterating on that, and being in all sorts of different regions allows people to get out of the business of trying to manage that big data and have us take care of it. And of course, as we change the platform, end users benefit from that immediately. >>And so, obviously you're taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. That's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You know, you look at things like a software bill of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that as we use new tools. That's just part of our job, to make sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us who are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into IoT and the operational technologies, the engineers running that infrastructure, you know, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore; everything's connected now, right? And so you've got to have a partner that, again, takes away that heavy lifting and the R&D so you can focus on some of the other activities. Right? Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective, I see it as a two-lane approach with Influx, with any time series data. You've got a lot of stuff that you're going to run on-prem; what you mentioned, air gapping, sure, there's plenty of need for that. But at the end of the day, people who don't want to run big data centers, people who want to trust their data to a company that's got a full platform set up for them that they can build on, will send that data over to the cloud. The cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming on the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class.
>>Okay, so we heard today from three experts on time series and data, and how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. We learned that key open source components like Apache Arrow, the Rust programming language, DataFusion, and Parquet are being leveraged to support real-time data analytics at scale. We also learned about the contributions to, and importance of, open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Now remember, these sessions are all available on demand; you can go to thecube.net to find them. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech, and you should also check out influxdata.com. There you can learn about the company's products, and you'll find developer resources like free courses. You can join the developer community and work with your peers to learn and solve problems, and there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Nov 2 2022

Evolving InfluxDB into the Smart Data Platform Full Episode


 

>>This past May, theCUBE, in collaboration with Influx Data, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database was, for many use cases, a superior alternative to general-purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. When we introduced the concept to the community, we talked about how, in theory, those time slices could be taken, you know, every hour, every minute, every second, down to the millisecond, and how the world was moving toward real-time or near-real-time data analysis to support physical infrastructure like sensors, other devices, and IoT equipment. Time series databases have had to evolve to efficiently support real-time data in emerging use cases in IoT and beyond. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello, and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and produced by theCUBE. My name is Dave Vellante and I'll be your host today. Now, in this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands on data, particularly around real-time data analytics use cases. First we're going to hear from Brian Gilmore, who is the director of IoT and emerging technologies at Influx Data, and we're going to talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and by specific tools. In this program you're going to hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which is powering a new engine for InfluxDB. >>Now, these innovations evolve the idea of time series analysis by dramatically increasing the granularity of time series data, compressing the historical time slices, if you will, from, for example, minutes down to milliseconds, and at the same time enabling real-time analytics with an architecture that can process data much faster and much more efficiently. After Brian, we're going to hear from Anais Dotis-Georgiou, who is a developer advocate at Influx Data, and we're going to get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're going to close the program with Tim Yoakum, the director of engineering at Influx Data, and he's going to explain how the InfluxDB community actually evolved the data engine in mid-flight, and which decisions went into the innovations that are coming to market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at Influx Data. Brian, welcome to the program. Thanks for coming on. >>Thanks, Dave. Great to be here. I appreciate the time. >>Hey, explain why InfluxDB, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think for us it's been about staying ahead of the market.
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our, our user base is well aware that the way we were architected was much more towards those sort of like backwards looking historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves like, what can we do to like better handle those queries from a performance and a, and a, you know, a time to response on the queries, and can we get that to the point where the results sets are coming back so quickly from the time of query that we can like limit that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could be like returning results in, in, you know, milliseconds of time since it hit the, the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the, the real time queries, the, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened to in, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the, by the underlying data collection, the architecture, the infrastructure, the, you know, the, the devices and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought, you know, real, I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, it's, it's, I mean it is operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a processes storing on the plant floor, you know, and, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their, their historians and databases. >>I, is this available, these innovations to influx DB cloud customers only who can access this capability? >>Yeah. I mean commercially and today, yes. 
You know, I think we want to emphasize that's a, for now our goal is to get our latest and greatest and our best to everybody over time. Of course. You know, one of the things we had to do here was like we double down on sort of our, our commitment to open source and availability. So like anybody today can take a look at the, the libraries in on our GitHub and, you know, can ex inspect it and even can try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we are, we're committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of, of, of how big we go with this right away, just sort of both limits, you know, the risk of, of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have like meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation and, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What, what should we know there? >>Well, I mean, I think foundationally we built the, the new core on Rust. You know, this is a new very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to that us being able to like deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well. And if it does find error conditions, I mean we, we've loved working with Go and, you know, a lot of our libraries will continue to, to be sort of implemented in Go, but you know, when it came to this particular new engine, you know, that power performance and stability rust was critical. On top of that, like, we've also integrated Apache Arrow and Apache Parque for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our, our time series merged Trees, this is a big break from that, you know, arrow on the sort of in MI side and then Par K in the on disk side. >>It, it allows us to, to present, you know, a unified set of APIs for those really fast real time inquiries that we talked about, as well as for very large, you know, historical sort of bulk data archives in that PARQUE format, which is also cool because there's an entire ecosystem sort of popping up around Parque in terms of the machine learning community, you know, and getting that all to work, we had to glue it together with aero flight. That's sort of what we're using as our, our RPC component. You know, it handles the orchestration and the, the transportation of the Coer data. 
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally I would just say please, like watch in ICE in Tim's sessions, like these are two of our best and brightest, They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time really hot area. As Brian said in a moment, I'll be right back with Anna East dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't wanna miss this >>Time series Data is everywhere. 
>>Time series data is everywhere. The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning, because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data: multiple layers of redundancy ensure you don't lose any data, access controls ensure that only the people who should see your data can see it, and encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on-prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data. Anais Dotis-Georgiou is here. She's a developer advocate for Influx Data, and we're going to dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. My understanding is that it leverages in-memory, of course, for speed; it's a columnar store, so it gives you compression efficiency; it's going to give you faster query speeds; and you store files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So these are some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me. The first is that it aims to have no limits on cardinality and also to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also want to have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to have bulk data export and import, which is super useful, and also broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future.
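To ground the "any kind of event data, whether that's a tag or a field" point, here is a hedged sketch of writing and querying a point with the official influxdb-client Python library. The URL, token, org, bucket, measurement, and tag names are hypothetical placeholders; this shows the general client API shape rather than anything specific to the IOx engine.

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Hypothetical connection details; substitute your own URL, token, org, and bucket.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("temperature")
    .tag("room", "kitchen")
    .tag("stove_model", "X-200")     # high-cardinality tags are the kind of thing IOx targets
    .field("room_temp_c", 21.5)
    .field("stove_temp_c", 180.0)
)
write_api.write(bucket="sensors", record=point)

# Query it back with a Flux string (bucket name is again a placeholder).
tables = client.query_api().query('from(bucket:"sensors") |> range(start: -1h)')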
>>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it, and the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust. But why Rust, as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, since it also compiles to native code like C++ does, unlike C++ it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main class of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and that control over memory. And Rust's packaging system, crates.io, offers everything you need out of the box: features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high-cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, in the old days you'd do a lot of garbage collection in these systems, and there's an inverse impact on performance. So it looks like the community is really modernizing the platform. But I want to talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why. What is Arrow, and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from.
And so you can picture this table where we have like two rows with the two temperature values for both our room and the stove. Well usually our room temperature is regulated so those values don't change very often. >>So when you have calm oriented st calm oriented storage, essentially you take each row, each column and group it together. And so if that's the case and you're just taking temperature values from the room and a lot of those temperature values are the same, then you'll, you might be able to imagine how equal values will then enable each other and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables for faster scan rates. So if you wanna define like the men and max value of the temperature in the room across a thousand different points, you only have to get those a thousand different points in order to answer that question and you have those immediately available to you. But let's contrast this with a row oriented storage solution instead so that we can understand better the benefits of calmer oriented storage. >>So if you had a row oriented storage, you'd first have to look at every field like the temperature in, in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is. And every timestamp you'd then have to pluck out that one temperature value that you want at that one time stamp and do that for every single row. So you're scanning across a ton more data and that's why Rowe Oriented doesn't provide the same efficiency as calmer and Apache Arrow is in memory calmer data, commoner data fit framework. So that's where a lot of the advantages come >>From. Okay. So you basically described like a traditional database, a row approach, but I've seen like a lot of traditional database say, okay, now we've got, we can handle colo format versus what you're talking about is really, you know, kind of native i, is it not as effective? Is the, is the foreman not as effective because it's largely a, a bolt on? Can you, can you like elucidate on that front? >>Yeah, it's, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are, that's pretty much the main reasons why, why RO row oriented storage isn't as efficient as calm, calmer oriented storage. Yeah. >>Got it. So let's talk about Arrow Data Fusion. What is data fusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework and it uses Arrow as it's in memory format. So the way that it helps in influx DB IOCs is that okay, it's great if you can write unlimited amount of cardinality into influx Cbis, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So Data fusion helps enable the, the query process and transformation of that data. It also has a PANDAS API so that you could take advantage of PANDAS data frames as well and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Par K in the platform cause we heard a lot about Par K in the middle of the last decade cuz as a storage format to improve on Hadoop column stores. What are you doing with Parque and why is it important? >>Sure. So parque is the column oriented durable file format. 
So it's important because it'll enable bulk import, bulk export, it has compatibility with Python and Pandas, so it supports a broader ecosystem. Par K files also take very little disc disc space and they're faster to scan because again, they're column oriented in particular, I think PAR K files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the, the benefits of par k. >>Got it. Very popular. So and he's, what exactly is influx data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So Influx DB first has contributed a lot of different, different things to the Apache ecosystem. For example, they contribute an implementation of Apache Arrow and go and that will support clearing with flux. Also, there has been a quite a few contributions to data fusion for things like memory optimization and supportive additional SQL features like support for timestamp, arithmetic and support for exist clauses and support for memory control. So yeah, Influx has contributed a a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects and then the long term strategy here is that the more you contribute and build those up, then the more you will perpetuate that cycle of improvement and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You got that virtuous cycle going, the people call the flywheel. Give us your last thoughts and kind of summarize, you know, where what, what the big takeaways are from your perspective. >>So I think the big takeaway is that influx data is doing a lot of really exciting things with Influx DB IOx and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOCs, the challenges associated with it and all of the hard work questions and you just wanna learn more, then I would encourage you to go to the monthly Tech talks and community office hours and they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forums and a community Slack channel look for the influx DDB unders IAC channel specifically to learn more about how to join those office hours and those monthly tech tech talks as well as ask any questions they have about iacs, what to expect and what you'd like to learn more about. I as a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how INFLUX DB leverages it to build IOCs, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and, and you guys super responsive, so really appreciate that. All right, thank you so much Anise for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there and in a moment I'll be back with Tim Yoakum, he's the director of engineering for Influx Data and we're gonna talk about how you update a SAS engine while the plane is flying at 30,000 feet. You don't wanna miss this. 
>>I'm really glad that we went with InfluxDB Cloud for our hosting because it has saved us a ton of time. It's helped us move faster, it's saved us money. And also InfluxDB has good support. My name's Alex Nada. I am CTO at Noble nine. Noble Nine is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an slo, the product we're providing to our customers as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language and as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed, it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve it. As we've continued to grow, I'm really happy we have influx data by our side. >>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in the cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been being built out on open source, mobile, social platforms, key databases, and of course influx DB and influx data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, influx really, we thrive at the intersection of commercial services and open, so open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service from our core storage engine technologies to web services temping engines. Our, our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants and like you've mentioned, even better, we contribute a lot back to the projects that we use as well as our own product influx db. >>You know, but I gotta ask you, Tim, because one of the challenge that that we've seen in particular, you saw this in the heyday of Hadoop, the, the innovations come so fast and furious and as a software company you gotta place bets, you gotta, you know, commit people and sometimes those bets can be risky and not pay off well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that, that's a benefit though because it, the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we, what we tend to do is, is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts and seen the most significant adoption and velocity, particularly, you know, along with cloud. But really Kubernetes is just, you know, still up and to the right consistently, even with, you know, the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, I presume it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever. Is that correct? >>Yeah, so we've basically built more or less platform engineering. This is the new hot phrase, you know. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying their application, managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but I'll call it a PaaS layer if I can use that term. Are there specific attributes to InfluxDB, or is it kind of just generally off-the-shelf PaaS? You know, is there any purpose-built capability there that is value add, or is it pretty much generic? >>So we look at things through a build versus buy lens. Some things we want to leverage cloud provider services, for instance, Postgres databases for metadata; perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we can, as an SRE group, as an ops team, manage with very few people really, and we can stamp out clusters across multiple regions in no time.
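As a minimal sketch of what "manage that in code" and "stamp out clusters" can look like, here is an illustration using the official Kubernetes Python client: the same Deployment object is pushed to clusters on three clouds through different kubeconfig contexts. This is not InfluxData's actual tooling; the context names, namespace, and image are placeholders.

```python
# Apply one Deployment definition to several clusters that live on different
# clouds, selected by kubeconfig context. All names below are invented.
from kubernetes import client, config

CONTEXTS = ["aws-prod", "azure-prod", "gcp-prod"]  # one context per cloud

def deployment(name: str, image: str, replicas: int) -> client.V1Deployment:
    container = client.V1Container(
        name=name, image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": name}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    )
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name=name), spec=spec,
    )

for ctx in CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)  # same config, different cluster
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(
        namespace="storage",
        body=deployment("ingest-service", "example.com/ingest:1.2.3", replicas=3),
    )
    print(f"applied to {ctx}")
```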
>>So sometimes you build, sometimes you buy it. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, so what we're doing is, like everybody else, we're looking for trade-offs that make sense. You know, we really want to protect our customers' data. So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course for customers you don't even see that, but we don't want to try to reinvent the wheel. Like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what these three large cloud providers have already perfected. And we can then focus on our platform engineering, and we can have our developers focus on the InfluxData software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're gonna get out of all these innovations that we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored, so there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high cardinality time series platform. We manage it, and really, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time. And it's that continuous deployment that allows us to keep testing things in flight, rolling out changes, new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure; you know, let us do that for you. >>And that makes sense, but are the innovations that we're talking about in the evolution of InfluxDB a natural evolution for existing customers? I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing. IoT, and industrial IoT especially; people want to just shove tons of data out there and be able to do queries immediately, and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines, downsample that data, and send the rest of that data off to Influx Cloud where the heavy processing takes place.
So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to really get out of the business of trying to manage that big data, and have us take care of that. And of course, as we change the platform, end users benefit from that immediately. >>And so obviously you're taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure and that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. You know, that's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit or at rest, is always kept secure and is only viewable by you. You know, you look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's just part of our jobs, to make sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into, you know, IoT and the operations technologies, the engineers running that infrastructure, you know, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's >>That >>Connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting so your R&D can focus on some of the other activities. Right. Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective I see it as a two lane approach with Influx, with any time series data. You know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gapping. Sure, there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want to trust their data to a company that's got a full platform set up for them that they can build on, will send that data over to the cloud; the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? >>Check out InfluxDB University, >>Where you can find our entire catalog of free training that will help you make the most of your time series data. >>Get started for free at influxdbu.com. >>We'll see you in class.
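Before the wrap-up, here is a hedged sketch of the edge-to-cloud pattern Tim describes: raw data lands in an open source InfluxDB at the edge, gets downsampled, and only the aggregates go to InfluxDB Cloud. The URLs, tokens, buckets, and the "line" tag are placeholders, and a Flux task or Telegraf could do the same job without any custom code.

```python
# Query raw readings from a local edge InfluxDB, downsample to 1 minute means,
# and forward the aggregates to a cloud instance. All identifiers are invented.
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

edge = InfluxDBClient(url="http://localhost:8086", token="EDGE_TOKEN", org="factory")
cloud = InfluxDBClient(url="https://us-east-1-1.aws.cloud2.influxdata.com",
                       token="CLOUD_TOKEN", org="hq")

flux = '''
from(bucket: "raw-line-data")
  |> range(start: -10m)
  |> filter(fn: (r) => r._measurement == "vibration" and r._field == "amplitude")
  |> aggregateWindow(every: 1m, fn: mean)
'''
df = edge.query_api().query_data_frame(flux)  # assumes a single result DataFrame

# Keep only what the cloud side needs: a time index, the mean, and the line tag.
df = df.set_index("_time")[["_value", "line"]].rename(columns={"_value": "amplitude_mean"})

cloud.write_api(write_options=SYNCHRONOUS).write(
    bucket="downsampled",
    record=df,
    data_frame_measurement_name="vibration_1m",
    data_frame_tag_columns=["line"],
)
edge.close()
cloud.close()
```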
>>Okay, so we heard today from three experts on time series and data, and how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming language, DataFusion, and Parquet are being leveraged to support real-time data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Now remember, these sessions are all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products. You'll find developer resources like free courses. You can join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB Into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Oct 28 2022


Amit Eyal Govrin, Kubiya.ai | Cube Conversation


 

(upbeat music) >> Hello everyone, welcome to this special Cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE in theCUBE Studios. We've got a special video here. We love when we have startups that are launching. It's an exclusive video of a hot startup that's launching. Got great reviews so far. You know, word on the street is, they got something different and unique. We're going to' dig into it. Amit Govrin who's the CEO and co-founder of Kubiya, which stands for Cube in Hebrew, and they're headquartered in Bay Area and in Tel Aviv. Amit, congratulations on the startup launch and thanks for coming in and talk to us in theCUBE >> Thank you, John, very nice to be here. >> So, first of all, a little, 'cause we love the Cube, 'cause theCUBE's kind of an open brand. We've never seen the Cube in Hebrew, so is that true? Kubiya is? >> Kubiya literally means cube. You know, clearly there's some additional meanings that we can discuss. Obviously we're also launching a KubCon, so there's a dual meaning to this event. >> KubCon, not to be confused with CubeCon. Which is an event we might have someday and compete. No, I'm only kidding, good stuff. I want to get into the startup because I'm intrigued by your story. One, you know, conversational AI's been around, been a category. We've seen chat bots be all the rage and you know, I kind of don't mind chat bots on some sites. I can interact with some, you know, form based knowledge graph, whatever, knowledge database and get basic stuff self served. So I can see that, but it never really scaled or took off. And now with Cloud Native kind of going to the next level, we're starting to see a lot more open source and a lot more automation, in what I call AI as code or you know, AI as a service, machine learning, developer focused action. I think you guys might have an answer there. So if you don't mind, could you take a minute to explain what you guys are doing, what's different about Kubiya, what's happening? >> Certainly. So thank you for that. Kubiya is what we would consider the first, or one of the first, advanced virtual assitants with a domain specific expertise in DevOps. So, we respect all of the DevOps concepts, GitOps, workflow automation, of those categories you've mentioned, but also the added value of the conversational AI. That's really one of the few elements that we can really bring to the table to extract what we call intent based operations. And we can get into what that means in a little bit. I'll save that maybe for the next question. >> So the market you're going after is kind of, it's, I love to hear starters when they, they don't have a Gartner Magic quadrant, they can fit nicely, it means they're onto something. What is the market you're going after? Because you're seeing a lot of developers driving a lot of the key successes in DevOps. DevOps has evolved to the point where, and DevSecOps, where developers are driving the change. And so having something that's developer focused is key. Are you guys targeting the developers, IT buyers, cloud architects? Who are you looking to serve with this new opportunity? >> So essentially self-service in the world of DevOps, the end user typically would be a developer, but not only, and obviously the operators, those are the folks that we're actually looking to help augment a lot of their efforts, a lot of the toil that they're experiencing in a day to day. So there's subcategories within that. 
We can talk about the different internal developer tools, or platforms, shared services platforms, service catalogs are tangential categories that this kind of comes on. But on top of that, we're adding the element of conversational AI. Which, as I mentioned, that's really the "got you". >> I think you're starting to see a lot of autonomous stuff going on, autonomous pen testing. There's a company out there doing I've seen autonomous AI. Automation is a big theme of it. And I got to ask, are you guys on the business side purely in the cloud? Are you born in the cloud, is it a cloud service? What's the product choice there? It's a service, right? >> Software is a service. We have the classic, Multi-Tenancy SAAS, but we also have a hybrid SAAS solution, which allows our customers to run workflows using remote runners, essentially hosted at their own location. >> So primary cloud, but you're agnostic on where they could consume, how they want to' consume the product. >> Technology agnostic. >> Okay, so that's cool. So let's get into the problem you're solving. So take me through, this will drive a lot of value here, when you guys did the company, what problems did you hone in on and what are you guys seeing as the core problem that you solve? >> So we, this is a unique, I don't know how unique, but this is a interesting proposition because I come from the business side, so call it the top down. I've been in enterprise sales, I've been in a CRO, VP sales hat. My co-founder comes from the bottom up, right? He ran DevOps teams and SRE teams in his previous company. That's actually what he did. So, we met each other halfway, essentially with me seeing a lot of these problems of self-service not being so self-service after all, platforms hitting walls with adoption. And he actually created his own self-service platform, within his last company, to address his own personal pains. So we essentially kind of met with both perspectives. >> So you're absolutely hardcore on self-service. >> We're enabling self-service. >> And that basically is what everybody wants. I mean, the developers want self-service. I mean, that's kind of like, you know, that's the nirvana. So take us through what you guys are offering, give us an example of use cases and who's buying your product, why, and take us through that whole piece. >> Do you mind if I take a step back and say why we believe self-service has somewhat failed or not gotten off. >> Yeah, absolutely. >> So look, this is essentially how we're looking at it. All the analysts and the industry insiders are talking about self-service platforms as being what's going to' remove the dependency of the operator in the loop the entire time, right? Because the operator, that scarce resource, it's hard to hire, hard to train, hard to retain those folks, Developers are obviously dependent on them for productivity. So the operators in this case could be a DevOps, could be a SecOps, it could be a platform engineer. It comes in different flavors. But the common denominator, somebody needs an access request, provisioning a new environment, you name it, right? They go to somebody, that person is operator. The operator typically has a few things on their plate. It's not just attending and babysitting platforms, but it's also innovating, spinning up, and scaling services. So they see this typically as kind of, we don't really want to be here, we're going to' go and do this because we're on call. We have to take it on a chin, if you may, for this. 
>> It's their child, they got to' do it. >> Right, but it's KTLOs, right, keep the lights on, this is maintenance of a platform. It's not what they're born and bred to do, which is innovate. That's essentially what we're seeing, we're seeing that a lot of these platforms, once they finally hit the point of maturity, they're rolled out to the team. People come to serve themselves in platform, and low and behold, it's not as self-service as it may seem. >> We've seen that certainly with Kubernetes adoption being, I won't say slow, it's been fast, but it's been good. But I think this is kind of the promise of what SRE was supposed to be. You know, do it once and then babysit in the sense of it's working and automated. Nothing's broken yet. Don't call me unless you need something, I see that. So the question, you're trying to make it easier then, you're trying to free up the talent. >> Talent to operate and have essentially a human, like in the loop, essentially augment that person and give the end users all of the answers they require, as if they're talking to a person. >> I mean it's basically, you're taking the virtual assistant concept, or chat bot, to a level of expertise where there's intelligence, jargon, experience into the workflows that's known. Not just talking to chat bot, get a support number to rebook a hotel room. >> We're converting operational workflows into conversations. >> Give me an example, take me through an example. >> Sure, let's take a simple example. I mean, not everyone provisions EC2's with two days (indistinct). But let's say you want to go and provision new EC2 instances, okay? If you wanted to do it, you could go and talk to the assistant and say, "I want to spin up a new server". If it was a human in the loop, they would ask you the following questions: what type of environment? what are we attributing this to? what type of instance? security groups, machine images, you name it. So, these are the questions that typically somebody needs to be armed with before they can go and provision themselves, serve themselves. Now the problem is users don't always have these questions. So imagine the following scenario. Somebody comes in, they're in Jira ticket queue, they finally, their turn is up and the next question they don't have the answer to. So now they have to go and tap on a friend, or they have to go essentially and get that answer. By the time they get back, they lost their turn in queue. And then that happens again. So, they lose a context, they lose essentially the momentum. And a simple access request, or a simple provision request, can easily become a couple days of ping pong back and forth. This won't happen with the virtual assistant. >> You know, I think, you know, and you mentioned chat bots, but also RPA is out there, you've seen a lot of that growth. One of the hard things, and you brought this up, I want to get your reaction to, is contextualizing the workflow. It might not be apparent, but the answer might be there, it disrupts the entire experience at that point. RPA and chat bots don't have that contextualization. Is that what you guys do differently? Is that the unique flavor here? Is that difference between current chat bots and RPA? >> The way we see it, I alluded to the intent based operations. Let me give a tangible experience. Even not from our own world, this will be easy. It's a bidirectional feedback loop 'cause that's actually what feeds the context and the intent. We all know Waze, right, in the world of navigation. 
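Picking up the provisioning example above: once the assistant has collected those answers, the last step is a single API call. This is a hedged, generic boto3 sketch of that step, not Kubiya's implementation; every identifier below is a placeholder the assistant would have gathered from the user.

```python
# Launch one EC2 instance from the answers a provisioning conversation
# would collect. All IDs, names, and the region are invented placeholders.
import boto3

answers = {
    "environment": "staging",              # what type of environment?
    "cost_center": "team-payments",        # what are we attributing this to?
    "instance_type": "t3.medium",          # what type of instance?
    "security_group_ids": ["sg-0123456789abcdef0"],
    "image_id": "ami-0123456789abcdef0",   # machine image
}

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId=answers["image_id"],
    InstanceType=answers["instance_type"],
    SecurityGroupIds=answers["security_group_ids"],
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "environment", "Value": answers["environment"]},
            {"Key": "cost-center", "Value": answers["cost_center"]},
        ],
    }],
)
print("launched", instances[0].id)
```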
They didn't bring navigation systems to the world. What they did is they took the concept of navigation systems that are typically satellite guided and said it's not just enough to drive down the 280, which typically have no traffic, right, and to come across traffic and say, oh, why didn't my satellite pick that up? So they said, have the end users, the end nodes, feed that direction back, that feedback, right. There has to be a bidirectional feedback loop that the end nodes help educate the system, make the system be better, more customized. And that's essentially what we're allowing the end users. So the maintenance of the system isn't entirely in the hands of the operators, right? 'Cause that's the part that they dread. And the maintenance of the system is democratized across all the users that they can teach the system, give input to the system, hone in the system in order to make it more of the DNA of the organization. >> You and I were talking before you came on this camera interview, you said playfully that the Siri for DevOps, which kind of implies, hey infrastructure, do something for me. You know, we all know Siri, so we get that. So that kind of illustrates kind of where the direction is. Explain why you say that, what does that mean? Is that like a NorthStar vision that you guys are approaching? You want to' have a state where everything's automated in it's conversational deployments, that kind of thing. And take us through why that Siri for DevOps is. >> I think it helps anchor people to what a virtual assistant is. Because when you hear virtual assistant, that can mean any one of various connotations. So the Siri is actually a conversational assistant, but it's not necessarily a virtual assistant. So what we're saying is we're anchoring people to that thought and saying, we're actually allowing it to be operational, turning complex operations into simple conversations. >> I mean basically they take the automate with voice Google search or a query, what's the score of the game? And, it also, and talking to the guy who invented Siri, I actually interviewed on theCUBE, it's a learning system. It actually learns as it gets more usage, it learns. How do you guys see that evolving in DevOps? There's a lot of jargon in DevOps, a lot of configurations, a lot of different use cases, a lot of new technologies. What's the secret sauce behind what you guys do? Is it the conversational AI, is it the machine learning, is it the data, is it the model? Take us through the secret sauce. >> In fact, it's all the above. And I don't think we're bringing any one element to the table that hasn't been explored before, hasn't been done. It's a recipe, right? You give two people the same ingredients, they can have complete different results in terms of what they come out with. We, because of our domain expertise in DevOps, because of our familiarity with developer workflows with operators, we know how to give a very well suited recipe. Five course meal, hopefully with Michelin stars as part of that. So a few things, maybe a few of the secret sauce element, conversational AI, the ability to essentially go and extract the intent of the user, so that if we're missing context, the system is smart enough to go and to get that feedback and to essentially feed itself into that model. >> Someone might say, hey, you know, conversational AI, that was yesterday's trend, it never happened. It was kind of weak, chat bots were lame. 
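As a toy illustration of the intent-plus-missing-context idea Amit describes just above, here is a deliberately tiny sketch. It is not Kubiya's implementation; the intents, slots, and prompts are invented, and a real system would use an NLU model rather than keyword matching.

```python
# Detect a rough intent, then ask for whatever required context is missing.
REQUIRED_SLOTS = {
    "provision_ec2": ["environment", "instance_type", "image_id", "security_group"],
    "get_access": ["resource", "duration"],
}

def detect_intent(text: str) -> str:
    # Stand-in for a real NLU model: keyword matching only.
    if "spin up" in text or "provision" in text:
        return "provision_ec2"
    if "access" in text:
        return "get_access"
    return "unknown"

def run_conversation(first_message: str) -> dict:
    intent = detect_intent(first_message)
    slots = {}
    for slot in REQUIRED_SLOTS.get(intent, []):
        # In a chat assistant this would be a message back to the user;
        # here we just prompt on stdin.
        slots[slot] = input(f"What {slot.replace('_', ' ')} should I use? ")
    return {"intent": intent, "slots": slots}

if __name__ == "__main__":
    print(run_conversation("Please spin up a new server for testing"))
```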
What's different now and with you guys, and the market, that makes a redo or a second shot at this, a second bite at the apple, as they say. What do you guys see? 'Cause you know, I would argue that it's, you know, it's still early, real early. >> Certainly. >> How do you guys view that? How would you handle that objection? >> It's a fair question. I wasn't around the first time around to tell you what didn't work. I'm not afraid to share that the feedback that we're getting is phenomenal. People understand that we're actually customizing the workflows, the intent based operations to really help hone in on the dark spots. We call it last mile, you know, bottlenecks. And that's really where we're helping. We're helping in a way tribalize internal knowledge that typically hasn't been documented because it's painful enough to where people care about it but not painful enough to where you're going to' go and sit down an entire day and document it. And that's essentially what the virtual assistant can do. It can go and get into those crevices and help document, and operationalize all of those toils. And into workflows. >> Yeah, I mean some will call it grunt work, or low level work. And I think the automation is interesting. I think we're seeing this in a lot of these high scale situations where the talented hard to hire person is hired to do, say, things that were hard to do, but now harder things are coming around the corner. So, you know, serverless is great and all this is good, but it doesn't make the complexity go away. As these inflection points continue to drive more scale, the complexity kind of grows, but at the same time so is the ability to abstract away the complexity. So you're starting to see the smart, hired guns move to higher, bigger problems. And the automation seems to take the low level kind of like capabilities or the toil, or the grunt work, or the low level tasks that, you know, you don't want a high salaried person doing. Or I mean it's not so much that they don't want to' do it, they'll take one for the team, as you said, or take it on the chin, but there's other things to work on. >> I want to add one more thing, 'cause this goes into essentially what you just said. Think about it's not the virtual system, what it gives you is not just the intent and that's one element of it, is the ability to carry your operations with you to the place where you're not breaking your workflows, you're actually comfortable operating. So the virtual assistant lives inside of a command line interface, it lives inside of chat like Slack, and Teams, and Mattermost, and so forth. It also lives within a low-code editor. So we're not forcing anyone to use uncomfortable language or operations if they're not comfortable with. It's almost like Siri, it travels in your mobile phone, it's on your laptop, it's with you everywhere. >> It makes total sense. And the reason why I like this, and I want to' get your reaction on this because we've done a lot of interviews with DevOps, we've met at every CubeCon since it started, and Kubernetes kind of highlights the value of the containers at the orchestration level. But what's really going on is the DevOps developers, and the CICD pipeline, with infrastructure's code, they're basically have a infrastructure configuration at their disposal all the time. And all the ops challenges have been around that, the repetitive mundane tasks that most people do. There's like six or seven main use cases in DevOps. So the guardrails just need to be set. 
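The assistant "living inside chat" comes down to registering handlers with each platform's bot framework. Here is a hedged, Slack-only sketch using the open source slack_bolt library; the tokens and trigger phrase are placeholders, and this is not Kubiya's code.

```python
# A minimal Slack bot hook: when someone mentions "spin up", start the
# provisioning conversation. Tokens come from the environment.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.message("spin up")
def start_provisioning(message, say):
    # In a real assistant this would kick off intent detection and slot
    # filling; here we just acknowledge and ask the first question.
    say(f"Hi <@{message['user']}>, sure. What type of environment is this for?")

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```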
So it sounds like you guys are going down the road of saying, hey here's the use cases you can bounce around these use cases all day long. And just keep doing your jobs cause they're bolting on infrastructure to every application. >> There's one more element to this that we haven't really touched on. It's not just workflows and use cases, but it's also knowledge, right? Tribal knowledge, like you asked me for an example. You can type or talk to the assistant and ask, "How much am I spending on AWS, on US East 1, on so and so customer environment last week?", and it will know how to give you that information. >> Can I ask, should I buy a reserve instances or not? Can I ask that question? 'Cause there's always good trade offs between buying the reserve instances. I mean that's kind of the thing that. >> This is where our ecosystem actually comes in handy because we're not necessarily going to' go down every single domain and try to be the experts in here. We can tap into the partnerships, API, we have full extensibility in API and the software development kit that goes into. >> It's interesting, opinionated and declarative are buzzwords in developer language. So you started to get into this editorial thing. So I can bring up an example. Hey cube, implement the best service mesh. What answer does it give you? 'Cause there's different choices. >> Well this is actually where the operator, there's clearly guard rails. Like you can go and say, I want to' spin up a machine, and it will give you all of the machines on AWS. Doesn't mean you have to get the X one, that's good for a SAP environment. You could go and have guardrails in place where only the ones that are relevant to your team, ones that have resources and budgetary, you know, guidelines can be. So, the operator still has all the control. >> It was kind of tongue in cheek around the editorialized, but actually the answer seems to be as you're saying, whatever the customer decided their service mesh is. So I think this is where it gets into as an assistant to architecting and operating, that seems to be the real value. >> Now code snippets is a different story because that goes on to the web, that goes onto stock overflow, and that's actually one of the things. So inside the CLI, you could actually go and ask for code snippets and we could actually go and populate that, it's a smart CLI. So that's actually one of the things that are an added value of that. >> I was saying to a friend and we were talking about open source and how when I grew up, there was no open source. If you're a developer now, I mean there's so much code, it's not so much coding anymore as it is connecting and integrating. >> Certainly. >> And writing glue layers, if you will. I mean there's still code, but it's not, you don't have to build it from scratch. There's so much code out there. This low-code notion of a smart system is interesting 'cause it's very matrix like. It can build its own code. >> Yes, but I'm also a little wary with low-code and no code. I think part of the problem is we're so constantly focused on categories and categorizing ourselves, and different categories take on a life of their own. So low-code no code is not necessarily, even though we have the low-code editor, we're not necessarily considering ourselves low-code. >> Serverless, no code, low-code. I was so thrown on a term the other day, architecture-less. As a joke, no we don't need architecture. >> There's a use case around that by the way, yeah, we do. 
Show me my AWS architecture and it will build the architect diagram for you. >> Again, serverless architect, this is all part of infrastructure's code. At the end of the day, the developer has infrastructure with code. Again, how they deploy it is the neuron. That's what we've been striving for. >> But infrastructure is code. You can destroy, you know, terraform, you can go and create one. It's not necessarily going to' operate it for you. That's kind of where this comes in on top of that. So it's really complimentary to infrastructure. >> So final question, before we get into the origination story, data and security are two hot areas we're seeing fill the IT gap, that has moved into the developer role. IT is essentially provisioned by developers now, but the OP side shifted to large scale SRE like environments, security and data are critical. What's your opinion on those two things? >> I agree. Do you want me to give you the normal data as gravity? >> So you agree that IT is now, is kind of moved into the developer realm, but the new IT is data ops and security ops basically. >> A hundred percent, and the lines are so blurred. Like who's what in today's world. I mean, I can tell you, I have customers who call themselves five different roles in the same day. So it's, you know, at the end of the day I call 'em operators 'cause I don't want to offend anybody because that's just the way it is. >> Architectural-less, we're going to' come back to that. Well, I know we're going to' see you at CubeCon. >> Yes. >> We should catch up there and talk more. I'm looking forward to seeing how you guys get the feedback from the marketplace. It should be interesting to hear, the curious question I have for you is, what was the origination story? Why did you guys come together, was it a shared problem? Was it a big market opportunity? Was it an itch you guys were scratching? Did you feel like you needed to come together and start this company? What was the real vision behind the origination? Take a take a minute to explain the story. >> No, absolutely. So I've been living in Palo Alto for the last couple years. Previous, also a founder. So, you know, from my perspective, I always saw myself getting back in the game. Spent a few years in AWS essentially managing partnerships for tier one DevOps partners, you know, all of the known players. Some in public, some of them not. And really the itch was there, right. I saw what everyone's doing. I started seeing consistency in the pains that I was hearing back, in terms of what hasn't been solved. So I already had an opinion where I wanted to go. And when I was visiting actually Israel with the family, I was introduced by a mutual friend to Shaked, Shaked Askayo, my co-founder and CTO. Amazing guy, unbelievable technologists, probably one the most, you know, impressive folks I've had a chance to work with. And he actually solved a very similar problem, you know, in his own way in a previous company, BlueVine, a FinTech company where he was head of SRE, having to, essentially, oversee 200 developers in a very small team. The ratio was incongruent to what the SRE guideline would tell. >> That's more than 10 x rate developer. >> Oh, absolutely. Sure enough. And just imagine it's four different time zones. He finishes day shift and you already had the US team coming, asking for a question. He said, this is kind of a, >> Got to' clone himself, basically. >> Well, yes. 
He essentially said to me, I had no day, I had no life, but I had Corona, I had COVID, which meant I could work from home. And I essentially programed myself in the form of a bot. Essentially, when people came to him, he said, "Don't talk to me, talk to the bot". Now that was a different generation. >> Just a trivial example, but the idea was to automate the same queries all the time. There's an answer for that, go here. And that's the benefit of it. >> Yes, so he was able to see how easy it was to solve, I mean, how effective it was solving 70% of the toil in his organization. Scaling his team, froze the headcount and the developer team kept on going. So that meant that he was doing some right. >> When you have a problem, and you need to solve it, the creativity comes out of the woodwork, you know, invention is the mother of necessity. So final question for you, what's next? Got the launch, what are you guys hope to do over the next six months to a year, hiring? Put a plug in for the company. What are you guys looking to do? Take a minute to share the future vision and get a plug in. >> A hundred percent. So, Kubiya, as you can imagine, announcing ourselves at CubeCon, so in a couple weeks. Opening the gates towards the public beta and NGA in the next couple months. Essentially working with dozens of customers, Aston Martin, and business earn in. We have quite a few, our website's full of quotes. You can go ahead. But effectively we're looking to go and to bring the next operator, generation of operators, who value their time, who value the, essentially, the value of tribal knowledge that travels between organizations that could be essentially shared. >> How many customers do you guys have in your pre-launch? >> It's above a dozen. Without saying, because we're actually looking to onboard 10 more next week. So that's just an understatement. It changes from day to day. >> What's the number one thing people are saying about you? >> You got that right. I know it's, I'm trying to be a little bit more, you know. >> It's okay, you can be cocky, startups are good. But I mean they're obviously, they're using the product and you're getting good feedback. Saving time, are they saying this is a dream product? Got it right, what are some of the things? >> I think anybody who doesn't feel the pain won't know, but the folks who are in the trenches, or feeling the pain, or experiencing this toil, who know what this means, they said, "You're doing this different, you're doing this right. You architected it right. You know exactly what the developer workflows," you know, where all the areas, you know, where all the skeletons are hidden within that. And you're attending to that. So we're happy about that. >> Everybody wants to clone themselves, again, the tribal knowledge. I think this is a great example of where we see the world going. Make things autonomous, operationally automated for the use cases you know are lock solid. Why wouldn't you just deploy? >> Exactly, and we have a very generous free tier. People can, you know, there's a plugin, you can sign up for free until the end of the year. We have a generous free tier. Yeah, free forever tier, as well. So we're looking for people to try us out and to give us feedback. >> I think the self-service, I think the point is, we've talked about it on the Cube at our events, everyone says the same thing. Every developer wants self-service, period. Full stop, done. 
>> What they don't say is they need somebody to help them babysit to make sure they're doing it right. >> The old dashboard, green, yellow, red. >> I know it's an analogy that's not related, but have you been to Whole Foods? Have you gone through their self-service line? That's the beauty of it, right? Having someone in a loop helping you out throughout the time. You don't get confused, if something's not working, someone's helping you out, that's what people want. They want a human in the loop, or a human like in the loop. We're giving that next best thing. >> It's really the ratio, it's scale. It's a scaling. It's force multiplier, for sure. Amit, thanks for coming on, congratulations. >> Thank you so much. >> See you at KubeCon. Thanks for coming in, sharing the story. >> KubiyaCon. >> CubeCon. Cube in Hebrew, Kubiya. Founder, co-founder and CEO here, sharing the story in the launch. Conversational AI for DevOps, the theory of DevOps, really kind of changing the game, bringing efficiency, solving a lot of the pain points of large scale infrastructure. This is theCUBE, CUBE conversation, I'm John Furrier, thanks for watching. (upbeat electronic music)

Published Date : Oct 18 2022


Ray Wang, Constellation & Pascal Bornet, Best-selling Author | UiPath FORWARD 5


 

>>theCUBE presents UiPath FORWARD 5. Brought to you by UiPath.
>>I, I like the test, what I hear in the keynote with independent experts like yourself. So we're hearing that that intelligent automation or automation is a fundamental component of digital transformation. Is it? Or is it more sort of a back office sort of hidden in inside plumbing Ray? What do you think? >>Well, you start by understanding what's going on in the process phase. And that's where you see discover become very important in that keynote, right? And that's where process mining's playing a role. Then you gotta automate stuff. But when you get to operations, that's really where the change is going to happen, right? We actually think that, you know, when you're doing the digital transformation pieces, right? Analytics, automation and AI are coming together to create a concept we call decision velocity. You and I make a quick decision, boom, how long does it take to get out? Management committee could free forever, right? A week, two months, never. But if you're thinking about competing with the automation, right? These decisions are actually being done a hundred times per second by machine, even a thousand times per second. That asymmetry is really what people are facing at the moment. >>And the companies that are gonna be able to do that and start automating decisions are gonna be operating at another level. Back to what Pascal's book talking about, right? And there are four questions everyone has to ask you, like, when do you fully intelligently automate? And that happens right in the background when you augment the machine with a human. So we can find why did you make an exception? Why did you break a roll? Why didn't you follow this protocol so we can get it down to a higher level confidence? When do you augment the human with the machine so we can give you the information so you can act quickly. And the last one is, when do you wanna insert a human in the process? That's gonna be the biggest question. Order to cash, incident or resolution, Hire to retire, procure to pay. It doesn't matter. When do you want to put a human in the process? When do you want a man in the middle, person in the middle? And more importantly, when do you want insert friction? >>So Pascal, you wrote your book in the middle of the, the pandemic. Yes. And, and so, you know, pre pandemic digital transformation was kind of a buzzword. A lot of people gave it lip service, eh, not on my watch, I don't have to worry about that. But then it became sort of, you're not a digital business, you're out of business. So, so what have you seen as the catalyst for adoption of automation? Was it the, the pandemic? Was it sort of good runway before that? What's changed? You know, pre isolation, post isolation economy. >>You, you make me think about a joke. Who, who did your best digital transformation over the last years? The ceo, C H R O, the Covid. >>It's a big record ball, right? Yeah. >>Right. And that's exactly true. You know, before pandemic digital transformation was a competitive advantage. >>Companies that went into it had an opportunity to get a bit better than their, their competitors during the pandemic. Things have changed completely. Companies that were not digitalized and automated could not survive. And we've seen so many companies just burning out and, and, and those companies that have been able to capitalize on intelligent automation, digital transformations during the pandemic have been able not only to survive, but to, to thrive, to really create their place on the market. 
So that has been a catalyst, definitely a catalyst for that. That explains the success of the book, basically. Yeah. >>Okay. >>So you're familiar with the concept of stew, the food, right? Stew, by definition, is something that's delicious to eat. Stew isn't simply taking one of every ingredient from the pantry, throwing it in the pot and stirring it around. When we start talking about intelligent automation, artificial intelligence, augmented intelligence, it starts getting a bit overwhelming. My spidey sense goes off and I start thinking, this sounds like mush. It doesn't sound like stew. So I wanna hear from each of you, what is the methodical process that people need to go through when they're going through digital transformation, so that you get delicious stew instead of a mush that's just confused everything in your business? Ray, you wanna answer that first? >>Yeah. You know, we've been talking about digital transformation since 2010, right? And part of it was really getting the business model right. What are you trying to achieve? Is it a new type of offering? Are you changing the way you monetize something? Are you taking an existing process and applying it to a new set of technologies? What do you wanna accomplish? Once you start there, then it becomes a whole lot of operational stuff. And it's more than stew, right? I mean, it could be, well, I can't use those words here. But the point being, it could be a complete operational exercise, it could be a complete revenue exercise, it could be a regulatory exercise, it could be about where you want to take growth to the next level. And in each one of those processes, some of it is automation, right? That's a big component of it today. But most of it is really rethinking what you want things to do. How do you actually make things successful? Do I reorganize a process? Do I insert a place to do monetization? Where do I put engagement in place? How do I collect data along the way so I can build a better feedback loop? What can I do to build the business graph so that I have that knowledge for the future, so I can go forward and be successful? >>Pascal, should the directive be first IA, then AI? Or are these things going to happen in parallel naturally? What's your position on that? >>So AI is part of IA, it's part of the big umbrella. And very often I get the question, how do you differentiate AI and IA? I like to say that AI is only the brain. Think of AI, because I consider AI as machine learning, okay, like a brain in a jar that can only think, create insight, learn, but doesn't do anything. It doesn't have any arms, doesn't have any eyes, does not have any mouth and ears, can't talk, can't understand. With IA, you give those capabilities to AI. You basically create a technological capability that is able to do more than just thinking, learning and creating insight, but also acting, speaking, understanding the environment, viewing it, interacting with it. So basically performing those end-to-end processes that are currently performed by people in companies. >>Yeah, we're gonna get to a point where we get to what we call dynamic scenario generation.
You're talking to me, you get excited, well, I change the story because something else shows up, or you're talking to me and you're really upset, we're gonna have to address that issue right away. We want the ability to have that sense-and-respond capability so that the next best action is served. So your data, your process, the journey, all the analytics on the top end, that's all gonna be served up and changed along the way. As we go from 2D journeys to 3D scenarios in the metaverse, as we think about what happens from a centralized world to decentralized, and what's happening from web two to web three, we're gonna make those types of shifts so that things keep moving along. Everything's a choose-your-own-adventure journey. >>So I hope I remember this correctly from your book. You talked about disruption scenarios within industries and within companies. And I go back to the early days of our industry, East Coast, Prime, Wang, DG, they're all gone. But you look at companies like Microsoft, they were able to get through that. IBM, you know, I'd call it survived. Intel is now going through its challenge. So maybe it's inevitable, but how do you see the future in terms of disruption within an industry? Forget our industry for a second, all industries, whether it's healthcare, financial services, manufacturing, automobiles, et cetera. How do you see the disruption scenario? I'm pretty sure you talked about this in your book, it's been a while since I read it, but I wonder if you could talk about that disruption scenario and the role that automation is going to play, either as the disruptor or as the protector of the incumbents. >>Let's take healthcare and auto as examples. Healthcare is a great example. If we think about what's going on, not enough nurses, massive shortage, right? What are we doing at the moment? We're sending five-foot-nine robots to do non-patient care. We're trying to capture enough information off patient analytics, like this watch is gonna capture vitals going forward. We're doing what we can at the ambient level so that information and data is automatically captured and decisions are being rendered against it. Maybe you're gonna change your diet along the way, maybe you're gonna walk an extra 10 minutes. All those things are gonna be provided at that level of automation. Take the car business. It's not about selling cars. Tesla's a great example. We talk about this all the time. What Tesla's doing, they're basically gonna be an insurance company with all the data they have. They have better data than the insurance companies. They can do better underwriting, they've got better mapping information and insights, they can actually suggest the next best action, do collision avoidance, right? Those are all things that are actually happening today. And automation plays a big role, not just in the collection of that information and insight, but also in the ability to make recommendations, to do predictions and to help you prevent things from going wrong.
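As a loose illustration of the ambient-capture pattern Ray describes, where vitals stream in, decisions are rendered against them, and a next best action is suggested, here is a toy rule-based sketch in Python. The thresholds and suggestions are invented for the example only and have nothing to do with real clinical guidance or any wearable vendor's logic.

# Toy "next best action" over ambient health data. Purely illustrative thresholds.

from statistics import mean

def next_best_action(resting_hr_7d: list[int], daily_steps_7d: list[int]) -> str:
    # Roll up a week of ambient data and render a simple decision against it.
    avg_hr = mean(resting_hr_7d)
    avg_steps = mean(daily_steps_7d)
    if avg_hr > 90:
        return "suggest: schedule a check-in with your care team"
    if avg_hr > 80 and avg_steps < 5000:
        return "suggest: walk an extra 10 minutes today"
    return "no action: trends look stable"

if __name__ == "__main__":
    print(next_best_action([82, 84, 81, 85, 83, 86, 84],
                           [3800, 4200, 3900, 4100, 4000, 3600, 4300]))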
Pascal, I wanna ask you, you know, the topic of future of work kind of was a bromide before, but, but now I feel like, you know, post pandemic, it, it actually has substance. How do you see the future of work? Can you even summarize what it's gonna look like? It's, it's, Or are we here? >>It's, yeah, it's, and definitely it's, it's more and more important topic currently. And you, you all heard about the great resignation and how employee experience is more and more important for companies according to have a business review. The companies that take care of their employee experience are four times more profitable that those that don't. So it's a, it's a, it's an issue for CEOs and, and shareholders. Now, how do we get there? How, how do we, how do we improve the, the quality of the employee experience, understanding the people, getting information from them, educating them. I'm talking about educating them on those new technologies and how they can benefit from those empowering them. And, and I think we've talked a lot about this, about the democratization local type of, of technologies that democratize the access to those technologies. Everyone can be empowered today to change their work, improve their work, and finally, incentivization. I think it's a very important point where companies that, yeah, I >>Give that. What's gonna be the key message of your talk tomorrow. Give us the bumper sticker, >>If you will. Oh, I'm gonna talk, It's a little bit different. I'm gonna talk for the IT community in this, in the context of the IT summit. And I'm gonna talk about the future of intelligent automation. So basically how new technologies will impact beyond what we see today, The future of work. >>Well, I always love having you on the cube, so articulate and, and and crisp. What's, what's exciting you these days, you know, in your world, I know you're traveling around a lot, but what's, what's hot? >>Yeah, I think one of the coolest thing that's going on right now is the fact that we're trying to figure out do we go to work or do we not go to work? Back to your other point, I mean, I don't know, work, work is, I mean, for me, work has been everywhere, right? And we're starting to figure out what that means. I think the second thing though is this notion around mission and purpose. And everyone's trying to figure out what does that mean for themselves? And that's really, I don't know if it's a great, great resignation. We call it great refactoring, right? Where you work, when you work, how we work, why you work, that's changing. But more importantly, the business models are changing. The monetization models are changing macro dynamics that are happening. Us versus China, G seven versus bricks, right? War on the dollar. All these things are happening around us at this moment and, and I think it's gonna really reshape us the way that we came out of the seventies into the eighties. >>Guys, always a pleasure having folks like yourself on, Thank you, Pascal. Been great to see you again. All right, Dave Nicholson, Dave Ante, keep it right there. Forward five from Las Vegas. You're watching the cue.

Published Date : Sep 29 2022



Horizon3.ai Signal | Horizon3.ai Partner Program Expands Internationally


 

hello I'm John Furrier with thecube and welcome to this special presentation of the cube and Horizon 3.ai they're announcing a global partner first approach expanding their successful pen testing product Net Zero you're going to hear from leading experts in their staff their CEO positioning themselves for a successful Channel distribution expansion internationally in Europe Middle East Africa and Asia Pacific in this Cube special presentation you'll hear about the expansion the expanse partner program giving Partners a unique opportunity to offer Net Zero to their customers Innovation and Pen testing is going International with Horizon 3.ai enjoy the program [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're here with Jennifer Lee head of Channel sales at Horizon 3.ai Jennifer welcome to the cube thanks for coming on great well thank you for having me so big news around Horizon 3.aa driving Channel first commitment you guys are expanding the channel partner program to include all kinds of new rewards incentives training programs help educate you know Partners really drive more recurring Revenue certainly cloud and Cloud scale has done that you got a great product that fits into that kind of Channel model great Services you can wrap around it good stuff so let's get into it what are you guys doing what are what are you guys doing with this news why is this so important yeah for sure so um yeah we like you said we recently expanded our Channel partner program um the driving force behind it was really just um to align our like you said our Channel first commitment um and creating awareness around the importance of our partner ecosystems um so that's it's really how we go to market is is through the channel and a great International Focus I've talked with the CEO so you know about the solution and he broke down all the action on why it's important on the product side but why now on the go to market change what's the what's the why behind this big this news on the channel yeah for sure so um we are doing this now really to align our business strategy which is built on the concept of enabling our partners to create a high value high margin business on top of our platform and so um we offer a solution called node zero it provides autonomous pen testing as a service and it allows organizations to continuously verify their security posture um so we our company vision we have this tagline that states that our pen testing enables organizations to see themselves Through The Eyes of an attacker and um we use the like the attacker's perspective to identify exploitable weaknesses and vulnerabilities so we created this partner program from a perspective of the partner so the partner's perspective and we've built It Through The Eyes of our partner right so we're prioritizing really what the partner is looking for and uh will ensure like Mutual success for us yeah the partners always want to get in front of the customers and bring new stuff to them pen tests have traditionally been really expensive uh and so bringing it down in one to a service level that's one affordable and has flexibility to it allows a lot of capability so I imagine people getting excited by it so I have to ask you about the program What specifically are you guys doing can you share any details around what it means for the partners what they get what's in it for them can you just break down some of the mechanics and mechanisms or or details yeah yep um you know we're 
really looking to create business alignment um and like I said establish Mutual success with our partners so we've got two um two key elements that we were really focused on um that we bring to the partners so the opportunity the profit margin expansion is one of them and um a way for our partners to really differentiate themselves and stay relevant in the market so um we've restructured our discount model really um you know highlighting profitability and maximizing profitability and uh this includes our deal registration we've we've created deal registration program we've increased discount for partners who take part in our partner certification uh trainings and we've we have some other partner incentives uh that we we've created that that's going to help out there we've we put this all so we've recently Gone live with our partner portal um it's a Consolidated experience for our partners where they can access our our sales tools and we really view our partners as an extension of our sales and Technical teams and so we've extended all of our our training material that we use internally we've made it available to our partners through our partner portal um we've um I'm trying I'm thinking now back what else is in that partner portal here we've got our partner certification information so all the content that's delivered during that training can be found in the portal we've got deal registration uh um co-branded marketing materials pipeline management and so um this this portal gives our partners a One-Stop place to to go to find all that information um and then just really quickly on the second part of that that I mentioned is our technology really is um really disruptive to the market so you know like you said autonomous pen testing it's um it's still it's well it's still still relatively new topic uh for security practitioners and um it's proven to be really disruptive so um that on top of um just well recently we found an article that um that mentioned by markets and markets that reports that the global pen testing markets really expanding and so it's expected to grow to like 2.7 billion um by 2027. 
so the Market's there right the Market's expanding it's growing and so for our partners it's just really allows them to grow their revenue um across their customer base expand their customer base and offering this High profit margin while you know getting in early to Market on this just disruptive technology big Market a lot of opportunities to make some money people love to put more margin on on those deals especially when you can bring a great solution that everyone knows is hard to do so I think that's going to provide a lot of value is there is there a type of partner that you guys see emerging or you aligning with you mentioned the alignment with the partners I can see how that the training and the incentives are all there sounds like it's all going well is there a type of partner that's resonating the most or is there categories of partners that can take advantage of this yeah absolutely so we work with all different kinds of Partners we work with our traditional resale Partners um we've worked we're working with systems integrators we have a really strong MSP mssp program um we've got Consulting partners and the Consulting Partners especially with the ones that offer pen test services so we they use us as a as we act as a force multiplier just really offering them profit margin expansion um opportunity there we've got some technology partner partners that we really work with for co-cell opportunities and then we've got our Cloud Partners um you'd mentioned that earlier and so we are in AWS Marketplace so our ccpo partners we're part of the ISP accelerate program um so we we're doing a lot there with our Cloud partners and um of course we uh we go to market with uh distribution Partners as well gotta love the opportunity for more margin expansion every kind of partner wants to put more gross profit on their deals is there a certification involved I have to ask is there like do you get do people get certified or is it just you get trained is it self-paced training is it in person how are you guys doing the whole training certification thing because is that is that a requirement yeah absolutely so we do offer a certification program and um it's been very popular this includes a a seller's portion and an operator portion and and so um this is at no cost to our partners and um we operate both virtually it's it's law it's virtually but live it's not self-paced and we also have in person um you know sessions as well and we also can customize these to any partners that have a large group of people and we can just we can do one in person or virtual just specifically for that partner well any kind of incentive opportunities and marketing opportunities everyone loves to get the uh get the deals just kind of rolling in leads from what we can see if our early reporting this looks like a hot product price wise service level wise what incentive do you guys thinking about and and Joint marketing you mentioned co-sell earlier in pipeline so I was kind of kind of honing in on that piece sure and yes and then to follow along with our partner certification program we do incentivize our partners there if they have a certain number certified their discount increases so that's part of it we have our deal registration program that increases discount as well um and then we do have some um some partner incentives that are wrapped around meeting setting and um moving moving opportunities along to uh proof of value gotta love the education driving value I have to ask you so you've been around the industry 
you've seen the channel relationships out there you're seeing companies old school new school you know uh Horizon 3.ai is kind of like that new school very cloud specific a lot of Leverage with we mentioned AWS and all the clouds um why is the company so hot right now why did you join them and what's why are people attracted to this company what's the what's the attraction what's the vibe what do you what do you see and what what do you use what did you see in in this company well this is just you know like I said it's very disruptive um it's really in high demand right now and um and and just because because it's new to Market and uh a newer technology so we are we can collaborate with a manual pen tester um we can you know we can allow our customers to run their pen test um with with no specialty teams and um and and then so we and like you know like I said we can allow our partners can actually build businesses profitable businesses so we can they can use our product to increase their services revenue and um and build their business model you know around around our services what's interesting about the pen test thing is that it's very expensive and time consuming the people who do them are very talented people that could be working on really bigger things in the in absolutely customers so bringing this into the channel allows them if you look at the price Delta between a pen test and then what you guys are offering I mean that's a huge margin Gap between street price of say today's pen test and what you guys offer when you show people that they follow do they say too good to be true I mean what are some of the things that people say when you kind of show them that are they like scratch their head like come on what's the what's the catch here right so the cost savings is a huge is huge for us um and then also you know like I said working as a force multiplier with a pen testing company that offers the services and so they can they can do their their annual manual pen tests that may be required around compliance regulations and then we can we can act as the continuous verification of their security um um you know that that they can run um weekly and so it's just um you know it's just an addition to to what they're offering already and an expansion so Jennifer thanks for coming on thecube really appreciate you uh coming on sharing the insights on the channel uh what's next what can we expect from the channel group what are you thinking what's going on right so we're really looking to expand our our Channel um footprint and um very strategically uh we've got um we've got some big plans um for for Horizon 3.ai awesome well thanks for coming on really appreciate it you're watching thecube the leader in high tech Enterprise coverage [Music] [Music] hello and welcome to the Cube's special presentation with Horizon 3.ai with Raina Richter vice president of emea Europe Middle East and Africa and Asia Pacific APAC for Horizon 3 today welcome to this special Cube presentation thanks for joining us thank you for the invitation so Horizon 3 a guy driving Global expansion big international news with a partner first approach you guys are expanding internationally let's get into it you guys are driving this new expanse partner program to new heights tell us about it what are you seeing in the momentum why the expansion what's all the news about well I would say uh yeah in in international we have I would say a similar similar situation like in the US um there is a global shortage of well-educated 
penetration testers on the one hand side on the other side um we have a raising demand of uh network and infrastructure security and with our approach of an uh autonomous penetration testing I I believe we are totally on top of the game um especially as we have also now uh starting with an international instance that means for example if a customer in Europe is using uh our service node zero he will be connected to a node zero instance which is located inside the European Union and therefore he has doesn't have to worry about the conflict between the European the gdpr regulations versus the US Cloud act and I would say there we have a total good package for our partners that they can provide differentiators to their customers you know we've had great conversations here on thecube with the CEO and the founder of the company around the leverage of the cloud and how successful that's been for the company and honestly I can just Connect the Dots here but I'd like you to weigh in more on how that translates into the go to market here because you got great Cloud scale with with the security product you guys are having success with great leverage there I've seen a lot of success there what's the momentum on the channel partner program internationally why is it so important to you is it just the regional segmentation is it the economics why the momentum well there are it's there are multiple issues first of all there is a raising demand in penetration testing um and don't forget that uh in international we have a much higher level in number a number or percentage in SMB and mid-market customers so these customers typically most of them even didn't have a pen test done once a year so for them pen testing was just too expensive now with our offering together with our partners we can provide different uh ways how customers could get an autonomous pen testing done more than once a year with even lower costs than they had with with a traditional manual paint test so and that is because we have our uh Consulting plus package which is for typically pain testers they can go out and can do a much faster much quicker and their pain test at many customers once in after each other so they can do more pain tests on a lower more attractive price on the other side there are others what even the same ones who are providing um node zero as an mssp service so they can go after s p customers saying okay well you only have a couple of hundred uh IP addresses no worries we have the perfect package for you and then you have let's say the mid Market let's say the thousands and more employees then they might even have an annual subscription very traditional but for all of them it's all the same the customer or the service provider doesn't need a piece of Hardware they only need to install a small piece of a Docker container and that's it and that makes it so so smooth to go in and say okay Mr customer we just put in this this virtual attacker into your network and that's it and and all the rest is done and within within three clicks they are they can act like a pen tester with 20 years of experience and that's going to be very Channel friendly and partner friendly I can almost imagine so I have to ask you and thank you for calling the break calling out that breakdown and and segmentation that was good that was very helpful for me to understand but I want to follow up if you don't mind um what type of partners are you seeing the most traction with and why well I would say at the beginning typically you have the the 
innovators the early adapters typically Boutique size of Partners they start because they they are always looking for Innovation and those are the ones you they start in the beginning so we have a wide range of Partners having mostly even um managed by the owner of the company so uh they immediately understand okay there is the value and they can change their offering they're changing their offering in terms of penetration testing because they can do more pen tests and they can then add other ones or we have those ones who offer 10 tests services but they did not have their own pen testers so they had to go out on the open market and Source paint testing experts um to get the pen test at a particular customer done and now with node zero they're totally independent they can't go out and say okay Mr customer here's the here's the service that's it we turn it on and within an hour you're up and running totally yeah and those pen tests are usually expensive and hard to do now it's right in line with the sales delivery pretty interesting for a partner absolutely but on the other hand side we are not killing the pain testers business we do something we're providing with no tiers I would call something like the foundation work the foundational work of having an an ongoing penetration testing of the infrastructure the operating system and the pen testers by themselves they can concentrate in the future on things like application pen testing for example so those Services which we we're not touching so we're not killing the paint tester Market we're just taking away the ongoing um let's say foundation work call it that way yeah yeah that was one of my questions I was going to ask is there's a lot of interest in this autonomous pen testing one because it's expensive to do because those skills are required are in need and they're expensive so you kind of cover the entry level and the blockers that are in there I've seen people say to me this pen test becomes a blocker for getting things done so there's been a lot of interest in the autonomous pen testing and for organizations to have that posture and it's an overseas issue too because now you have that that ongoing thing so can you explain that particular benefit for an organization to have that continuously verifying an organization's posture yep certainly so I would say um typically you are you you have to do your patches you have to bring in new versions of operating systems of different Services of uh um operating systems of some components and and they are always bringing new vulnerabilities the difference here is that with node zero we are telling the customer or the partner package we're telling them which are the executable vulnerabilities because previously they might have had um a vulnerability scanner so this vulnerability scanner brought up hundreds or even thousands of cves but didn't say anything about which of them are vulnerable really executable and then you need an expert digging in one cve after the other finding out is it is it really executable yes or no and that is where you need highly paid experts which we have a shortage so with notes here now we can say okay we tell you exactly which ones are the ones you should work on because those are the ones which are executable we rank them accordingly to the risk level how easily they can be used and by a sudden and then the good thing is convert it or indifference to the traditional penetration test they don't have to wait for a year for the next pain test to find out if the fixing 
was effective they weren't just the next scan and say Yes closed vulnerability is gone the time is really valuable and if you're doing any devops Cloud native you're always pushing new things so pen test ongoing pen testing is actually a benefit just in general as a kind of hygiene so really really interesting solution really bring that global scale is going to be a new new coverage area for us for sure I have to ask you if you don't mind answering what particular region are you focused on or plan to Target for this next phase of growth well at this moment we are concentrating on the countries inside the European Union Plus the United Kingdom um but we are and they are of course logically I'm based into Frankfurt area that means we cover more or less the countries just around so it's like the total dark region Germany Switzerland Austria plus the Netherlands but we also already have Partners in the nordics like in Finland or in Sweden um so it's it's it it's rapidly we have Partners already in the UK and it's rapidly growing so I'm for example we are now starting with some activities in Singapore um um and also in the in the Middle East area um very important we uh depending on let's say the the way how to do business currently we try to concentrate on those countries where we can have um let's say um at least English as an accepted business language great is there any particular region you're having the most success with right now is it sounds like European Union's um kind of first wave what's them yes that's the first definitely that's the first wave and now we're also getting the uh the European instance up and running it's clearly our commitment also to the market saying okay we know there are certain dedicated uh requirements and we take care of this and and we're just launching it we're building up this one uh the instance um in the AWS uh service center here in Frankfurt also with some dedicated Hardware internet in a data center in Frankfurt where we have with the date six by the way uh the highest internet interconnection bandwidth on the planet so we have very short latency to wherever you are on on the globe that's a great that's a great call outfit benefit too I was going to ask that what are some of the benefits your partners are seeing in emea and Asia Pacific well I would say um the the benefits is for them it's clearly they can they can uh talk with customers and can offer customers penetration testing which they before and even didn't think about because it penetrates penetration testing in a traditional way was simply too expensive for them too complex the preparation time was too long um they didn't have even have the capacity uh to um to support a pain an external pain tester now with this service you can go in and say even if they Mr customer we can do a test with you in a couple of minutes within we have installed the docker container within 10 minutes we have the pen test started that's it and then we just wait and and I would say that is we'll we are we are seeing so many aha moments then now because on the partner side when they see node zero the first time working it's like this wow that is great and then they work out to customers and and show it to their typically at the beginning mostly the friendly customers like wow that's great I need that and and I would say um the feedback from the partners is that is a service where I do not have to evangelize the customer everybody understands penetration testing I don't have to say describe what it is they understand 
the customer understanding immediately yes penetration testing good about that I know I should do it but uh too complex too expensive now with the name is for example as an mssp service provided from one of our partners but it's getting easy yeah it's great and it's great great benefit there I mean I gotta say I'm a huge fan of what you guys are doing I like this continuous automation that's a major benefit to anyone doing devops or any kind of modern application development this is just a godsend for them this is really good and like you said the pen testers that are doing it they were kind of coming down from their expertise to kind of do things that should have been automated they get to focus on the bigger ticket items that's a really big point so we free them we free the pain testers for the higher level elements of the penetration testing segment and that is typically the application testing which is currently far away from being automated yeah and that's where the most critical workloads are and I think this is the nice balance congratulations on the international expansion of the program and thanks for coming on this special presentation really I really appreciate it thank you you're welcome okay this is thecube special presentation you know check out pen test automation International expansion Horizon 3 dot AI uh really Innovative solution in our next segment Chris Hill sector head for strategic accounts will discuss the power of Horizon 3.ai and Splunk in action you're watching the cube the leader in high tech Enterprise coverage foreign [Music] [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're with Chris Hill sector head for strategic accounts and federal at Horizon 3.ai a great Innovative company Chris great to see you thanks for coming on thecube yeah like I said uh you know great to meet you John long time listener first time caller so excited to be here with you guys yeah we were talking before camera you had Splunk back in 2013 and I think 2012 was our first splunk.com and boy man you know talk about being in the right place at the right time now we're at another inflection point and Splunk continues to be relevant um and continuing to have that data driving Security in that interplay and your CEO former CTO of his plug as well at Horizon who's been on before really Innovative product you guys have but you know yeah don't wait for a breach to find out if you're logging the right data this is the topic of this thread Splunk is very much part of this new international expansion announcement uh with you guys tell us what are some of the challenges that you see where this is relevant for the Splunk and Horizon AI as you guys expand uh node zero out internationally yeah well so across so you know my role uh within Splunk it was uh working with our most strategic accounts and so I looked back to 2013 and I think about the sales process like working with with our small customers you know it was um it was still very siled back then like I was selling to an I.T team that was either using this for it operations um we generally would always even say yeah although we do security we weren't really designed for it we're a log management tool and we I'm sure you remember back then John we were like sort of stepping into the security space and and the public sector domain that I was in you know security was 70 of what we did when I look back to sort of uh the transformation that I was witnessing in that digital 
transformation um you know when I look at like 2019 to today you look at how uh the IT team and the security teams are being have been forced to break down those barriers that they used to sort of be silent away would not commute communicate one you know the security guys would be like oh this is my box I.T you're not allowed in today you can't get away with that and I think that the value that we bring to you know and of course Splunk has been a huge leader in that space and continues to do Innovation across the board but I think what we've we're seeing in the space and I was talking with Patrick Coughlin the SVP of uh security markets about this is that you know what we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data so Splunk itself is ulk know it's an ingest engine right the great reason people bought it was you could build these really fast dashboards and grab intelligence out of it but without data it doesn't do anything right so how do you drive and how do you bring more data in and most importantly from a customer perspective how do you bring the right data in and so if you think about what node zero and what we're doing in a horizon 3 is that sure we do pen testing but because we're an autonomous pen testing tool we do it continuously so this whole thought I'd be like oh crud like my customers oh yeah we got a pen test coming up it's gonna be six weeks the week oh yeah you know and everyone's gonna sit on their hands call me back in two months Chris we'll talk to you then right not not a real efficient way to test your environment and shoot we saw that with Uber this week right um you know and that's a case where we could have helped oh just right we could explain the Uber thing because it was a contractor just give a quick highlight of what happened so you can connect the doctor yeah no problem so um it was uh I got I think it was yeah one of those uh you know games where they would try and test an environment um and with the uh pen tester did was he kept on calling them MFA guys being like I need to reset my password we need to set my right password and eventually the um the customer service guy said okay I'm resetting it once he had reset and bypassed the multi-factor authentication he then was able to get in and get access to the building area that he was in or I think not the domain but he was able to gain access to a partial part of that Network he then paralleled over to what I would assume is like a VA VMware or some virtual machine that had notes that had all of the credentials for logging into various domains and So within minutes they had access and that's the sort of stuff that we do you know a lot of these tools like um you know you think about the cacophony of tools that are out there in a GTA architect architecture right I'm gonna get like a z-scale or I'm going to have uh octum and I have a Splunk I've been into the solar system I mean I don't mean to name names we have crowdstriker or Sentinel one in there it's just it's a cacophony of things that don't work together they weren't designed work together and so we have seen so many times in our business through our customer support and just working with customers when we do their pen tests that there will be 5 000 servers out there three are misconfigured those three misconfigurations will create the open door because remember the hacker only needs to be right once the defender needs to be right all the time and that's the challenge and so that's what I'm really 
passionate about what we're doing uh here at Horizon three I see this my digital transformation migration and security going on which uh we're at the tip of the spear it's why I joined sey Hall coming on this journey uh and just super excited about where the path's going and super excited about the relationship with Splunk I get into more details on some of the specifics of that but um you know well you're nailing I mean we've been doing a lot of things on super cloud and this next gen environment we're calling it next gen you're really seeing devops obviously devsecops has already won the it role has moved to the developer shift left is an indicator of that it's one of the many examples higher velocity code software supply chain you hear these things that means that it is now in the developer hands it is replaced by the new Ops data Ops teams and security where there's a lot of horizontal thinking to your point about access there's no more perimeter huge 100 right is really right on things one time you know to get in there once you're in then you can hang out move around move laterally big problem okay so we get that now the challenges for these teams as they are transitioning organizationally how do they figure out what to do okay this is the next step they already have Splunk so now they're kind of in transition while protecting for a hundred percent ratio of success so how would you look at that and describe the challenge is what do they do what is it what are the teams facing with their data and what's next what are they what are they what action do they take so let's use some vernacular that folks will know so if I think about devsecops right we both know what that means that I'm going to build security into the app it normally talks about sec devops right how am I building security around the perimeter of what's going inside my ecosystem and what are they doing and so if you think about what we're able to do with somebody like Splunk is we can pen test the entire environment from Soup To Nuts right so I'm going to test the end points through to its I'm going to look for misconfigurations I'm going to I'm going to look for um uh credential exposed credentials you know I'm going to look for anything I can in the environment again I'm going to do it at light speed and and what what we're doing for that SEC devops space is to you know did you detect that we were in your environment so did we alert Splunk or the Sim that there's someone in the environment laterally moving around did they more importantly did they log us into their environment and when do they detect that log to trigger that log did they alert on us and then finally most importantly for every CSO out there is going to be did they stop us and so that's how we we do this and I think you when speaking with um stay Hall before you know we've come up with this um boils but we call it fine fix verifying so what we do is we go in is we act as the attacker right we act in a production environment so we're not going to be we're a passive attacker but we will go in on credentialed on agents but we have to assume to have an assumed breach model which means we're going to put a Docker container in your environment and then we're going to fingerprint the environment so we're going to go out and do an asset survey now that's something that's not something that Splunk does super well you know so can Splunk see all the assets do the same assets marry up we're going to log all that data and think and then put load that into this long Sim 
or the smoke logging tools just to have it in Enterprise right that's an immediate future ad that they've got um and then we've got the fix so once we've completed our pen test um we are then going to generate a report and we can talk about these in a little bit later but the reports will show an executive summary the assets that we found which would be your asset Discovery aspect of that a fix report and the fixed report I think is probably the most important one it will go down and identify what we did how we did it and then how to fix that and then from that the pen tester or the organization should fix those then they go back and run another test and then they validate like a change detection environment to see hey did those fixes taste play take place and you know snehaw when he was the CTO of jsoc he shared with me a number of times about it's like man there would be 15 more items on next week's punch sheet that we didn't know about and it's and it has to do with how we you know how they were uh prioritizing the cves and whatnot because they would take all CBDs it was critical or non-critical and it's like we are able to create context in that environment that feeds better information into Splunk and whatnot that brings that brings up the efficiency for Splunk specifically the teams out there by the way the burnout thing is real I mean this whole I just finished my list and I got 15 more or whatever the list just can keeps growing how did node zero specifically help Splunk teams be more efficient like that's the question I want to get at because this seems like a very scale way for Splunk customers and teams service teams to be more so the question is how does node zero help make Splunk specifically their service teams be more efficient so so today in our early interactions we're building customers we've seen are five things um and I'll start with sort of identifying the blind spots right so kind of what I just talked about with you did we detect did we log did we alert did they stop node zero right and so I would I put that you know a more Layman's third grade term and if I was going to beat a fifth grader at this game would be we can be the sparring partner for a Splunk Enterprise customer a Splunk Essentials customer someone using Splunk soar or even just an Enterprise Splunk customer that may be a small shop with three people and just wants to know where am I exposed so by creating and generating these reports and then having um the API that actually generates the dashboard they can take all of these events that we've logged and log them in and then where that then comes in is number two is how do we prioritize those logs right so how do we create visibility to logs that that um are have critical impacts and again as I mentioned earlier not all cves are high impact regard and also not all or low right so if you daisy chain a bunch of low cves together boom I've got a mission critical AP uh CPE that needs to be fixed now such as a credential moving to an NT box that's got a text file with a bunch of passwords on it that would be very bad um and then third would be uh verifying that you have all of the hosts so one of the things that splunk's not particularly great at and they'll literate themselves they don't do asset Discovery so dude what assets do we see and what are they logging from that um and then for from um for every event that they are able to identify one of the cool things that we can do is actually create this low code no code environment so they could let you know 
Splunk customers can use Splunk sword to actually triage events and prioritize that event so where they're being routed within it to optimize the Sox team time to Market or time to triage any given event obviously reducing MTR and then finally I think one of the neatest things that we'll be seeing us develop is um our ability to build glass cables so behind me you'll see one of our triage events and how we build uh a Lockheed Martin kill chain on that with a glass table which is very familiar to the community we're going to have the ability and not too distant future to allow people to search observe on those iocs and if people aren't familiar with it ioc it's an instant of a compromise so that's a vector that we want to drill into and of course who's better at Drilling in the data and smoke yeah this is a critter this is an awesome Synergy there I mean I can see a Splunk customer going man this just gives me so much more capability action actionability and also real understanding and I think this is what I want to dig into if you don't mind understanding that critical impact okay is kind of where I see this coming got the data data ingest now data's data but the question is what not to log you know where are things misconfigured these are critical questions so can you talk about what it means to understand critical impact yeah so I think you know going back to the things that I just spoke about a lot of those cves where you'll see um uh low low low and then you daisy chain together and they're suddenly like oh this is high now but then your other impact of like if you're if you're a Splunk customer you know and I had it I had several of them I had one customer that you know terabytes of McAfee data being brought in and it was like all right there's a lot of other data that you probably also want to bring but they could only afford wanted to do certain data sets because that's and they didn't know how to prioritize or filter those data sets and so we provide that opportunity to say hey these are the critical ones to bring in but there's also the ones that you don't necessarily need to bring in because low cve in this case really does mean low cve like an ILO server would be one that um that's the print server uh where the uh your admin credentials are on on like a printer and so there will be credentials on that that's something that a hacker might go in to look at so although the cve on it is low is if you daisy chain with somebody that's able to get into that you might say Ah that's high and we would then potentially rank it giving our AI logic to say that's a moderate so put it on the scale and we prioritize those versus uh of all of these scanners just going to give you a bunch of CDs and good luck and translating that if I if I can and tell me if I'm wrong that kind of speaks to that whole lateral movement that's it challenge right print serve a great example looks stupid low end who's going to want to deal with the print server oh but it's connected into a critical system there's a path is that kind of what you're getting at yeah I use Daisy Chain I think that's from the community they came from uh but it's just a lateral movement it's exactly what they're doing in those low level low critical lateral movements is where the hackers are getting in right so that's the beauty thing about the uh the Uber example is that who would have thought you know I've got my monthly Factor authentication going in a human made a mistake we can't we can't not expect humans to make mistakes we're 
fallible right the reality is is once they were in the environment they could have protected themselves by running enough pen tests to know that they had certain uh exposed credentials that would have stopped the breach and they did not had not done that in their environment and I'm not poking yeah but it's an interesting Trend though I mean it's obvious if sometimes those low end items are also not protected well so it's easy to get at from a hacker standpoint but also the people in charge of them can be fished easily or spearfished because they're not paying attention because they don't have to no one ever told them hey be careful yeah for the community that I came from John that's exactly how they they would uh meet you at a uh an International Event um introduce themselves as a graduate student these are National actor States uh would you mind reviewing my thesis on such and such and I was at Adobe at the time that I was working on this instead of having to get the PDF they opened the PDF and whoever that customer was launches and I don't know if you remember back in like 2008 time frame there was a lot of issues around IP being by a nation state being stolen from the United States and that's exactly how they did it and John that's or LinkedIn hey I want to get a joke we want to hire you double the salary oh I'm gonna click on that for sure you know yeah right exactly yeah the one thing I would say to you is like uh when we look at like sort of you know because I think we did 10 000 pen tests last year is it's probably over that now you know we have these sort of top 10 ways that we think and find people coming into the environment the funniest thing is that only one of them is a cve related vulnerability like uh you know you guys know what they are right so it's it but it's it's like two percent of the attacks are occurring through the cves but yeah there's all that attention spent to that and very little attention spent to this pen testing side which is sort of this continuous threat you know monitoring space and and this vulnerability space where I think we play a such an important role and I'm so excited to be a part of the tip of the spear on this one yeah I'm old enough to know the movie sneakers which I loved as a you know watching that movie you know professional hackers are testing testing always testing the environment I love this I got to ask you as we kind of wrap up here Chris if you don't mind the the benefits to Professional Services from this Alliance big news Splunk and you guys work well together we see that clearly what are what other benefits do Professional Services teams see from the Splunk and Horizon 3.ai Alliance so if you're I think for from our our from both of our uh Partners uh as we bring these guys together and many of them already are the same partner right uh is that uh first off the licensing model is probably one of the key areas that we really excel at so if you're an end user you can buy uh for the Enterprise by the number of IP addresses you're using um but uh if you're a partner working with this there's solution ways that you can go in and we'll license as to msps and what that business model on msps looks like but the unique thing that we do here is this C plus license and so the Consulting plus license allows like a uh somebody a small to mid-sized to some very large uh you know Fortune 100 uh consulting firms use this uh by buying into a license called um Consulting plus where they can have unlimited uh access to as many IPS as they want but 
And as you can imagine, when we're going in and cracking passwords, checking hashes, and decrypting hashes, that can take a while — but for the right customer it's a perfect tool. I'm so excited about our ability to go to market with our partners, so that we understand them: not just how to sell to them, or sell through them, but how to sell with them as a good vendor partner. I think that's one thing we've done a really good job building as we bring this to market. >> Yeah, and I think Splunk has also had great success with how they've enabled partners and professional services. >> Absolutely. >> The services that layer on top of Splunk are multi-fold — tons of great benefits — so you guys vector right into that and ride that wave. >> And the cool thing is that one of our reports, which can be completely customized with someone else's logo, is what we generate. I used to work in another organization — it wasn't Splunk — where we did pen testing for customers. My pen testers would come on site, do the engagement, and leave; then a while later someone would say, "oh shoot, we've got another sector that was breached," and they'd call you back four weeks later. By August our entire pen testing team would be sold out, and it would be, "well, maybe March," and they'd say, "no, no, I've got a breach now." And when they do go in, they run the pen test, hand over a PDF, give you a pat on the back, and say, "there's where your problems are; you need to fix them." The reality is that what we generate — completely autonomously, with no human interaction — is every permutation of anything we found, plus the fix for those permutations. And once you've fixed everything, you just go back and run another pen test. For what people pay for one pen test, they can have a tool that does it every Patch Tuesday — and then on Wednesday, you know, you triage throughout the week: green, yellow, red. >> I want to see the colors. Show me green — green is good, right? Not red. And what CIO doesn't want that dashboard? >> Exactly, and we can help bring that. I'm really excited about helping drive this with the Splunk team, because they get it. They understand the green-yellow-red dashboard and how we help them find more green so the other guys are in the red. >> Yeah, and get into the data, do the right thing, be efficient with how you use the data, know what to look at — so many things to pay attention to. The combination of both, and the go-to-market strategy — really brilliant. Congratulations, Chris. Thanks for coming on and sharing this news, with the detail around Splunk in action and the alliance. >> Thanks for having me, John. My pleasure. Look forward to seeing you soon. >> All right, great. We'll follow up and do another segment on DevOps and IT and security teams as the new ops, and supercloud, and a bunch of other stuff. So thanks for coming on. And in our next segment, the CEO of Horizon3.ai will break down all the new news for us here on theCUBE. You're watching theCUBE, the leader in high-tech enterprise coverage. (upbeat music) >> Yeah, the partner program for us has been fantastic. I think, prior to that, most organizations — most partners, most MSSPs — might not necessarily have a bench at all for penetration testing.
Maybe they subcontract that work out, or maybe they do it themselves, but trying to staff that kind of position can be incredibly difficult. For us, this was a differentiator: a new partnership that allowed us not only to perform services for our customers but to provide a product they can use to do it themselves. We work with our customers in a variety of ways. Some of them want more routine testing and perform it themselves, but we're also a certified service provider of Horizon3 — able to perform penetration tests, help review the data, and provide color and analysis for our customers in a broader sense: not just the black-and-white elements of what's critical, what's high, what's medium, what's low, and what you need to fix, but whether there are systemic issues. This has allowed us to onboard new customers and to migrate some penetration testing services to us from competitors in the marketplace. But ultimately this is happening because the product and the outcome are special — they're unique and they're effective. Our customers like what they're seeing, they like the routineness of it, and many of them, again, like doing this themselves, being able to pen test parts of their own networks. And then there are the new use cases: I'm a large organization, I have eight to ten acquisitions per year — wouldn't it be great to have a tool to perform an internal and external penetration test of that acquisition before we integrate the two companies and maybe bring on some risk? It's a very effective partnership, one that has really taken our engineers and our account executives by storm. This is a partnership that's been very valuable to us. (upbeat music) >> A key part of the value and business model at Horizon3 is enabling partners to leverage NodeZero to make more revenue for themselves. Our goal is that sixty percent of our revenue this year will be originated by partners, and 95 percent of our revenue next year will be originated by partners, so a key part of that strategy is making us an integral part of your business model as a partner. A key quote from one of our partners is that we enable every one of their business units to generate revenue. Let's talk about that in a little more detail. First, if you have a pen test consulting business — take Deloitte as an example — what was six weeks of human labor per pen test has been cut down to four days of labor, using NodeZero to conduct reconnaissance, find all the juicy, interesting areas of the enterprise that are exploitable, and assess the entire organization; all of those details then get served up to the human, who can look at them, understand them, and determine where to probe deeper. So in that pen test consulting business, NodeZero becomes a force multiplier: those consulting teams are able to cover way more accounts, and way more IPs within those accounts, with the same or fewer consultants, and that leads directly to profit margin expansion for the pen testing business itself, because NodeZero is a force multiplier. The second business model: if you're an MSSP, you're already making money providing defensive cyber security operations for a large volume of customers, so what they do is license NodeZero and use us as an upsell to their MSSP business.
They start to deliver continuous red teaming, continuous verification, or purple teaming as a service, and in that business model they've got an additional line of revenue: they can increase the spend of their existing customers by bolting on NodeZero as a purple-team-as-a-service offering. The third business model, or customer type, is the IT services provider. As an IT services provider, you make money installing and configuring security products like Splunk or CrowdStrike or Humio, you make money reselling those products, and you make money generating follow-on services to continue hardening your customer environments. What those IT service providers do is use us to verify that they've installed Splunk correctly — prove to their customer that Splunk, or CrowdStrike, was installed correctly using our results — and then use our results to drive follow-on services and revenue. And finally there's the value-added reseller, which is just a straight-up reseller. Because of how fast our sales cycles are, these VARs are typically able to go from cold email to deal close in six to eight weeks. At Horizon3, a single sales engineer is able to run 30 to 50 POCs concurrently, because our POCs are very lightweight and don't require any on-prem customization or heavy pre-sales and post-sales activity; as a result, we're able to have a small number of sellers driving a lot of revenue and volume for us. The same thing applies to VARs: there isn't a lot of effort to sell the product or prove its value, so VARs are able to sell a lot more Horizon3 NodeZero product without having to build up a huge specialist sales organization. What I'm going to do is talk through scenario three, the IT service provider, and just how powerful NodeZero can be in driving additional revenue. Think of it this way: for every one dollar of NodeZero license purchased by the IT service provider to do their business, it generates roughly ten dollars of additional revenue for that partner. In this example, Kinney Group uses NodeZero to verify that they have installed and deployed Splunk correctly. Kinney Group is a Splunk partner; they sell IT services to install, configure, deploy, and maintain Splunk, and as they deploy Splunk they use NodeZero to attack the environment and make sure that the right logs, alerts, and monitoring are being handled within the Splunk deployment. It's a way of doing QA, of verifying that Splunk has been configured correctly, and Kinney Group uses that internally to prove the quality of the services they've just delivered. Then they show, and leave behind, that NodeZero report with their client, and that creates a resell opportunity for Kinney Group to resell NodeZero to the client, because the client sees the reports and the results and says, "wow, this is pretty amazing." Those reports can be co-branded: a pen testing report branded with Kinney Group, but with "powered by Horizon3" under it. From there, Kinney Group takes the fix actions report that's automatically generated with every NodeZero pen test and uses it as the starting point for a statement of work to sell follow-on services to fix all of the problems NodeZero identified — fixing LLMNR misconfigurations, patching VMware, updating credential policies, and so on. So what happens is that NodeZero has found a bunch of problems the client often lacks the capacity to fix.
Kinney Group can use that lack of capacity as a follow-on sales opportunity for follow-on services. And finally, based on the findings from NodeZero, Kinney Group can look at that report and say to the customer, "if you bought CrowdStrike, you'd be able to prevent NodeZero from attacking and succeeding in the way that it did — or if you bought Humio, or Palo Alto Networks, or some privileged access management solution," because of what NodeZero was able to do with credential harvesting and attacks. As a result, Kinney Group is able to resell other security products within their portfolio — CrowdStrike Falcon, Humio, Palo Alto Networks, Demisto, Phantom, and so on — based on the gaps identified by NodeZero in that pen test. That creates another feedback loop: Kinney Group then goes and uses NodeZero to verify that the CrowdStrike product has actually been installed and configured correctly, and this becomes the cycle of using NodeZero to verify a deployment, using that verification to drive follow-on services and resell opportunities, which then further drives more usage of the product. Now, the way we license is a usage-based licensing model, so the partner grows their NodeZero Consulting Plus license as they grow their business. For example, if you're Kinney Group, in week one you use NodeZero to verify your Splunk install; in week two, if you have a pen testing business, you use NodeZero as a force multiplier for your pen testing client opportunity; and in week three, if you have an MSSP business, you use NodeZero to execute a purple team MSSP offering for your clients. And not necessarily a Kinney Group: if you're a Deloitte or an AT&T — these larger companies with multiple lines of business — or if you're an Optiv, for instance, all you have to do is buy one Consulting Plus license and you're able to run as many pen tests as you want, sequentially. So you can buy a single license and use it to meet your week-one client commitments, then your week-two, then your week-three. As you grow your business, you start to run multiple pen tests concurrently — in week one you've got to verify a Splunk install, run a pen test, and deliver a purple team engagement — and you simply expand from one Consulting Plus license to three. So as you systematically grow your business, you grow your NodeZero capacity with it, giving you predictable COGS, predictable margins, and, once again, a 10x additional revenue opportunity for that investment in the NodeZero Consulting Plus license.
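The usage-based growth model described here is mostly arithmetic: the number of concurrent engagements in a given week sets how many Consulting Plus licenses a partner holds, and the pitch is that each license dollar drives roughly ten dollars of follow-on partner revenue. A rough back-of-the-envelope sketch — the normalized cost, the fixed 10x multiplier, and the sample weekly workload are placeholders, not Horizon3.ai pricing:

```python
# Back-of-the-envelope capacity and revenue model for the Consulting Plus license.
# Pricing and the revenue multiplier are placeholders, not actual Horizon3.ai figures.

LICENSE_COST = 1.0          # normalized: 1 unit of license spend
REVENUE_MULTIPLIER = 10.0   # the "$1 of license drives ~$10 of partner revenue" claim

# Week-by-week concurrent engagements for a partner with three lines of business.
weeks = {
    "week1": {"splunk_verify": 1},
    "week2": {"splunk_verify": 1, "pentest_client": 1},
    "week3": {"splunk_verify": 1, "pentest_client": 1, "purple_team_mssp": 1},
}

for week, engagements in weeks.items():
    # Each license runs one test at a time, so concurrency == licenses needed.
    concurrent = sum(engagements.values())
    implied_revenue = concurrent * LICENSE_COST * REVENUE_MULTIPLIER
    print(f"{week}: concurrent tests={concurrent}, "
          f"licenses needed={concurrent}, implied partner revenue={implied_revenue:.1f}")
```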
>> My name is Snehal, and I'm the co-founder and CEO here at Horizon3. I'm going to talk to you today about why it's important to look at your enterprise through the eyes of an attacker. The challenge I had as a CIO in banking, as the CTO at Splunk, and serving within the Department of Defense, is that I had no idea whether I was secure until the bad guys showed up. Am I logging the right data? Am I fixing the right vulnerabilities? Are the security tools I've paid millions of dollars for actually working together to defend me? The answer is: I don't know. Does my team actually know how to respond to a breach in the middle of an incident? I don't know — I've got to wait for the bad guys to show up. So the challenge I had was how to proactively verify our security posture. I tried a variety of techniques. The first was vulnerability scanners, and the challenge with vulnerability scanners is that being vulnerable doesn't mean you're exploitable: I might have a hundred thousand findings from my scanner, of which maybe five or ten can actually be exploited in my environment. The other big problem with scanners is that they can't chain weaknesses together from machine to machine. If you've got a thousand machines in your environment, or more, a vulnerability scanner will tell you that you have a problem on machine one and, separately, a problem on machine two — but what it can't tell you is that an attacker could use a low from machine one plus a low from machine two to equal a critical in your environment. And that's what attackers do in their tactics: they chain together misconfigurations, dangerous product defaults, harvested credentials, and exploitable vulnerabilities into attack paths across different machines.
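The scanner limitation described above — a low on machine one plus a low on machine two adding up to a critical — is fundamentally a graph problem. Here is a toy illustration of why a flat per-host findings list misses what a path search composes; the hosts, weaknesses, and edges are invented for the example and say nothing about how NodeZero actually models its data.

```python
from collections import deque

# Toy attack graph: each edge means "weakness W lets you move from state A to state B".
# A per-host scanner reports each edge in isolation; a path search composes them.
edges = {
    "foothold":           [("low: LLMNR poisoning on host1", "ntlm_hash")],
    "ntlm_hash":          [("low: weak password policy",      "domain_user")],
    "domain_user":        [("low: overly broad AD group",     "local_admin_host2")],
    "local_admin_host2":  [("medium: cached credentials",     "domain_admin")],
}

def find_path(start, goal):
    """Breadth-first search over chained weaknesses from the foothold to the goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for weakness, nxt in edges.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [weakness]))
    return None

for step in find_path("foothold", "domain_admin"):
    print(step)
# Individually these are "low"/"medium" findings; chained, they are domain compromise.
```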
So, to address those attack paths across different machines, I tried layering in consulting-based pen testing, and the issue is that when you've got thousands of hosts, or hundreds of thousands of hosts, in your environment, human-based pen testing simply doesn't scale to test an infrastructure of that size. Moreover, when they actually do execute a pen test and you get the report, oftentimes you lack the expertise within your team to quickly retest and verify that you've actually fixed the problem, so you end up with pen test reports that are incomplete snapshots and quickly going stale. To mitigate that problem, I tried using breach and attack simulation tools, and the struggle with those tools is: one, I had to install credentialed agents everywhere; two, I had to write my own custom attack scripts, which I didn't have much talent for and also had to maintain as my environment changed; and three, these types of tools were not safe to run against production systems, which were the majority of my attack surface. That's why we went off to start Horizon3. Tony and I met when we were in Special Operations together, and the challenge we wanted to solve was how to do infrastructure security testing at scale by putting the power of a 20-year pen testing veteran into the hands of an IT admin or a network engineer in just three clicks. The whole idea is that we enable the fixers — the blue team — to run NodeZero, our pen testing product, to quickly find problems in their environment; that blue team then goes off and fixes the issues that were found, and then they can quickly rerun the attack to verify that they fixed the problem. And we deliver that without requiring custom scripts to be developed, without requiring credentialed agents to be installed, and without requiring external third-party consulting or professional services: self-service pen testing to quickly drive find, fix, verify. There are three primary use cases our customers use us for. The first is the SOC manager, who uses us to verify that their security tools are actually effective: to verify that they're logging the right data in Splunk or in their SIEM; to verify that their managed security services provider is able to quickly detect and respond to an attack, and to hold them accountable for their SLAs; to verify and measure that the SOC understands how to quickly detect and respond; or to verify that the variety of tools in their stack — most organizations have 130-plus cyber security tools, none of which are designed to work together — are actually working together. The second primary use case is proactively hardening and verifying your systems. This is when the IT admin or network engineer runs self-service pen tests to verify that their Cisco environment is installed, hardened, and configured correctly, or that their credential policies are set up right, or that their vCenter, WebSphere, or Kubernetes environments are actually designed to be secure. What this allows the IT admins and network engineers to do is shift from running one or two pen tests a year to 30, 40, or more pen tests a month, and you can actually wire those pen tests into your DevOps process, or into your detection engineering and change management processes, to automatically trigger pen tests every time there's a change in your environment.
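Wiring pen tests into a DevOps or change-management process, as described above, usually comes down to a small hook: when a change lands, kick off a narrowly scoped test and gate the change (or open a ticket) on any new critical findings. The sketch below fakes the pen test client with an in-memory stub so it runs as-is; in practice those two calls would hit whatever API or CLI your testing product exposes, and nothing here is a real Horizon3.ai SDK.

```python
import time

# Stand-in client: a fake, in-memory simulation of a pen test API so the hook
# below is runnable. Replace these two functions with real API/CLI calls.
_FAKE_RESULTS = {"done": True, "critical_findings": ["SMB signing not required on dc01"]}

def trigger_pentest(scope: str) -> str:
    print(f"starting scoped pen test against {scope}")
    return "op-123"

def get_results(op_id: str) -> dict:
    return _FAKE_RESULTS

def on_change_deployed(changed_scope: str, timeout_s: int = 3600) -> bool:
    """CI/CD or change-management hook: re-test after a change, gate on criticals."""
    op_id = trigger_pentest(changed_scope)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        results = get_results(op_id)
        if results["done"]:
            criticals = results["critical_findings"]
            if criticals:
                print(f"blocking change: {len(criticals)} new critical finding(s)")
                return False
            return True
        time.sleep(30)
    print("pen test did not finish in time; flag for manual review")
    return False

print("change allowed:", on_change_deployed("10.1.20.0/24"))
```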
The third primary use case is for those organizations lucky enough to have their own internal red team: they use NodeZero to do reconnaissance and exploitation at scale, and then use the output as a starting point for the humans to step in and focus on the really hard, juicy stuff that gets them on stage at DEF CON. So those are the three primary use cases, and what we'll do now is zoom into the find-fix-verify loop, because what I've found in my experience is that find, fix, verify is the future operating model for cyber security organizations. What I mean is this. In the find, using continuous pen testing, what you want to enable is on-demand, self-service pen tests. You want those pen tests to find attack paths at scale, spanning your on-prem infrastructure, your cloud infrastructure, and your perimeter, because attackers don't only stay in one place: they will find ways to chain together a perimeter breach and a credential from your on-prem environment to gain access to your cloud, or some other permutation. The third part of continuous pen testing is that attackers don't focus on critical vulnerabilities anymore — they know we've built vulnerability management programs to reduce those vulnerabilities — so attackers have adapted: what they do is chain together misconfigurations in your infrastructure, software, and applications with dangerous product defaults, with exploitable vulnerabilities, and with credentials collected through a mix of techniques at scale. Once you've found those problems, the next question is what to do about them. You want to be able to prioritize fixing the problems that are actually exploitable in your environment and that truly matter, meaning they'll lead to domain compromise, domain user compromise, or access to your sensitive data. The second thing you want to fix is making sure you understand what risk your crown-jewels data is exposed to. Where is your crown-jewels data — in the cloud, on-prem? Has it been copied to a share drive you weren't aware of? If a domain user was compromised, could they access that crown-jewels data? You want to use the attacker's perspective to secure the critical data you have in your infrastructure. And then finally, as you fix these problems, you want to quickly remediate and retest that you've actually fixed the issue — and this find-fix-verify cycle becomes the accelerator that drives purple team culture. The third part is verify. What you want to be able to do in the verify step is verify that your security tools, processes, and people can effectively detect and respond to a breach. You want to integrate that into your detection engineering processes so you know you're catching the right security rules and have deployed the right configurations; you want to make sure that your environment is adhering to best practices around systems hardening and cyber resilience; and finally, you want to be able to prove your security posture over time to your board, your leadership, and your regulators. So what I'll do now is zoom into each of these three steps. When we zoom into find, here's the first example, using NodeZero and autonomous pen testing. What an attacker will do is find a way to break through the perimeter — in this example, it's very easy to misconfigure Kubernetes to allow an attacker to gain remote code execution into your on-prem Kubernetes environment and break through the perimeter. From there, the attacker conducts network reconnaissance and finds ways to gain code execution on other machines in the environment. As they get code execution, they start to dump credentials, collect a bunch of NTLM hashes, crack those hashes using open source and dark-web-available data, and then reuse those credentials to log in and laterally maneuver throughout the environment. As they laterally maneuver, they can reuse those credentials and use credential-spraying techniques and so on to compromise your business email and to log in as admin into your cloud. This is a very common attack, and rarely is a CVE actually needed to execute it — often it's just a misconfiguration in Kubernetes with a bad credential or password policy, combined with bad practices of credential reuse across the organization. Here's another example of an internal pen test, and this is from an actual customer. They had 5,000 hosts within their environment, they had EDR and UBA tools installed, and they initiated an internal pen test from a single machine. From that single initial access point, NodeZero enumerated the network, conducted reconnaissance, and found that five thousand hosts were accessible.
What NodeZero does under the covers is organize all of that reconnaissance data into a knowledge graph we call the cyber terrain map, and that cyber terrain map becomes the key data structure we use to efficiently maneuver, attack, and compromise your environment. So NodeZero tries to find ways to get code execution, reuse credentials, and so on. In this customer example, they had Fortinet installed as their EDR, but NodeZero was still able to get code execution on a Windows machine. From there it was able to successfully dump credentials, including sensitive credentials from the LSASS process on the Windows box, and then reuse those credentials to log in as domain admin in the network — and once an attacker becomes domain admin, they have the keys to the kingdom; they can do anything they want. So what happened here? It turned out Fortinet was misconfigured on three out of 5,000 machines — bad automation — and the customer had no idea this had happened; they would have had to wait for an attacker to show up to realize it was misconfigured. The second question is: why didn't Fortinet stop the credential pivot and the lateral movement? It turned out the customer hadn't bought the right modules or turned on the right services within that particular product — and we see this not only with Fortinet, but with Trend Micro and all the other defensive tools, where it's very easy to miss a checkbox in the configuration that would do things like prevent credential dumping. The next story I'll tell you: attackers don't have to hack in, they log in. In another infrastructure pen test, a typical technique attackers use is man-in-the-middle attacks that collect hashes. What an attacker will do is leverage a tool or technique called Responder to collect NTLM hashes being passed around the network — there's a variety of reasons why those hashes are passed around, and it's a pretty common misconfiguration. As an attacker collects those hashes, they start to apply techniques to crack them: they take the hashes and, using open source intelligence, common password structures and patterns, and other techniques, try to crack them into cleartext passwords. Here, NodeZero automatically collected hashes, automatically passed them off to crack those credentials, and then took the domain user IDs and passwords it collected and tried to access different services and systems in the enterprise. In this case, NodeZero was able to successfully gain access to the Office 365 email environment because three employees didn't have MFA configured. Now NodeZero has placement and access in the business email system, which sets up the conditions for fraud, lateral phishing, and other techniques. But what's especially insightful here is that 80 percent of the hashes collected in this pen test were cracked in 15 minutes or less — 80 percent. Twenty-six percent of the user accounts had a password that followed a pretty obvious pattern: first initial, last initial, and four random digits. The other interesting thing is that 10 percent of service accounts had a user ID that was the same as their password — VMware admin / VMware admin, WebSphere admin / WebSphere admin, and so on and so forth. So attackers don't have to hack in; they just log in with the credentials they've collected.
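The credential statistics quoted here — 80 percent of hashes cracked in 15 minutes or less, 26 percent of passwords following a first-initial, last-initial, four-digit pattern, 10 percent of service accounts with the username as the password — are exactly the kind of numbers a defender can compute from their own audit output. Below is a small reporting sketch over hypothetical audit records; the field names and sample rows are invented for illustration.

```python
import re

# Hypothetical audit records, e.g. exported from a password audit or pen test report.
records = [
    {"user": "jsmith",          "cracked_minutes": 4,    "password": "js4921",        "service_account": False},
    {"user": "mjones",          "cracked_minutes": 11,   "password": "mj1987",        "service_account": False},
    {"user": "vmware_admin",    "cracked_minutes": 2,    "password": "vmware_admin",  "service_account": True},
    {"user": "websphere_admin", "cracked_minutes": None, "password": None,            "service_account": True},
    {"user": "akhan",           "cracked_minutes": 95,   "password": "Winter2022!",   "service_account": False},
]

PATTERN = re.compile(r"^[a-z]{2}\d{4}$")   # first initial + last initial + 4 digits

total = len(records)
cracked_fast = sum(1 for r in records
                   if r["cracked_minutes"] is not None and r["cracked_minutes"] <= 15)
pattern_hits = sum(1 for r in records
                   if r["password"] and PATTERN.match(r["password"]))
svc = [r for r in records if r["service_account"]]
user_eq_pass = sum(1 for r in svc if r["password"] == r["user"])

print(f"cracked in <=15 min: {cracked_fast / total:.0%}")
print(f"obvious initials+digits pattern: {pattern_hits / total:.0%}")
print(f"service accounts with username == password: {user_eq_pass / len(svc):.0%}")
```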
The next story is becoming AWS admin. In this example — once again an internal pen test — NodeZero gets initial access and discovers that 2,000 hosts are network reachable from that environment. It fingerprints and organizes all of that data into a cyber terrain map, and from there it fingerprints that HPE iLO, the Integrated Lights-Out service, is running on a subset of hosts. iLO is a service that is often not instrumented or observed by security teams, nor is it easy to patch; as a result, attackers know this and immediately go after those types of services. In this case that iLO service was exploitable, and we were able to get code execution on it. iLO stores user IDs and passwords in clear text in a particular set of processes, so once we gained code execution we were able to dump all of the credentials and then laterally maneuver to log in to the Windows box next door as admin. On that admin box we were able to gain access to the share drives, and we found a credentials file saved on a share drive. It turned out that credentials file was the AWS admin credentials file, giving us full admin authority to their AWS accounts. Not a single security alert was triggered in this attack, because the customer wasn't observing the iLO service, and every step thereafter was a valid login in the environment. So what do you do? Step one, patch the server. Step two, delete the credentials file from the share drive. Step three, get better instrumentation on privileged access users and logins. The final story I'll tell is a typical pattern we see across the board that combines the various techniques I've described. An attacker goes off and uses open source intelligence to find all of the employees who work at your company; from there, they look those employees up in dark web breach databases and other sources of information, and use that as a starting point to password-spray and compromise a domain user. All it takes is one employee reusing a breached password for their corporate email, or a single employee with a weak, easily guessable password — all it takes is one. Once the attacker gains domain user access, in most shops domain user is also local admin on their laptop, and once you're local admin you can dump SAM and get local admin NTLM hashes; you can reuse those credentials to become local admin on neighboring machines, and attackers rinse and repeat. Eventually they get to a point where they can dump LSASS — by unhooking the antivirus, defeating the EDR, or finding a misconfigured EDR, as we've talked about earlier — to compromise the domain. What's consistent is that the fundamentals are broken at these shops: poor password policies, least-privilege access not implemented, Active Directory groups that are too permissive where domain admin or domain user is also the local admin, AV or EDR solutions that are misconfigured or easily unhooked, and so on. And what we've found across 10,000 pen tests is that user behavior analytics tools never caught us in that lateral movement, in part because those tools require pristine logging data in order to work, and also because it becomes very difficult to establish a baseline of normal versus abnormal credential login usage. Another interesting insight: there were several marquee, brand-name MSSPs defending our customers' environments, and for them it took seven hours to detect and respond to the pen test — seven hours — when the pen test was over in less than two hours. What you had was an egregious violation of the service level agreements that MSSP had in place, and the customer was able to use us to get service credit and drive accountability of their SOC and of their provider.
The third interesting thing: in one case it took us seven minutes to become domain admin in a bank. That bank had every Gucci security tool you could buy, yet in seven minutes and 19 seconds NodeZero started as an unauthenticated member of the network and escalated privileges — through chaining misconfigurations, lateral movement, and so on — to become domain admin. If it's seven minutes today, we should assume it'll be less than a minute a year or two from now, making it very difficult for humans to detect and respond to that type of blitzkrieg attack. So that's the find. It's not just about finding problems, though — the bulk of the effort should be what to do about it: the fix and the verify. As you find those problems — back to Kubernetes as an example — we show you the path: here is the kill chain we took to compromise that environment. We show you the impact: here is the proof of exploitation we used to compromise it, and there's the actual command we executed, so you could copy and paste that command and compromise that kubelet yourself if you wanted to. Then the impact: we got code execution, and we'll show you — here's the impact, this is a critical, here's why (it enabled perimeter breach), the affected applications, the specific IPs where you've got the problem, how it maps to the MITRE ATT&CK framework, and then exactly how to fix it. We also show you what this problem enabled, so you can accurately prioritize why it is or isn't important. The next part is accurate prioritization. The hardest part of my job as a CIO was deciding what not to fix. Take "SMB signing not required" as an example: by default, that CVSS score is a one out of ten — and this misconfiguration isn't even a CVE, it's a misconfig — but it enabled an attacker to gain access to 19 credentials, including one domain admin and two local admins, and access to a ton of data. Because of that context, this is really a ten out of ten.
You had better fix this as soon as possible. However, of the seven occurrences we found, it's only a critical in three out of the seven — these are the three specific machines, and we'll tell you the exact way to fix it, and you'd better fix those as soon as possible — while for these four machines over here, it didn't let us do anything of consequence. Because the hardest part is deciding what not to fix, you can justifiably choose not to fix those four issues right now, add them to your backlog, and surge your team to fix these three as quickly as possible. And once you've fixed those three, you don't have to re-run the entire pen test: you can select those three, click one-click verify, and run a very narrowly scoped pen test that tests only that specific issue — and what that creates is a much faster cycle of finding and fixing problems. The other part of fixing is verifying that you don't have sensitive data at risk. Once we become a domain user, we're able to use those domain user credentials to try to gain access to databases, file shares, S3 buckets, Git repos, and so on, and help you understand what sensitive data you have at risk. In this example, a green checkbox means we logged in as a valid domain user and were able to get read-write access on the database; this is how many records we could have accessed. We don't actually look at the values in the database, but we show you the schema so you can quickly characterize that PII data was at risk, and we do that for your file shares and other data sources as well. So now you can accurately articulate the data you have at risk and prioritize cleaning it up — especially data that would lead to a fine or a big news issue.
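The data-at-risk check described here — take credentials recovered during an authorized test and report what they can reach, as schemas and counts rather than values — can be reproduced for a single source such as S3 with a few boto3 calls. This is a hedged sketch of the idea, not Horizon3.ai's implementation; it lists object keys and counts only, never contents, and should only ever be run against accounts you are authorized to assess.

```python
import boto3
from botocore.exceptions import ClientError

def data_at_risk_report(access_key, secret_key, buckets):
    """For each bucket, report whether the harvested credentials can read it and how
    many objects are exposed -- keys and counts only, never file contents."""
    s3 = boto3.client(
        "s3",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    report = []
    for bucket in buckets:
        try:
            resp = s3.list_objects_v2(Bucket=bucket, MaxKeys=1000)
            sample = [obj["Key"] for obj in resp.get("Contents", [])[:5]]
            report.append({
                "bucket": bucket,
                "readable": True,
                "objects_visible": resp.get("KeyCount", 0),
                "sample_keys": sample,   # enough to characterize the data, not read it
            })
        except ClientError:
            report.append({"bucket": bucket, "readable": False})
    return report

# Usage (with credentials recovered during an authorized assessment only):
# for row in data_at_risk_report("AKIA...", "...", ["corp-finance-exports"]):
#     print(row)
```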
So that's the find and the fix; now let's talk about the verify. The key part of verify is embracing and integrating with detection engineering practices. When you think about your layers of security tools, you've got lots of tools in place — on average 130 tools at any given customer — but those tools were not designed to work together. So when you run a pen test, what you want to ask is: did you detect us, did you log us, did you alert on us, did you stop us? And from there, what you want to see is which techniques are commonly used to actually compromise an environment. If you look at the top ten techniques we use — and there are far more than these ten, but they're the most often executed — nine out of ten have nothing to do with CVEs. It's misconfigurations, dangerous product defaults, bad credential policies, and how we chain those together to become domain admin or compromise a host. So what customers get is that every single attacker command we executed is provided to you as an attacker activity log: you can see every command we ran, the timestamp it was executed, the host it executed on, and how it maps to MITRE ATT&CK tactics. Our customers put those attacker logs on one screen, and then they go look in Splunk, or Exabeam, or SentinelOne, or CrowdStrike, and ask: did you detect us, did you log us, did you alert on us, or not? To make that even easier, take this example: hey Splunk, what logs did you see at this time on the VMware host? — because that's when NodeZero was able to dump credentials — and that allows you to identify and fix your logging blind spots. To make it easier still, we've got app integration. This is an actual Splunk app in the Splunk app store, and inside the Splunk console itself you can fire up the Horizon3 NodeZero app. All of the pen test results are there, so you can see everything in one place without having to jump out of the tool. As I skip forward, what it shows you is: here's a pen test, here are the critical issues we identified, here — for that weak or default credential issue — are the exact commands we executed, and then we will automatically query Splunk for everything between these times on that endpoint that relates to this attack. So you can now quickly, within the Splunk environment itself, figure out whether you're missing logs or appropriately catching this issue, and that becomes incredibly important in the detection engineering cycle I mentioned earlier.
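The "did you log us" check — take an attacker command's host and timestamp and ask Splunk what it ingested in that window — reduces to a templated SPL search. Below is a sketch using Splunk's REST search API via `requests`; the index wildcard, the token placeholder, and the five-minute padding are assumptions, and the actual Horizon3.ai Splunk app performs this correlation inside the console for you.

```python
import requests

SPLUNK = "https://splunk.example.com:8089"      # management port; assumed hostname
TOKEN = "REPLACE_WITH_SPLUNK_AUTH_TOKEN"

def logs_around(host, epoch, pad_s=300):
    """Ask Splunk what it ingested for `host` in a window around an attacker action,
    grouped by sourcetype, to spot logging blind spots."""
    spl = (
        f'search index=* host="{host}" '
        f'earliest={epoch - pad_s} latest={epoch + pad_s} '
        f'| stats count by sourcetype'
    )
    resp = requests.post(
        f"{SPLUNK}/services/search/jobs/export",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"search": spl, "output_mode": "json"},
        verify=False,   # lab setting only; use proper TLS verification in production
        timeout=120,
    )
    resp.raise_for_status()
    return resp.text    # newline-delimited JSON result rows

# Example: credentials were dumped on the VMware host at this epoch timestamp.
# print(logs_around("vmware-esx01.corp.local", 1664400000))
# An empty result set for that window is a logging blind spot worth fixing.
```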
So how do our customers end up using us? They shift from running one pen test a year to 30 or 40 pen tests a month, oftentimes wiring us into their deployment automation to automatically run pen tests. As they run more pen tests, they find more issues, but eventually they hit an inflection point where they're able to rapidly clean up their environment — and that inflection point comes because the red and blue teams start working together in a purple team culture, proactively hardening the environment together. Our customers also run us from different perspectives. They'll first run an RFC 1918 scope to see, once the attacker gained initial access in a part of the network with wide access, what they could do. Then they'll run us within a specific network segment: from within that segment, could the attacker break out and gain access to another segment? Then they'll run us from their work-from-home environment: could they traverse the VPN and do something damaging, and once in, could they traverse the VPN and get into my cloud? Then they'll break in from the outside. All of these perspectives are available to you in Horizon3 and NodeZero as a single SKU, and you can run as many pen tests as you want. If you run a phishing campaign and find that an intern in the finance department had the worst phishing behavior, you can inject their credentials and show the end-to-end story of how an attacker phished, gained an intern's credentials, and used them to gain access to sensitive financial data. So what our customers end up doing is running multiple attacks from multiple perspectives and looking at those results over time. I'll leave you with two things. One: what is the AI in Horizon3.ai? Those knowledge graphs are the heart and soul of everything we do, and we use machine learning and reinforcement learning techniques, Markov decision models, and so on to efficiently maneuver and analyze the paths in those really large graphs. We also use context-based scoring to prioritize weaknesses, and we drive collective intelligence across all of the operations — the more pen tests we run, the smarter we get — and all of that is based on our knowledge graph analytics infrastructure. Finally, I'll leave you with my decision criteria when I was a buyer for my security testing strategy. What I cared about was coverage: I wanted to be able to assess my on-prem, cloud, perimeter, and work-from-home environments, and be safe running in production. I wanted to do that as often as I wanted, and to run pen tests in hours or days — not weeks or months — so I could accelerate that find-fix-verify loop. I wanted my IT admins and network engineers, with limited offensive experience, to be able to run a pen test in a few clicks through a self-service experience, without having to install agents and without having to write custom scripts. And finally, I didn't want to get nickeled and dimed on buying different types of attack modules or different types of attacks: I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted, so I could look at my trends and directions over time. So I hope you found this talk valuable. We're easy to find, and I look forward to seeing you use the product and letting our results do the talking.

>> When you look at the way our pen testing algorithms work, we dynamically select how to compromise an environment based on what we've discovered, and the goal is to become domain admin, compromise a host, compromise domain users, and find ways to encrypt data or steal sensitive data, and so on. But when you look at the top ten techniques we ended up using to compromise environments, the first nine have nothing to do with CVEs — and that's the reality. CVEs are a vector, yes, but less than two percent of CVEs are actually used in a compromise. Oftentimes it's some form of credential collection, credential cracking, or credential pivoting, using that to become an admin and then compromising environments from that point on. I'll leave this up for you to read through — you'll have the slides available — but I found it very insightful that organizations, ourselves included when I was at GE, invested heavily in standard vulnerability management programs. When I was at DOD, all DISA cared to ask us about was our CVE posture. But the attackers have adapted to not rely on CVEs to get in, because they know organizations are actively looking at and patching those CVEs; instead, they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment. A concrete example: by default, vCenter backups are not encrypted, so if an attacker finds vCenter, they'll find the backup location, and there are specific vCenter MTD files where the admin credentials are persisted in the binaries. As an attacker, you can find the right MTD file, parse out the binary, and now you've got the admin credentials for the vCenter environment and can start to log in as admin. There's also a bad habit among signal officers and signal practitioners in the Army and elsewhere where the VM notes section of a virtual image holds the password for the VM. Those VM notes are not stored encrypted; attackers know this, and they go find the unencrypted VMs, find the notes section, pull out the passwords for those images, and then reuse those credentials across the board.
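The unencrypted VM-notes habit called out above is easy to audit from the defender's side: walk every VM in vCenter and flag annotation fields that look like they contain credentials. Here is a hedged sketch using pyVmomi; the regular expression and connection details are assumptions, and you would want to run it with read-only credentials.

```python
import re
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

SUSPICIOUS = re.compile(r"(password|passwd|pwd)\s*[:=]", re.IGNORECASE)

def flag_vm_notes(host, user, pwd):
    """List VMs whose notes/annotation field looks like it contains credentials."""
    ctx = ssl._create_unverified_context()      # lab only; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        flagged = []
        for vm in view.view:
            notes = (vm.config.annotation or "") if vm.config else ""
            if SUSPICIOUS.search(notes):
                flagged.append(vm.name)
        return flagged
    finally:
        Disconnect(si)

# for name in flag_vm_notes("vcenter.corp.local", "readonly@vsphere.local", "..."):
#     print("credentials-looking text in VM notes:", name)
```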
So I'll pause here, and Patrick, I'd love to get some commentary on these techniques and other things you've seen, and then in the last, say, 10 to 15 minutes we'll roll through a little bit more on what to do about it. >> Yeah, no, I love it. I think this is pretty exhaustive. What I like about what you've done here is that we've seen double-digit increases in the number of organizations reporting actual breaches year over year for the last three years, and in the zeitgeist we often peg that on ransomware — which of course is incredibly important and very top of mind — but what I like about what you have here is that we're reminding the audience that the attack surface area and the vectors that matter have to be thought about more comprehensively than just ransomware scenarios. >> Yeah, right on. So let's build on this. When you think about your defense in depth, you've got multiple security controls that you've purchased and integrated, and you've got that redundancy if a control fails. But the reality is that these security tools aren't designed to work together, so when you run a pen test, what you want to ask yourself is: did you detect NodeZero, did you log NodeZero, did you alert on NodeZero, and did you stop NodeZero? When you think about how to do that, every single attacker command executed by NodeZero is available in an attacker log, so you can see — at the bottom here, a vCenter exploit, at that time, on that IP, and how it aligns to MITRE ATT&CK — and then go figure out whether your security tools caught it or not. That becomes very important in using the attacker's perspective to improve your defensive security controls. The way we've tried to make this easier — back to, you know, I still bleed green in many ways from my Splunk background — is that what our customers do is look at the attacker logs on one screen and look at what Splunk saw or missed on another screen, and use that to figure out where their logging blind spots are. Where that becomes really interesting is that we've actually built an integration into Splunk: there's a Splunk app you can download off Splunkbase, and you get all of the pen test results right there in the Splunk console. From that console you can see all the pen tests that were run and the issues that were found; you can look at a particular pen test, see all of the weaknesses identified and how they categorize out, and for each critical weakness you can click on it and — this is where the punch line comes in, so I'll pause the video here — for that weakness, these are the commands that were executed on these endpoints at this time, and then we'll actually query Splunk for that IP address, or anything containing that IP, and these are the source types that surfaced any sort of activity. What we try to do is help you, as quickly and efficiently as possible, identify the logging blind spots in your Splunk environment based on the attacker's perspective. As this video plays through, Patrick, I'd love to get your thoughts — having seen so many Splunk deployments and how effective they are — on how this is going to help elevate the effectiveness for all of your Splunk customers. >> Yeah, I'm super excited about this. I think these kinds of purpose-built integrations really move the needle for our customers. At the end of the day, when I think about the power of Splunk, I think about a product I was first introduced to 12 years ago. It was an on-prem piece of software — at the time it was sold on perpetual and term licenses — but what made it special was that it could eat data at a speed nothing else I'd ever seen could: you could ingest massively scalable amounts of data.
It did cool things like schema-on-read, which facilitated that; there was this language called SPL that you could nerd out about; and you went to a conference once a year and talked about all the cool things you were splunking. But now, as we think about the next phase of our growth, we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding, and as you look at the role of the CISO, it's mind-blowing to me the number of sources, services, and apps that have come into the CISO's span of influence in the last three years — things like infrastructure service-level visibility and application performance monitoring, stuff that just never made sense for the security team to have visibility into, at least not at the size and scale we're demanding today. That's different, and this is why it's so important that we have these joint, purpose-built integrations that really provide more prescription to our customers about how to walk that journey toward maturity — what does zero to one look like, what does one to two look like — because whereas ten years ago customers were happy with platforms, today they want integration, they want solutions, and they want to drive outcomes. I think this is a great example of how, together, we are stepping up to the evolving nature of the market, the ever-evolving nature of the threat landscape, and, I would say, the maturing needs of the customer in that environment. >> Yeah, for sure — especially if we all anticipate budget pressure over the next 18 months, due to the economy and elsewhere. While security budgets aren't going to get cut, I don't think they're going to grow as fast, and there's a lot more pressure on organizations to extract more value from their existing investments, as well as more value and more impact from their existing teams. So security effectiveness, fierce prioritization, and automation, I think, become the three key themes of security over the next 18 months. What I'll do very quickly is run through a few other use cases. Every host we identified in the pen test, we're able to score: this host allowed us to do something significant, therefore it's really critical and you should increase your logging here; these hosts down here, we couldn't really do anything with as an attacker, so if you do have to make trade-offs, you can reduce your logging resolution at the lower end in order to increase logging resolution at the upper end. So you've got that level of justification for where to increase or adjust your logging resolution. Another example: every host we've discovered as an attacker, we expose and you can export, and what you want to make sure of is that every host we found as an attacker is being ingested from a Splunk standpoint. A big issue I had as a CIO and a user of Splunk and other tools is that I had no idea whether there were rogue Raspberry Pis on the network, or whether a new box had been installed and whether Splunk was installed on it or not. So now you can quickly correlate which hosts we saw and how that reconciles with what you're actually logging.
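The reconciliation use case — is every host the attacker could see actually sending data to Splunk? — is a set difference once you have both lists. A minimal sketch follows: the discovered-host list would come from a pen test export, the Splunk side could come from a search such as `| metadata type=hosts index=*`, and the sample host names are made up.

```python
# Reconcile attacker-discovered hosts against hosts Splunk is actually ingesting.

def normalize(name: str) -> str:
    """Compare on the short, lower-cased hostname to avoid FQDN mismatches."""
    return name.strip().lower().split(".")[0]

def reconcile(discovered_hosts, splunk_hosts):
    discovered = {normalize(h) for h in discovered_hosts}
    logging = {normalize(h) for h in splunk_hosts}
    return {
        "not_logging": sorted(discovered - logging),     # reachable but silent: blind spots
        "unknown_to_test": sorted(logging - discovered), # logging but never reached by the test
    }

discovered_hosts = ["dc01.corp.local", "vmware-esx01", "printer01", "raspberrypi-lab"]
splunk_hosts = ["DC01", "vmware-esx01.corp.local", "web01"]

result = reconcile(discovered_hosts, splunk_hosts)
print("discovered but not logging to Splunk:", result["not_logging"])
print("logging but never seen by the pen test:", result["unknown_to_test"])
```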
The second-to-last use case on the Splunk integration side is that for every single problem we've found, we give multiple options for how to fix it, which becomes a great way to prioritize which fix actions to automate in your SOAR platform. What we want to get to eventually is automatically triggering SOAR actions to fix well-known problems — like automatically invalidating poor passwords and credentials — among a whole bunch of other things we could do. And then finally, if there is a well-known kill chain or attack path: one of the things I really wish I could have done when I was a Splunk customer was take this type of kill chain — one that actually shows a path to domain admin that I'm sincerely worried about — and use it as a glass table over which I could layer possible indicators of compromise. Now you've got a great starting point for glass tables and IOCs built on actual kill chains that we know are exploitable in your environment, and that becomes some super cool integration we've got on the roadmap between us and the Splunk security side of the house. So, what I'll leave with — actually, Patrick, before I do that, I'd love to get your comments, and then I'll leave with one last slide on this wartime security mindset, assuming there are no other questions. >> No, I love it. I think this glass-tables approach to how you visualize these workflows, and then use things like SOAR and orchestration and automation to operationalize them, is exactly where we see all of our customers going — and getting away from what I think has been an over-engineered approach to SOAR, where it has to be super technical and heavy, with Python programmers, and getting more to this visual view of workflow creation that really demystifies the power of automation and also democratizes it, so you don't have to have those programming languages on your resume in order to start moving the needle on workflow creation, policy enforcement, and ultimately driving automation coverage across more and more of the workflows your team is seeing. >> Yeah, I think that between being able to visualize the actual kill chain or attack path and — think of the SOAR market going toward this no-code, low-code, configurable SOAR versus coded SOAR — that's going to be a real game changer in giving security teams a force multiplier. So what I'll leave you with is this: the peacetime mindset of security is no longer sustainable. We really have to get out of checking the box and then waiting for the bad guys to show up to verify whether our security tools are working or not, and the reason we've got to do that quickly is that over a thousand companies withdrew from the Russian economy over the past nine months due to the war in Ukraine. You should expect every one of them to be punished by the Russians for leaving — punished from a cyber standpoint — and this is no longer about the financial extortion of ransomware; this is about punishing and destroying companies. You can punish any one of those companies by going after them directly or by going after their suppliers and their distributors, so suddenly your attack surface is no longer just your own enterprise — it's how you bring your goods to market and how you get your goods created. While I may not be able to disrupt your ability to harvest fruit, if I can get those trucks stuck at the border, I can increase spoilage and have the same effect.
What we should expect to see is this idea of cyber-enabled economic warfare: if we issue a sanction like banning Russians from traveling, there is a cyber-enabled counterpunch — corrupt and destroy the American Airlines database. That is below the threshold of war; it's not going to trigger the 82nd Airborne to be mobilized, but it achieves the right effect. Ban the sale of luxury goods? Disrupt the supply chain and create shortages. Ban Russian oil and gas? Attack refineries to cause a 10x spike in gas prices three days before the election. This is the future, and therefore I think we have to shift toward a wartime mindset: don't trust your security posture — verify it; see yourself through the eyes of the attacker; build that incident-response muscle memory; and drive better collaboration between the red and blue teams, your suppliers and distributors, and the information sharing organizations you have in place. What was really valuable for me as a Splunk customer was that when a router crashes, at that moment you don't know whether it's an IT administration problem or an attacker, and what you want is different people asking different questions of the same data: an integrated triage process that applies an IT lens and a security lens to that problem, and from there figures out whether it's an IT workflow to execute or a security incident to execute — all of that as an integrated team, integrated process, and integrated technology stack. That's something I cared very deeply about as both a Splunk customer and the Splunk CTO, and that I see time and time again across the board. So, Patrick, I'll leave you with the last word — the final three minutes here, and I don't see any open questions — so please take us home. >> Oh man, and to think we spent hours and hours prepping for this together — that last 40 seconds of your talk track is probably one of the things I'm most passionate about in this industry right now. I think NIST has done some really interesting work around building cyber-resilient organizations, and it has really helped the industry see that incidents can come from adverse conditions — stress or performance taxation in the infrastructure, service, or app layer — and they can come from malicious compromises, insider threats, and external threat actors. The more we look at this from the perspective of a broader cyber resilience mission, in a wartime mindset, the better off I think we're going to be. And when you talk about operationally minded ISACs, information sharing and intelligence sharing become so important in these wartime situations. We know not all ISACs are created equal, but we're also seeing a lot more ad hoc information sharing groups popping up. So look, I think you framed it really, really well. I love the concept of the wartime mindset, and I like the idea of applying a cyber resilience lens: if you add one more layer on top of that bottom-right stack, the IT lens and the security lens roll up to this concept of cyber resilience, and I think NIST has done some great work there for us. >> Yeah, you're spot on, and that is, I think, the next terrain you're going to see vendors try to get after — but one that I think Splunk is best positioned to win. >> Okay, that's a wrap for this special CUBE presentation. You heard all about the global expansion of Horizon3.ai's partner program, where their partners have a unique opportunity to take advantage of their NodeZero product,
International go to Market expansion North America channel Partnerships and just overall relationships with companies like Splunk to make things more comprehensive in this disruptive cyber security world we live in and hope you enjoyed this program all the videos are available on thecube.net as well as check out Horizon 3 dot AI for their pen test Automation and ultimately their defense system that they use for testing always the environment that you're in great Innovative product and I hope you enjoyed the program again I'm John Furrier host of the cube thanks for watching
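As a concrete illustration of the "automatically trigger SOAR actions" idea discussed in this segment, here is a minimal sketch of that routing logic in Python. It is not the actual Splunk SOAR or NodeZero API; the Finding shape and the force_password_reset/open_ticket helpers are hypothetical placeholders for whatever identity-provider and ticketing calls a real playbook would make.

```python
# Minimal sketch of SOAR-style auto-remediation: when an autonomous pen test
# reports a weak credential, automatically invalidate it; everything else
# goes to a human. The finding format and the remediation helpers are
# hypothetical placeholders, not real product APIs.

from dataclasses import dataclass


@dataclass
class Finding:
    finding_type: str   # e.g. "weak_credential", "open_smb_share"
    username: str
    detail: str


def force_password_reset(username: str) -> None:
    # Placeholder: a real playbook would call the identity provider
    # (Active Directory, Okta, etc.) to expire the password.
    print(f"[action] forcing password reset for {username}")


def open_ticket(finding: Finding) -> None:
    # Placeholder: anything we can't safely auto-fix goes to a human.
    print(f"[ticket] manual review needed: {finding.finding_type} ({finding.detail})")


def run_playbook(findings: list[Finding]) -> None:
    """Route each finding to an automatic fix or a ticket."""
    for f in findings:
        if f.finding_type == "weak_credential":
            force_password_reset(f.username)
        else:
            open_ticket(f)


if __name__ == "__main__":
    sample = [
        Finding("weak_credential", "svc_backup", "password found in common wordlist"),
        Finding("open_smb_share", "n/a", "share 'finance' readable by all users"),
    ]
    run_playbook(sample)
```

The point of the sketch is the routing, not the remediation itself: the "no-code, low-code" SOAR direction discussed above essentially lets teams express this same if/else logic visually instead of in a script.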

Published Date : Sep 28 2022

Rainer Richter, Horizon3.ai | Horizon3.ai Partner Program Expands Internationally


 

(light music) >> Hello, and welcome to theCUBE's special presentation with Horizon3.ai with Rainer Richter, Vice President of EMEA, Europe, Middle East and Africa, and Asia Pacific, APAC Horizon3.ai. Welcome to this special CUBE presentation. Thanks for joining us. >> Thank you for the invitation. >> So Horizon3.ai, driving global expansion, big international news with a partner-first approach. You guys are expanding internationally. Let's get into it. You guys are driving this new expanse partner program to new heights. Tell us about it. What are you seeing in the momentum? Why the expansion? What's all the news about? >> Well, I would say in international, we have, I would say a similar situation like in the US. There is a global shortage of well-educated penetration testers on the one hand side. On the other side, we have a raising demand of network and infrastructure security. And with our approach of an autonomous penetration testing, I believe we are totally on top of the game, especially as we have also now starting with an international instance. That means for example, if a customer in Europe is using our service, NodeZero, he will be connected to a NodeZero instance, which is located inside the European Union. And therefore, he doesn't have to worry about the conflict between the European GDPR regulations versus the US CLOUD Act. And I would say there, we have a total good package for our partners that they can provide differentiators to their customers. >> You know, we've had great conversations here on theCUBE with the CEO and the founder of the company around the leverage of the cloud and how successful that's been for the company. And obviously, I can just connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market here because you got great cloud scale with the security product you guys are having success with. Great leverage there, I'm seeing a lot of success there. What's the momentum on the channel partner program internationally? Why is it so important to you? Is it just the regional segmentation? Is it the economics? Why the momentum? >> Well, there are multiple issues. First of all, there is a raising demand in penetration testing. And don't forget that in international, we have a much higher level number or percentage in SMB and mid-market customers. So these customers, typically, most of them even didn't have a pen test done once a year. So for them, pen testing was just too expensive. Now with our offering together with our partners, we can provide different ways how customers could get an autonomous pen testing done more than once a year with even lower costs than they had with a traditional manual pen test, and that is because we have our Consulting PLUS package, which is for typically pen testers. They can go out and can do a much faster, much quicker pen test at many customers after each other. So they can do more pen test on a lower, more attractive price. On the other side, there are others or even the same one who are providing NodeZero as an MSSP service. So they can go after SMP customers saying, "Okay, you only have a couple of hundred IP addresses. No worries, we have the perfect package for you." And then you have, let's say the mid-market. Let's say the thousand and more employees, then they might even have an annual subscription. Very traditional, but for all of them, it's all the same. The customer or the service provider doesn't need a piece of hardware. 
They only need to install a small piece of a Docker container and that's it. And that makes it so smooth to go in and say, "Okay, Mr. Customer, we just put in this virtual attacker into your network, and that's it and all the rest is done." And within three clicks, they can act like a pen tester with 20 years of experience. >> And that's going to be very channel-friendly and partner-friendly, I can almost imagine. So I have to ask you, and thank you for calling out that breakdown and segmentation. That was good, that was very helpful for me to understand, but I want to follow up, if you don't mind. What type of partners are you seeing the most traction with and why? >> Well, I would say at the beginning, typically, you have the innovators, the early adapters, typically boutique-size of partners. They start because they are always looking for innovation. Those are the ones, they start in the beginning. So we have a wide range of partners having mostly even managed by the owner of the company. So they immediately understand, okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests and they can then add others ones. Or we have those ones who offered pen test services, but they did not have their own pen testers. So they had to go out on the open market and source pen testing experts to get the pen test at a particular customer done. And now with NodeZero, they're totally independent. They can go out and say, "Okay, Mr. Customer, here's the service. That's it, we turn it on. And within an hour, you are up and running totally." >> Yeah, and those pen tests are usually expensive and hard to do. Now it's right in line with the sales delivery. Pretty interesting for a partner. >> Absolutely, but on the other hand side, we are not killing the pen tester's business. We are providing with NodeZero, I would call something like the foundational work. The foundational work of having an ongoing penetration testing of the infrastructure, the operating system. And the pen testers by themselves, they can concentrate in the future on things like application pen testing, for example. So those services, which we are not touching. So we are not killing the pen tester market. We are just taking away the ongoing, let's say foundation work, call it that way. >> Yeah, yeah. That was one of my questions. I was going to ask is there's a lot of interest in this autonomous pen testing. One because it's expensive to do because those skills are required are in need and they're expensive. (chuckles) So you kind of cover the entry-level and the blockers that are in there. I've seen people say to me, "This pen test becomes a blocker for getting things done." So there's been a lot of interest in the autonomous pen testing and for organizations to have that posture. And it's an overseas issue too because now you have that ongoing thing. So can you explain that particular benefit for an organization to have that continuously verifying an organization's posture? >> Certainly. So I would say typically, you have to do your patches. You have to bring in new versions of operating systems, of different services, of operating systems of some components, and they are always bringing new vulnerabilities. The difference here is that with NodeZero, we are telling the customer or the partner the package. We're telling them which are the executable vulnerabilities because previously, they might have had a vulnerability scanner. 
So this vulnerability scanner brought up hundreds or even thousands of CVEs, but didn't say anything about which of them are vulnerable, really executable. And then you need an expert digging in one CVE after the other, finding out is it really executable, yes or no? And that is where you need highly-paid experts, which where we have a shortage. So with NodeZero now, we can say, "Okay, we tell you exactly which ones are the ones you should work on because those are the ones which are executable. We rank them accordingly to risk level, how easily they can be used." And then the good thing is converted or in difference to the traditional penetration test, they don't have to wait for a year for the next pen test to find out if the fixing was effective. They run just the next scan and say, "Yes, closed. Vulnerability is gone." >> The time is really valuable. And if you're doing any DevOps, cloud-native, you're always pushing new things. So pen test, ongoing pen testing is actually a benefit just in general as a kind of hygiene. So really, really interesting solution. Really bringing that global scale is going to be a new coverage area for us, for sure. I have to ask you, if you don't mind answering, what particular region are you focused on or plan to target for this next phase of growth? >> Well, at this moment, we are concentrating on the countries inside the European Union plus United Kingdom. And of course, logically, I'm based in the Frankfurt area. That means we cover more or less the countries just around. So it's like the so-called DACH region, Germany, Switzerland, Austria, plus the Netherlands. But we also already have partners in the Nordic, like in Finland and Sweden. So we have partners already in the UK and it's rapidly growing. So for example, we are now starting with some activities in Singapore and also in the Middle East area. Very important, depending on let's say, the way how to do business. Currently, we try to concentrate on those countries where we can have, let's say at least English as an accepted business language. >> Great, is there any particular region you're having the most success with right now? Sounds like European Union's kind of first wave. What's the most- >> Yes, that's the first. Definitely, that's the first wave. And now with also getting the European INSTANCE up and running, it's clearly our commitment also to the market saying, "Okay, we know there are certain dedicated requirements and we take care of this." And we are just launching, we are building up this one, the instance in the AWS service center here in Frankfurt. Also, with some dedicated hardware, internet, and a data center in Frankfurt, where we have with the DE-CIX, by the way, the highest internet interconnection bandwidth on the planet. So we have very short latency to wherever you are on the globe. >> That's a great call out benefit too. I was going to ask that. What are some of the benefits your partners are seeing in EMEA and Asia Pacific? >> Well, I would say, the benefits for them, it's clearly they can talk with customers and can offer customers penetration testing, which they before even didn't think about because penetration testing in a traditional way was simply too expensive for them, too complex, the preparation time was too long, they didn't have even have the capacity to support an external pen tester. Now with this service, you can go in and even say, "Mr. Customer, we can do a test with you in a couple of minutes. We have installed a Docker container. 
Within 10 minutes, we have the pen test started. That's it and then we just wait." And I would say we are seeing so many aha moments then. On the partner side, when they see NodeZero the first time working, it's like they say, "Wow, that is great." And then they walk out to customers and show it to their typically at the beginning, mostly the friendly customers like, "Wow, that's great, I need that." And I would say the feedback from the partners is that is a service where I do not have to evangelize the customer. Everybody understands penetration testing, I don't have to describe what it is. The customer understanding immediately, "Yes. Penetration testing, heard about that. I know I should do it, but too complex, too expensive." Now for example, as an MSSP service provided from one of our partners, it's getting easy. >> Yeah, and it's great benefit there. I mean, I got to say I'm a huge fan of what you guys are doing. I like this continuous automation. That's a major benefit to anyone doing DevOps or any kind of modern application development. This is just a godsend for them, this is really good. And like you said, the pen testers that are doing it, they were kind of coming down from their expertise to kind of do things that should have been automated. They get to focus on the bigger ticket items. That's a really big point. >> Exactly. So we free them, we free the pen testers for the higher level elements of the penetration testing segment, and that is typically the application testing, which is currently far away from being automated. >> Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation. I really appreciate it. Thank you very much. >> You're welcome. >> Okay, this is theCUBE special presentation, you know, checking on pen test automation, international expansion, Horizon3.ai. A really innovative solution. In our next segment, Chris Hill, Sector Head for Strategic Accounts, will discuss the power of Horizon3.ai and Splunk in action. You're watching theCUBE, the leader in high tech enterprise coverage. (steady music)
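The "run the next scan to confirm the fix" workflow Rainer describes boils down to diffing two sets of findings between runs. Here is a minimal sketch of that idea; the finding identifiers are invented for illustration and do not reflect real NodeZero output.

```python
# Minimal sketch of verifying remediation by diffing two scan runs: a
# vulnerability counts as "closed" if it appeared in the previous run but
# not in the latest one. Finding IDs below are made up for illustration.

def diff_scans(previous: set[str], latest: set[str]) -> dict[str, set[str]]:
    return {
        "closed": previous - latest,      # fixed since the last run
        "still_open": previous & latest,  # remediation not yet effective
        "new": latest - previous,         # introduced by recent changes
    }


if __name__ == "__main__":
    previous_run = {"CVE-2021-44228:host-db01", "weak-cred:svc_backup"}
    latest_run = {"weak-cred:svc_backup", "CVE-2022-22965:host-app03"}
    for bucket, findings in diff_scans(previous_run, latest_run).items():
        print(bucket, sorted(findings))
```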

Published Date : Sep 27 2022

Tom Gillis, VMware | Advanced Security Business Group


 

(bright music) >> Welcome back everyone. theCUBE's live coverage here. Day two of two sets, three days of theCUBE coverage here at VMware Explore. This is our 12th year covering VMware's annual conference, formerly called VMworld. I'm John Furrier, with Dave Vellante. We love seeing the progress, and we've got a great security guest: Tom Gillis, senior vice president and general manager of the Networking and Advanced Security Business Group at VMware. Great to see you. Thanks for coming on. >> Thanks for having me. >> Yeah, really happy we could have you on. >> I think this is my sixth edition on theCUBE. Do I get frequent flyer points or anything? >> Yeah. >> You first get the VIP badge. We'll make that happen. You can start getting credits. >> Okay, there we go. >> We won't interrupt you. Seriously, you got a great story in security here. The security story is kind of embedded everywhere, so it's not called out and blown up and talked about specifically on stage. It's kind of in all the narratives of the show this year. But you guys have an amazing security story. So let's just step back to set context. Tell us the security story for what's going on here at VMware and what that means to this supercloud, multi-cloud and ongoing innovation with VMware. >> Yeah, sure thing. So probably the first thing I'll point out is that security's not just built in at VMware. It's built differently. So, we're not just taking existing security controls and cutting and pasting them into our software. We can do things because of our platform, because of the virtualization layer, that you really can't do with other security tools. And where we're very, very focused is what we call lateral security, or East-West movement of an attacker. 'Cause frankly, that's the name of the game these days. Attackers, you've got to assume that they're already in your network. Already assume that they're there. Then how do we make it hard for them to get to the stuff that you really want? Which is the data that they're going after. And that's where we really shine.
And so it's that movement from your laptop to that database. That's where the damage is done and that's where VMware shines. >> So if they don't have the right to get to that database, they're not in. >> And it's not even just the right. So they're so clever and so sneaky that they'll steal a credential off your machine, go to another machine, steal a credential off of that. So, it's like they have the key to unlock each one of these doors. And we've gotten good enough where we can look at that lateral movement, even though it has a credential and a key, we're like wait a minute. That's not a real CIS Admin making a change. That's ransomware. And that's where you. >> You have to earn your way in. >> That's right. That's right. Yeah. >> And we're all kinds of configuration errors. But also some user problems. I've heard one story where there's so many passwords and username and passwords and systems that the bad guys scour, the dark web for passwords that have been exposed. >> Correct. >> And go test them against different accounts. Oh one hit over here. >> Correct. >> And people don't change their passwords all the time. >> Correct. >> That's a known vector. >> Just the idea that users are going to be perfect and never make a mistake. How long have we been doing this? Humans are the weakest link. So people are going to make mistakes. Attackers are going to be in. Here's another way of thinking about it. Remember log4j? Remember that whole fiasco? Remember that was at Christmas time. That was nine months ago. And whoever came up with that vulnerability, they basically had a skeleton key that could access every network on the planet. I don't know if a single customer that said, "Oh yeah, I wasn't impacted by log4j." So here's some organized entity had access to every network on the planet. What was the big breach? What was that movie script that got stolen? So there wasn't one, right? We haven't heard anything. So the point is, the goal of attackers is to get in and stay in. Imagine someone breaks into your house, steals your laptop and runs. That's a breach. Imagine someone breaks into your house and stays for nine months. It's untenable, in the real world, right? >> Right. >> We don't know in there, hiding in the closet. >> They're still in. >> They're watching everything. >> Hiding in your closet, exactly. >> Moving around, nibbling on your cookies. >> Drinking your beer. >> Yeah. >> So let's talk about how this translates into the new reality of cloud-native. Because now you hear about automated pentesting is a new hot thing right now. You got antivirus on data is hot within APIs, for instance. >> Yeah. >> API security. So all kinds of new hot areas. Cloud-native is very iterative. You know, you can't do a pentest every week. >> Right. >> You got to do it every second. >> So this is where it's going. It's not so much simulation. It's actually real testing. >> Right. Right. >> How do you view that? How does that fit into this? 'cause that seems like a good direction to me. >> Yeah. If it's right in, and you were talking to my buddy, Ahjay, earlier about what VMware can do to help our customers build cloud native applications with Tanzu. My team is focused on how do we secure those applications? So where VMware wants to be the best in the world is securing these applications from within. Looking at the individual piece parts and how they talk to each other and figuring out, wait a minute, that should never happen. By almost having an x-ray machine on the innards of the application. 
So we do it both for VMs and for container-based applications. Traditional apps are VM-based; modern apps are container-based. We have a slightly different insertion mechanism, but it's the same idea. For VMs, we do it with the hypervisor, with NSX. We see all the inner workings. In a container world, we have this thing called a service mesh that lets us look at each little snippet of code and how they talk to each other. And once you can see that stuff, then you can actually apply almost common-sense logic of like, wait a minute, this API gives back credit card numbers, and it gives back five an hour. All of a sudden, it's now asking for 20,000 or a million credit cards. That doesn't make any sense. The anomalies stick out like a sore thumb, if you can see them. At VMware, our unique focus in the infrastructure is that we can see each one of these little transactions and understand the conversation. That's what makes us so good at that East-West or lateral security. >> You don't belong in this room, get out. Or that's some weird call from an in-memory database, something over here. >> Exactly. Where other security solutions won't even see that. It's not that their algorithms aren't as good as ours, or better or worse. It's the access to the data. We see the inner plumbing of the app, and therefore we can protect the app from within.
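Tom's credit-card example is essentially rate-anomaly detection on API calls. Below is a minimal sketch of that idea in Python, assuming a simple per-endpoint baseline; the 10x threshold and the sample numbers are illustrative assumptions, not VMware product behavior.

```python
# Minimal sketch of the Layer 7 anomaly idea described above: flag an API
# endpoint whose request rate in the current window is wildly out of line
# with its historical baseline.

from statistics import mean


def is_anomalous(history: list[int], current: int, factor: float = 10.0) -> bool:
    """True if `current` calls-per-window exceeds `factor` times the baseline."""
    baseline = mean(history) if history else 0.0
    return baseline > 0 and current > factor * baseline


if __name__ == "__main__":
    calls_per_hour = [5, 4, 6, 5, 7]           # normal: ~5 credit-card lookups/hour
    print(is_anomalous(calls_per_hour, 6))      # False - within normal range
    print(is_anomalous(calls_per_hour, 20000))  # True - looks like exfiltration
```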
>> There's an isolation angle to this, which is that firewall, that we're putting everywhere. Not just that the perimeter, but we put it in each little piece of the server is running when it runs on one of these DPUs it's a different memory space. So even if an attacker gets to root in the OS, they it's very, very, never say never, but it's very difficult. >> So who has access to that resource? >> Pretty much just the infrastructure layer, the cloud provider. So it's Amazon, Google, Microsoft, and the enterprise. >> Application can't get in. >> Can't get in there. Cause you would've to literally bridge from one memory space to another. Never say never, but it would be very. >> But it hasn't earned the trust to get. >> It's more than barbwire. It's multiple walls. >> Yes. And it's like an air gap. It puts an air gap in the server itself so that if the server is compromised, it's not going to get into the network. Really powerful. >> What's the big thing that you're seeing with this supercloud transition. We're seeing multi-cloud and this new, not just SaaS hosted on the cloud. >> Yeah. >> You're seeing a much different dynamic of, combination of large scale CapEx, cloud-native, and then now cloud-native drills on premises and edge. Kind of changing what a cloud looks like if the cloud's on a cloud. >> Yeah. >> So we're the customer, I'm building on a cloud and I have on premise stuff. So, I'm getting scale CapEx relief from the hyperscalers. >> I think there's an important nuance on what you're talking about. Which is in the early days of the cloud customers. Remember those first skepticism? Oh, it'll never work. Oh, that's consumer grade. Oh, that's not really going to work. Oh some people realize. >> It's not secure. >> Yeah. It's not secure. >> That one's like, no, no, no it's secure. It works. And it's good. So then there was this sort of over rush. Let's put everything on the cloud. And I had a lot of customers that took VM based applications said, I'm going to move those onto the cloud. You got to take them all apart, put them on the cloud and put them all back together again. And little tiny details like changing an IP address. It's actually much harder than it looks. So my argument is, for existing workloads for VM based workloads, we are VMware. We're so good at running VM based workloads. And now we run them on anybody's cloud. So whether it's your east coast data center, your west coast data center, Amazon, Google, Microsoft, Alibaba, IBM keep going. We pretty much every. >> And the benefit of the customer is what. >> You can literally VMotion and just pick it up and move it from private to public, public to private, private to public, Back and forth. >> Remember when we called Vmotion BS, years ago? >> Yeah. Yeah. >> VMotion is powerful. >> We were very skeptical. We're like, that'll never happen. I mean we were. This supposed to be pat ourselves on the back. >> Well because alchemy. It seems like what you can't possibly do that. And now we do it across clouds. So it's not quite VMotion, but it's the same idea. You can just move these things over. I have one customer that had a production data center in the Ukraine. Things got super tense, super fast and they had to go from their private cloud data center in the Ukraine, to a public cloud data center out of harm's way. They did it over a weekend. 48 hours. If you've ever migrated a data center, that's usually six months. Right. And a lot of heartburn and a lot of angst. Boop. They just drag and dropped and moved it on over. 
That's the power of what we call the cloud operating model. And you can only do this when all your infrastructures defined in software. If you're relying on hardware, load balancers, hardware, firewalls, you can't move those. They're like a boat anchor. You're stuck with them. And by the way, they're really, really expensive. And by the way, they eat a lot of power. So that was an architecture from the 90's. In the cloud operating model your data center. And this comes back to what you were talking about is just racks and racks of X86 with these magic DPUs, or smart nics, to make any individual node go blisteringly fast and do all the functions that you used to do in network appliances. >> We just had Ahjay taking us to school, and everyone else to school on applications, middleware, abstraction layer. And Kit Culbert was also talking about this across cloud. We're talking supercloud, super pass. If this continues to happen, which we would think it will happen. What does the security posture look like? It feels to me, and again, this is your wheelhouse. If supercloud happens with this kind of past layer where there's vMotioning going on. All kinds of spanning applications and data across environments. >> Yeah. Assume there's an operating system working on behind the scenes. >> Right. >> What's the security posture in all this? >> Yeah. So remember my narrative about the bad guys are getting in and they're moving around and they're so sneaky that they're using legitimate pathways. The only way to stop that stuff, is you've got to understand it at what we call Layer 7. At the application layer. Trying to do security to the infrastructure layer. It was interesting 20 years ago, kind of less interesting 10 years ago. And now it's becoming irrelevant because the infrastructure is oftentimes not even visible. It's buried in some cloud provider. So Layer 7 understanding, application awareness, understanding the APIs and reading the content. That's the name of the game in security. That's what we've been focused on. Nothing to do with the infrastructure. >> And where's the progress bar on that paradigm. One to ten. Ten being everyone's doing it. >> Right now. Well, okay. So we as a vendor can do this today. All the stuff I talked about, reading APIs, understanding the individual services looking at, Hey, wait a minute this credit card anomalies, that's all shipping production code. Where is it in customer adoption life cycle? Early days 10%. So there's a whole lot of headroom for people to understand, Hey, I can put these controls in place. They're software based. They don't require appliances. It's Layer 7, so it has contextual awareness and it's works on every single cloud. >> We talked about the pandemic being an accelerator. It really was a catalyst to really rethink. Remember we used to talk about Pat as a security do over. He's like, yes, if it's the last thing I do, I'm going to fix security. Well, he decided to go try to fix Intel instead. >> He's getting some help from the government. >> But it seems like CISOs have totally rethought their security strategy. And at least in part, as a function of the pandemic. >> When I started at VMware four years ago, Pat sat me down in his office and he said to me what he said to you, which is like, "Tom," he said, "I feel like we have fundamentally changed servers. We fundamentally change storage. We fundamentally change networking. The last piece of the puzzle of security. I want you to go fundamentally change it." 
And I'll argue that the work that we're doing with this horizontal security, understanding the lateral movement. East- West inspection. It fundamentally changes how security works. It's got nothing to do with firewalls. It's got nothing to do with Endpoint. It's a unique capability that VMware is uniquely suited to deliver on. And so Pat, thanks for the mission. We delivered it and it's available now. >> Those WET web applications firewall for instance are around, I mean. But to your point, the perimeter's gone. >> Exactly. >> And so you got to get, there's no perimeter. so it's a surface area problem. >> Correct. And access. And entry. >> Correct. >> They're entering here easy from some manual error, or misconfiguration or bad password that shouldn't be there. They're in. >> Think about it this way. You put the front door of your house, you put a big strong door and a big lock. That's a firewall. Bad guys come in the window. >> And then the windows open. With a ladder. >> Oh my God. Cause it's hot, bad user behavior trumps good security every time. >> And then they move around room to room. We're the room to room people. We see each little piece of the thing. Wait, that shouldn't happen. Right. >> I want to get you a question that we've been seeing and maybe we're early on this or it might be just a false data point. A lot of CSOs and we're talking to are, and people in industry in the customer environment are looking at CISOs and CSOs, two roles. Chief information security officer, and then chief security officer. Amazon, actually Steven Schmidt is now CSO at Reinforce. They actually called that out. And the interesting point that he made, we had some other situations that verified this, is that physical security is now tied to online, to your point about the service area. If I get a password, I still got the keys to the physical goods too. >> Right. So physical security, whether it's warehouse for them or store or retail. Digital is coming in there. >> Yeah. So is there a CISO anymore? Is it just CSO? What's the role? Or are there two roles you see that evolving? Or is that just circumstance. >> I think it's just one. And I think that the stakes are incredibly high in security. Just look at the impact that these security attacks are having on. Companies get taken down. Equifax market cap was cut 80% with a security breach. So security's gone from being sort of a nuisance to being something that can impact your whole kind of business operation. And then there's a whole nother domain where politics get involved. It determines the fate of nations. I know that sounds grand, but it's true. And so companies care so much about it they're looking for one leader, one throat to choke. One person that's going to lead security in the virtual domain, in the physical domain, in the cyber domain, in the actual. >> I mean, you mention that, but I mean, you look at Ukraine. I mean that cyber is a component of that war. I mean, it's very clear. I mean, that's new. We've never seen. this. >> And in my opinion, the stuff that we see happening in the Ukraine is small potatoes compared to what could happen. >> Yeah. >> So the US, we have a policy of strategic deterrence. Where we develop some of the most sophisticated cyber weapons in the world. We don't use them. And we hope never to use them. Because our adversaries, who could do stuff like, I don't know, wipe out every bank account in North America. Or turn off the lights in New York City. 
They know that if they were to do something like that, we could do something back. >> This is the red line conversation I want to go there. So, I had this discussion with Robert Gates in 2016 and he said, "We have a lot more to lose." Which is really your point. >> So this brand. >> I agree that there's to have freedom and liberty, you got to strike back with divorce. And that's been our way to balance things out. But with cyber, the red line, people are already in banks. So they're are operating below the red line line. Red line meaning before we know you're in there. So do we move the red line down because, hey, Sony got hacked. The movie. Because they don't have their own militia. >> Yeah. >> If their were physical troops on the shores of LA breaking into the file cabinets. The government would've intervened. >> I agree with you that it creates tension for us in the US because our adversaries don't have the clear delineation between public and private sector. Here you're very, very clear if you're working for the government. Or you work for an private entity. There's no ambiguity on that. >> Collaboration, Tom, and the vendor community. I mean, we've seen efforts to try to. >> That's a good question. >> Monetize private data and private reports. >> So at VMware, I'm very proud of the security capabilities we've built. But we also partner with people that I think of as direct competitors. We've got firewall vendors and Endpoint vendors that we work with and integrate. And so coopetition is something that exists. It's hard. Because when you have these kind of competing. So, could we do more? Of course we probably could. But I do think we've done a fair amount of cooperation, data sharing, product integration, et cetera. And as the threats get worse, you'll probably see us continue to do more. >> And the government is going to trying to force that too. >> And the government also drives standards. So let's talk about crypto. Okay. So there's a new form of encryption coming out called processing quantum. >> Quantum. Quantum computers have the potential to crack any crypto cipher we have today. That's bad. Okay. That's not good at all because our whole system is built around these private communications. So the industry is having conversations about crypto agility. How can we put in place the ability to rapidly iterate the ciphers in encryption. So, when the day quantum becomes available, we can change them and stay ahead of these quantum people. >> Well, didn't NIST just put out a quantum proof algo that's being tested right now by the community? >> There's a lot of work around that. Correct. And NIST is taking the lead on this, but Google's working on it. VMware's working on it. We're very, very active in how do we keep ahead of the attackers and the bad guys? Because this quantum thing is a, it's an x-ray machine. It's like a dilithium crystal that can power a whole ship. It's a really, really, really powerful tool. >> Bad things will happen. >> Bad things could happen. >> Well, Tom, great to have you on the theCube. Thanks for coming on. Take the last minute to just give a plug for what's going on for you here at VMWorld this year, just VMware Explore this year. >> Yeah. We announced a bunch of exciting things. We announced enhancements to our NSX family, with our advanced load balancer. With our edge firewall. And they're all in service of one thing, which is helping our customers make their private cloud like the public cloud. So I like to say 0, 0, 0. 
If you are in the cloud operating model, you have zero proprietary appliances. You have zero tickets to launch a workload. You have zero network taps, and Zero Trust built into everything you do. And that's what we're working on, pushing that further and further. >> Tom Gillis, senior vice president and general manager of networking and advanced security at VMware. Thanks for coming on. We do appreciate it. >> Thanks for having us. >> Always great getting the security data. That's killer data, and security is one of the two ops areas that get the most conversations, around DevOps and cloud native. This is theCUBE bringing you all the action here in San Francisco for VMware Explore 2022. I'm John Furrier with Dave Vellante. Thanks for watching. (bright music)
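The "crypto agility" idea Tom raises earlier in the conversation, being able to rotate ciphers quickly once quantum-safe algorithms are standardized, is mostly about not hard-coding the algorithm choice. Here is a minimal sketch of that indirection; the ciphers themselves are trivial placeholders for illustration only, not real or post-quantum algorithms, and a real system would register vetted library implementations instead.

```python
# Minimal sketch of crypto agility: keep the cipher choice behind a
# registry/config lookup so an algorithm can be swapped (e.g., for a
# quantum-resistant one) without touching calling code. The "ciphers"
# below are toy placeholders, not real cryptography.

from typing import Callable, Dict

CipherFn = Callable[[bytes, bytes], bytes]

REGISTRY: Dict[str, CipherFn] = {}


def register(name: str):
    def wrap(fn: CipherFn) -> CipherFn:
        REGISTRY[name] = fn
        return fn
    return wrap


@register("xor-demo")  # stand-in for today's cipher
def xor_demo(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


@register("pqc-demo")  # stand-in for a future post-quantum cipher
def pqc_demo(key: bytes, data: bytes) -> bytes:
    raise NotImplementedError("swap in a vetted post-quantum implementation here")


def encrypt(algorithm: str, key: bytes, data: bytes) -> bytes:
    # Callers only name the algorithm; rotating ciphers is a config change.
    return REGISTRY[algorithm](key, data)


if __name__ == "__main__":
    print(encrypt("xor-demo", b"key", b"hello"))
```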

Published Date : Sep 1 2022

Kit Colbert, VMware | VMware Explore 2022


 

>> Welcome back, everyone, to theCUBE's live coverage here at VMware Explore 2022. We're here on the ground on the floor of Moscone. I'm John Furrier with Dave Vellante. We're with Kit Colbert, CTO of VMware, the star of the show, the headliner at supercloud.world, the event we had just a few weeks ago. Kit, great to see you. Super excited to chat with you. Thanks for coming on. >> Yeah, happy to be here, man. It's been a wild week. Tons of excitement. We are jazzed. We're jacked. >> Both, of course, jacked up and jazzed, ready to go. So you got on stage, loved your keynote, you know, very CTO-oriented, hit all your marks: cloud native, the vSphere 8 intro. Yep. More performance, more power. Yeah, more efficiency. And now the cloud native over the top. You shipped a white paper a few weeks ago, which we discussed at our Supercloud event, really laying out the narrative of cloud native. This is the priority for you. Is that true? Is that your only priority? What are the things going on right now for you that are your top priorities? >> Top priorities. So absolutely, at a high level, it's fleshing out this vision that we're talking about in terms of what we call cross-cloud services. Other people call it multi-cloud, you guys have supercloud, but the point is, I think what we see is that there are these different sort of vertical silos: the different public clouds, the on-prem data center, edge. And what we're looking at is trying to create a new type of cloud, something that's more horizontal in architecture. And I think this is something that we realized we've been doing at VMware for a while, and we gave it a name: we call it cross-cloud. But what's important is that while we do bring a lot of value there, we can't possibly do everything. This has to be an industry-wide movement. And so I think what we're really excited about is figuring out, okay, how do we actually build an architecture and a framework such that there are clear lines of responsibility: here's what one company does, here's what another one does, and make sure that there are clean APIs between them, basically an overall architecture and structure. So that's probably one of the high-level things that we're doing as an organization right now.
And given this focus on multi-cloud that I just mentioned and how it is the go forward focus for VMware, we wanted to evolve the conference to have that focus. And so I've been actually really pleased to see how many folks for it's their first time here. Right? They haven't been Tom worlds before and you know, this broader sort of conference that we're creating to, to apply to the support, more disciplines, different focus areas, you know, application development, developers, platform teams, you got cloud management things with aria, public cloud management, networking security, and user computing, all in addition to the core infrastructure bits. >>So John all week's been paying homage to, to Andy Grove talking about, let chaos rain and then rain in the chaos. Right. And so when you talk to customers, that chaos message cloud chaos, how is it resonating? Are they aware of that chaos? Are they saying, yes, we have cloud chaos or some saying, eh, yeah. It's okay. Everything's good. And they just maybe have some blind spots. What do >>You think? Yeah. I'm actually surprised at how strongly it's resonating. I mean, I think we knew that we were onto something, but people even love the specific term. They're like cloud chaos. I never thought about it that way, but you're like, you're absolutely right. It was a movie. It's a great, yeah. I know. Sounds like a thriller, but, but what we sort of, the picture we paint there about these silos across clouds, the duplication of technologies, duplication of teams and training, all this stuff. People realize that's where they're at. And it's one of those things where there's this headlong rush to cloud for good reasons. People wanted to be in the agility, but now they're dealing with some of that complexity that, that gets built up there and it absolutely is chaos. And while speed is great, you need to somehow balance that speed with control things like security compliance. These are sort of enterprise requirements that are sort of getting left out. And I think that's the realization, that's the sort of chaos that we're hitting on. >>It's almost like when in bus, in business school, you had the economic lines when break even hits, you know, cloud had a lot of great goodness to it. Yep. A lot of great value. It still does on the CapEx side, but as distributed computing architectures become reality. Yep. Private cloud instantiation of hybrid cloud operations. Now you've got edge and opening up all these new, new net new applications. Yep. What are you seeing there? And it's a question we've been asked some of the folks in the partner network, what are some of those new next gen apps that are gonna be enabled by, by this next wave edge specifically? Yeah. More performance, more application development, more software. Yeah. More faster, cheaper going on here. Kind of a Moore's law vibe there. What's next. >>Yeah. So, you know, when we look at edge, so, okay. Take today. Today. Edge is oftentimes highly customized software and hardware. It's not general purpose or to cloud technologies. And while edge is certainly gonna be limited. You can't just infinitely scale. Like you can in the cloud and the network bandwidth might be a little bit limited. You still wanna imagine it or manage it as if it were another cloud location, right. That like, I wanna be able to address it. Just like I addressed a certain availabilities done within AWS. 
I wanna be able to say the specific edge location at, you know, wherever somewhere here in San Francisco, let's say right now there's a few different things though. The first of which is that you got to manage at scale. Cause you don't have with cloud, you got a small number of very large locations with edge. >>You got a large number of very small locations. And so it's the scale is inverted there. So what this means is that you probably can't exactly specify which edge you want to go to. What instead you wanna say is more relational. Like I've got an IOT device out there. I want my app to be in data to be near it. And the system needs to figure out, okay, where do I put that thing? And how do I get it near it? And there may be some different constraints. You have cost security, privacy, it may be your edge or maybe telco edge location, you know, one, one of these sorts of things. Right? And so I think where we're going there is to enable the movement of applications and data to the right place. And this again goes back to the whole cross cloud architecture, right? >>You don't wanna be limited in terms of where you put an app, you wanna have that flexibility. This is the whole, you know, we use the term cloud smart. Right. And that's what it means. It's like put the, the app where it needs to be sort of the right tool for the right job. And so I think the innovation though, it's gonna be huge. You're gonna see new application architectures that the app can be placed near a user near a device near like a, an iPhone or near an IOT device, like a video camera. And the way that you manage that is gonna be much kind of infrastructure is code base. Yeah. So I think there's huge possibilities there. And it's really amazing to see just real quick on the telco side, what's happening there as well. The move to 5g, the move to open ran telco is now starting to adopt these data center and cloud technologies kinda standard building blocks that we use now out at the edge. So I think, you know, the amount of innovation that we're gonna see, >>It's really the first time on telco, they actually have a viable, scalable opportunity to, to put real gear data center, liked capabilities yep. At a location for specific purpose. Yeah. The edge function. >>Yeah. And well, and what we, without >>Building a, a monster >>Facility. Exactly. Yeah. It's like the base of a cell tower or something telephone closet. But what we've been able to do is improve these general purpose technologies. Like you look at vSphere in our hypervisor today. We are great at real time workloads, right? Like as a matter of fact, you look at performance on vSphere versus bare metal. Oftentimes an app runs faster on vSphere now because of all the efficiency and scale and so forth we can bring. So it means that these telecom applications that are very latency sensitive can now run fun on there. But Hey, guess what? Once you have a general purpose server that can run some of the telecom apps, well, Hey, you got extra space to run other apps. Maybe you could sell that space to customers or partners. And you know, then you have this new architecture >>Is the dev skill, a, a barrier for the, for the telcos, where are we at >>With that? It, it, it is. I think the barriers are really, how do you provide, I dunno if it's a skill set. I mean, there's probably some skill set aspects. I think in my mind, it's more about giving them the APIs to get access to that. 
Like, as I said, you're not gonna have developers knowing, okay, here are the specific geographic locations of all the cell towers in San Francisco. Instead, what you're gonna say, again, is, "I need to be near this thing," and the system uses geolocation to figure it out: just put it in the right place, I don't really care. Right? So again, I think it's an evolution of management, an evolution of the APIs that developers use to access it. Like today, I'm gonna say, okay, I know my app needs to be on the East Coast, so I can use us-east-1. I know the specific AZs at a cloud level. That makes sense at a cloud level; at an edge level, it doesn't. You're not gonna know, okay, the specific cross streets or whatever; you've gotta let the system figure that >> out. Kit, I know you gotta go, time's tight, so real quick: you got a session here on Web3. Yeah. theCUBE's got, you know, the CubeVerse coming soon; we might be heavy into it, powered by our token. We had all kinds of stuff going on. Yep. You saw the preview a couple years ago we did with the Cuban. Anyway, you did a session on Web3 and VMware's role in it. Real quick, what was that about? Yeah, what's the purpose? >> What's the direction? That was a fascinating conversation. So I was talking about Web3, talking about why enterprises haven't really started even to scratch the surface of the potential of Web3. So part of it was like, okay, what is Web3? It's a buzzword. We talked through that. We talked through the use of blockchain and how that sits at the core of a lot of Web3. We talked about the use of cryptocurrency and how that makes sense. We talked about the continuing consumerization of IT. We've seen it with end-user devices, and we may well see it with some of the Web3 changes around ownership, individual ownership of data, of assets, et cetera. That's gonna have a downstream impact on enterprises, how they go to market, their commercial models. So it was a fascinating discussion that unfortunately is hard to summarize, but it got into a lot of the nuances of this. >> Are you bullish on >> it? Very bullish, a hundred percent. Like, I think blockchain is a hugely enabling technology, and not from a cryptocurrency standpoint, put that aside. All the enterprise use cases: we have customers like Broadridge Financial today leveraging VMware Blockchain, doing a hundred billion in transactions a day in the repo market. >> You think DeFi is booming? >> DeFi. So I think we're just starting to get there. But what you find is, oftentimes these trends start on the consumer side and then all of a sudden they surprise enterprises. >> They call it TradFi, traditional finance, >> versus, okay, >> any >> other way around? No, no, no. But what I'm saying is that these consumer trends will start to impact enterprises, and enterprises need to be ready now, or start preparing now, for what's coming. >> And what's the preparation for that? Just education, learning? Yeah. >> Education, learning, looking at blockchain use cases, looking at what this will enable consumers to do that they couldn't do before. There is gonna be a democratization of access to data. You're still gonna wanna have gatekeepers; you're still gonna wanna have enterprises or services that add value on top of that, but it's gonna be a bit more of an open ecosystem now, and that's gonna change some of the market dynamics in subtle ways. >> Okay. So we got one minute left.
I want to ask you, what's your impression of the Supercloud event we had? You were headlining it, and you guys were a big part of bringing a large group of great people together. Are you happy with the outcome? What do you think's next for it? >> Absolutely. I was super excited to see how much reception and engagement it got from across the industry, right? So many different industry participants, so many different customers, partners, et cetera, viewing it online, and I've had a lot of conversations here at Explore already. As you know, VMware put out a white paper, our point of view on what a multi-cloud service is and what the taxonomy of those services is. Again, as I mentioned before, we need to get, as an industry, to a place where we have alignment about this overall architecture to enable interoperability. And I think that's really the key thing. If we're gonna make this industry architectural shift, which is what I see coming, this is what we've got to do. >> And you're gonna be jumping all in with this and helping out if we need you? >> Hundred percent. All right. >> All in. I really love your transparency on your white paper. Check out the white paper online on vmware.com. It's the cross-cloud, cloud native one. I call it the mission statement. It's not a Jerry Maguire memo, it's more than that. It's the direction of cloud native. Yep. And multi-cloud. Thanks for coming on, and thanks for doing that too. >> No, of course. And thanks for having me. Thanks. Love the discussion. >> Okay. More live coverage here at VMware Explore after the short break.
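Colbert's point above about edge placement being relational (put the app near a device, subject to cost, security, and sovereignty constraints) rather than naming a specific availability zone can be made concrete with a small sketch. The code below is purely illustrative and is not a VMware or cloud-provider API: the EdgeSite and PlacementRequest types, the site names, and the cost figures are all invented for this example, which simply filters candidate sites by hard constraints and then picks the nearest one.

```python
# Illustrative only: a toy placement scheduler that resolves relational
# constraints ("near this device", "stay in this jurisdiction", "cap the cost")
# into a concrete edge site. None of these types correspond to a real API.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import List, Optional

@dataclass
class EdgeSite:
    name: str
    lat: float
    lon: float
    jurisdiction: str      # e.g. "US", "EU"
    hourly_cost: float     # assumed cost units
    provider: str          # "own-edge" or "telco-edge"; more constraints could filter on this too

@dataclass
class PlacementRequest:
    near_lat: float        # location of the device the app must be near
    near_lon: float
    max_cost: float
    jurisdiction: Optional[str] = None

def _distance_km(lat1, lon1, lat2, lon2) -> float:
    # Haversine great-circle distance between two coordinates.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def place(req: PlacementRequest, sites: List[EdgeSite]) -> Optional[EdgeSite]:
    # Filter by hard constraints, then pick the closest remaining site.
    candidates = [
        s for s in sites
        if s.hourly_cost <= req.max_cost
        and (req.jurisdiction is None or s.jurisdiction == req.jurisdiction)
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda s: _distance_km(req.near_lat, req.near_lon, s.lat, s.lon))

if __name__ == "__main__":
    sites = [
        EdgeSite("sf-soma-cell-tower", 37.78, -122.40, "US", 0.90, "telco-edge"),
        EdgeSite("oakland-colo", 37.80, -122.27, "US", 0.40, "own-edge"),
        EdgeSite("frankfurt-edge", 50.11, 8.68, "EU", 0.50, "own-edge"),
    ]
    # "Put my app near this camera in San Francisco, keep cost under 0.8/hr, stay in the US."
    req = PlacementRequest(near_lat=37.77, near_lon=-122.42, max_cost=0.8, jurisdiction="US")
    print(place(req, sites).name)   # -> oakland-colo
```

The point of the sketch is the shape of the request, not the scoring: the developer names a target and constraints, and the scheduler, not the developer, resolves them to a site.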

Published Date : Aug 31 2022

SUMMARY :

Kit Colbert, CTO of VMware, joins theCUBE at VMware Explore 2022 to talk about the shift from VMworld to Explore and VMware's multi-cloud, cross-cloud focus. He discusses how the "cloud chaos" message is resonating with customers, how edge computing changes application placement and opens opportunities for telcos adopting Open RAN, his session on Web3 and enterprise blockchain, and the recent Supercloud event and VMware's multi-cloud white paper.


Natasha | DigitalBits VIP Gala Dinner Monaco


 

(upbeat music) >> Hello, everyone. Welcome back to theCUBE's extended coverage. I'm John Furrier, host of theCUBE. We are here in Monaco at the Yacht Club, part of the VIP Gala with Prince Albert, DigitalBits, and theCUBE; theCUBE and Prince Albert celebrating Monaco leaning into crypto. I'm here with Natasha Mahfar, who's our guest. She just came on theCUBE. Great story. Great to see you. Thanks for coming on. >> Thank you so much for having me. >> Tell the folks what you do real quick. >> Sure. So I actually started my career in Silicon Valley, like you have. And I had the idea of creating a startup in mental health that was voice-based only. So it was peer-to-peer support groups via voice. So I created this startup, pretended to be a student at Stanford and built out a whole team, and unfortunately, at that time, no one was in the space of mental health and voice. Now, as you know, it's a $30 billion industry that's one of the biggest in Silicon Valley. So my career really started from there. And due to that startup, I got involved in the World XR Forum. Now, the World XR Forum is kind of like a mini Davos, but a little bit more exclusive, where we host entrepreneurs, people in blockchain and crypto, and we have a five-day event covering all sorts of topics. So- >> When you host them, you mean like host them and they hang out and sleep over? Is it a hotel? Is it an event? A workshop? >> There are workshops. We arrange hotels. We pretty much arrange everything that there is. >> It's a group get-together. >> It's a group get-together. Pretty much like Davos. >> And so Natasha, I wanted to talk to you about what we're passionate about, which is that theCUBE is bringing people up to have a voice and giving them a voice. Give people a platform. You don't have to be famous. If you have something to say and share, we found that right now in this environment with media, we go out to an event and we stream as many stories as we can, but we also have the virtual version of our studio. And I can tell you, I've found that internationally now, as we bring people together, there are so many great stories. >> Absolutely. >> Out there that need to be told. And the bottleneck isn't the media, it's the fact that it's open now. >> Yes. >> So why aren't the stories coming out? So our mission is to get the stories. >> Wow. >> Scale stories. The more stories that are scaled, the more people can feel it. More people are impacted by it, and it changes the world. It gives people serendipity with data, 'cause, you know, you shared some data about what you're working on. >> Yeah, of course. It's all about data these days. And the fact that you're doing it so openly is great, because there is a need for that today. >> What do you see right now in the market for media? I mean, we've got emerging markets, a lot of misinformation. Trust is a big problem. >> Right. >> Bullying, harassing, smear campaigns. What's news, what's not news. I mean, how do you get your news? How do people figure out what's going on? >> No, absolutely. And this is such a pure format and a way of doing it. How did you come up with the idea, and how did you start? >> Well, I started... I realized after Web 2.0, when social media started taking over and ruining the democratization of blogging and podcasting, which I started in 2004 with one of the first podcasts in Silicon Valley. >> Wow. >> I saw the network effect of that. I saw the value that people had when normal people, they call it user-generated content, shared information.
And I discovered something amazing: that a nobody like me can have a really top podcast. >> Well, you're definitely not a nobody, but... >> Well, I was back then. And nobody knew me back then. But what it is, is that if you put your voice out there, people will connect to it. And if you have the ability to bring other people in, you start to see a social dynamic. And what social media ruined, Facebook, Twitter, not so much Twitter 'cause Twitter's more smeary but it still has the open API, LinkedIn, they're all terrible. They're all walled gardens. They don't really bring people together, so I think that stalled things for almost eight or nine years. Now, with crypto and decentralization, you start to see the same thing come back: democratization, level the playing field, remove the middleman, disintermediate the bottlenecks. So with media, we found that live streaming and going to events was what the community wants. And then interviewing people and getting their ideas out there. Not promotional, not getting paid to say stuff. Yeah, they get the plug in for the company that they're working on; that's good for everybody. But it's more about sharing something that you're passionate about, data. And it works. And people like it. And we've been doing it for 12 years, and it creates a great brand of openness, community, and network effect. So we scaled up the brand to be... >> And it seems like you're international now. I mean, we're sitting in Monte Carlo, so I don't think it gets better than that. >> Well, in 2016, we started going international. 2017, we started doing stuff in Europe. 2018, we did the crypto events in the Middle East. And we also did London, a lot of different events. We had B2B enterprise and crypto blooming. 2019, we were like, "Let's go global with staff and whatnot." >> Wow. >> And the pandemic hits. >> I know. >> And that really kind of allowed us to pivot and turned us into a virtual hybrid. And that's why we're into the metaverse, as we see the value of a physical face-to-face event where the intimacy's there, but why aren't my friends connected first party? >> Right. How much would you say the company has grown from the time that you kind of pivoted? >> Well, we've grown in a different direction, with new capabilities, because the old way is over. >> Right. >> Every event right now, this event here, is in person. People are talking. They get connections. But every person that's connecting has a social graph behind them that's online too, and immediately available. And with Instagram, direct messaging, Telegram, Signal, it's all there. >> It's brilliant. Honestly, it was a brilliant idea and a brilliant pivot. >> Thank you for interviewing me. >> Yeah, of course. (Natasha and John laugh) >> Any other questions? >> That should do it. >> Okay. Are you going to have fun tonight? >> Absolutely. >> What is your take on the Monaco scene here? What's it like? >> You know, I think it's a really interesting scene. I think there's a lot of potential, because this is such an international place, so it draws a very eclectic crowd, and I think there's a lot that could be done here. And you have a lot of people from Europe that are starting to get into this whole crypto space, leaving kind of the traditional banks and finance behind. So I think the potential is very strong. >> Very progressive. Well, Natasha, thank you for sharing. >> Thank you so much. >> Here on theCUBE.
We're the extended-edition CUBE here in Monaco with Prince Albert and DigitalBits' Al Burgio; a great market here for them. And just an amazing time. And thanks for watching. Natasha, thanks for coming on. Thanks for watching theCUBE. We'll be back with more after this break. (upbeat music)

Published Date : Aug 22 2022

SUMMARY :

Natasha Mahfar joins John Furrier at the DigitalBits VIP Gala at the Monaco Yacht Club. She recounts starting a voice-based mental health startup in Silicon Valley and her work with the World XR Forum, and the two discuss theCUBE's mission of giving people a platform, trust and misinformation in media, the pivot to hybrid and virtual events during the pandemic, and Monaco's growing embrace of crypto.


Ed Casmer, Cloud Storage Security | CUBE Conversation


 

(upbeat music) >> Hello, and welcome to "theCUBE" conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." We've got a great security conversation with Ed Casmer, who's the founder and CEO of Cloud Storage Security. Great Cloud background there: Cloud security, Cloud storage. Welcome to the "theCUBE Conversation," Ed. Thanks for coming on. >> Thank you very much for having me. >> I've got FOMO on that background. You've got the nice look there. Let's get into the storage blind spot conversation around Cloud security. Obviously, re:Inforce came up a ton; you heard a lot about encryption and automated reasoning, but ransomware was still hot. All these things continue to be issues in security, but they all come back to data and storage, right? So this is a big part of it. Tell us a little bit about how you guys came about, the origination story. What is the company all about? >> Sure, so, we're a pandemic story. We started in February right before the pandemic really hit, and we've survived and thrived because it is such a critical thing. If you look at the growth that's happening in storage right now, we saw this at re:Inforce. We saw it even at a recent AWS Storage Day. S3, in particular, houses over 200 trillion objects. If you look just 10 years ago, in 2012, Amazon touted how they were housing one trillion objects, so in a 10-year period, it's grown to 200 trillion, and really most of that has happened in the last three or four years; so the pandemic and the shift in the ability and the technologies to process data better have really driven the need and driven the Cloud growth. >> I want to get into some of the issues around storage. Obviously, the trend on S3, look at what they've done. I mean, I saw Mai-Lan at Storage Day. We've interviewed her. She's amazing. Just EC2 and S3, the core pistons of AWS; obviously, the silicon's getting better, and the IaaS layer is just getting so much more innovation. You've got more performance, abstraction layers, the PaaS is emerging, Cloud operations on premise now with hybrid is becoming a steady state, and if you look at all the action, it's all these hyper-converged kinds of conversations, but it's not hyper-converged in a box, it's Cloud storage. So there's a lot of activity around storage in the Cloud. Why is that? >> Well, because companies are defined by their data and, if a company's data is growing, the company itself is growing. If it's not growing, they are stagnant and in trouble, and so, what's been happening now, and you see it with the move to Cloud especially over the on-prem storage sources, is people are starting to put more data to work and they're figuring out how to get the value out of it. A recent analyst made a statement that if the Fortune 1000 could just share and expose 10% more of their data, they'd have net revenue increases of $65 million. So it's just the ability to put that data to work, and it's so much more capable in the Cloud than it has been on-prem to this point. >> It's interesting, data portability is being discussed, data access, who gets access, do you move compute to the data? Do you move data around? And all these conversations are kind of around access and security. It's one of the big vulnerabilities around data, whether it's an S3 bucket with a manual configuration error, or a tool that needs credentials. I mean, how do you manage all this stuff?
This is really where a rethink kind of comes in, so, can you share how you guys are surviving and thriving in that kind of crazy world that we're in? >> Yeah, absolutely. So, data has been the critical piece, and moving to the Cloud has really been this notion of, how do I protect my access into the Cloud? How do I protect who's got it? How do I think about the networking aspects, my east-west traffic after I've blocked them from coming in? But no one's thinking about the data itself, and ultimately, you want to make that data very safe for the consumers of the data. They have an expectation, and almost a demand, that the data they consume is safe, and so companies are starting to have to think about that. They haven't thought about it. It has been a blind spot; you mentioned that before. In regards to "I am protecting my management plane," we use posture management tools, we use automated services; if you're not automating, then you're struggling in the Cloud. But when it comes to the data, everyone thinks, "Oh, I've blocked access. I've used firewalls. I've used policies on the data," but they don't think about the data itself. It is that packet that you talked about that moves around to all the different consumers and the workflows, and if you're not ensuring that that data is safe, then you're in big trouble, and we've seen it over and over again. >> I mean, it's definitely a hot category and it's changing a lot, so I love this conversation because it's a primary one: primary and secondary, data and storage. Kind of a good joke there, but all kidding aside, it's hard; you've got data lineage, and tracing is a big issue right now. We're seeing companies come out there on kind of an observability tangent around it. The focus on this is huge. I'm curious, what was the origination story? What got you into the business? Was it like, were you having a problem with this? Did you see an opportunity? What was the focus when the company was founded? >> It's definitely to solve the problems that customers are facing. What's been very interesting is that they're out there needing this. They're needing to ensure their data is safe. As the whole story goes, they're putting it to work more; we're seeing this. I thought it was a really interesting series, one of your last series about data as code, and you saw all the different technologies that are processing and managing that data and that companies are leveraging today, but still, once that data is ready and it's consumed by someone, it's causing real havoc if it's not either protected from being exposed or safe to use and consume, and so that's been the biggest thing. So we saw a niche. We started with this notion of Cloud storage being object storage, and there was nothing there protecting that. Amazon has the notion of access, and that is how they protect the data today, but not the packets themselves, not the underlying data, and so we created the solution to say, "Okay, we're going to ensure that that data is clean. We're also going to ensure that you have awareness of what that data is, the types of files you have out in the Cloud, wherever they may be, especially as they drift outside of the normal platforms that you're used to seeing that data in." >> It's interesting that people were storing data in data lakes. Oh yeah, just store it, we might need it. And then it became a data swamp. That's kind of like, go back six, seven years ago; that was the conversation. Now, the conversation is: I need data. It's got to be clean. It's got to feed the machine learning.
This is going to be a critical aspect of the business model for the developers who are building the apps, hence the data-as-code reference, which we've focused on, but then you say, "Okay, great. Does this increase our surface area for potential hackers?" So there are all kinds of things that open up when we start doing cool, innovative things like that. So, what are some of the areas that you see that your tech solves, around some of the blind spots with object storage? What are some of the core things that you guys are seeing that you're solving? >> So, it's a couple of things. Right now, still the biggest thing you see in the news is configuration issues where people are losing their data or accidentally opening it up to writes. That's the worst-case scenario. Reads are a bad thing too, but if you open up writes, and we saw this with a major API vendor in the last couple of years, they accidentally opened writes to their buckets. Hackers found it immediately and put malicious code into their APIs, which were then downloaded and consumed by many, many of their customers, so it is happening out there. So the notion of ensuring configuration is good and proper, ensuring that data has not been augmented inappropriately, and that it is safe for consumption is where we started, and we created a lightweight, highly scalable solution. At this point, we've scanned billions of files for customers, and petabytes of data, and we're seeing that it's such a critical piece to make sure that that data's safe. The big thing, and you brought this up as well, is that they're getting data from so many different sources now. It's not just data that they generate. You see one centralized company taking in data from numerous sources, consolidating it, creating new value on top of it, and then releasing that, and the question is, do you trust those sources or not? And even if you do, they may not be safe. >> We had an event around supercloud; it's a topic we brought up to bring attention to the complexity of hybrid, which is on-premise, which is essentially Cloud operations. And the successful people that are doing things on the software side are essentially abstracting up the benefits of the infrastructure as a service from the hyperscalers, AWS, right, which is great. Then they innovate on top, so they have to abstract that too; storage is a key component of where we see the innovations going. How do you see your tech connecting with that trend that's coming, where everyone wants infrastructure as code? I mean, that's not new; that's the goal, and it's getting better every day, but DevOps, the developers, are driving the operations, and security teams have to keep pace. So on the policy side we're seeing some cool things going on that abstract up from, say, storage and compute, but then those are being put to use as well, so you've got this new wave coming around the corner. What's your reaction to that? What's your vision on that? How do you see that evolving? >> I think it's great, actually. I think that the biggest problem that you have, as someone who is helping them with that process, is to make sure you don't slow it down. So, just like Cloud at scale, you must automate, you must provide different mechanisms that fit into workflows, that allow them to do it just how they want to do it, and don't slow them down.
Don't hold them back, and so, we've come up with different measures to provide pretty much a fit for any workflow that any customer has come to us with so far. We do data this way; I want you to plug in right here; can you do that? And so it's really about being able to plug in where you need to be, and don't slow 'em down. That's what we've found so far. >> Oh yeah, I mean, exactly: you don't want to solve complexity with more complexity. That's the killer problem right now, so take me through the use case. Can you just walk me through how you guys engage with customers, how they consume your service, how they deploy it? You've got some deployment scenarios. Can you talk about how you guys fit in and what's different about what you guys do? >> Sure. So, what we're seeing is, and I'll go back to this data coming from numerous sources, we see different agencies, different enterprises taking data in, and maybe their solution is intelligence on top of data, so they're taking these data sets in, whether it's topographical information or investing-type information. Then they process it and they scan it and they distribute it out to others. So we see that happening as a big common piece through data ingestion pipelines; that's where these folks are getting most of their data. The other is where the data itself, the document or the document set, is the actual critical piece that gets moved around, and we see that in pharmaceutical studies, we see it in the mortgage industry and FinTech and healthcare. And so, anywhere that... let's just take a very simple example: I have to apply for insurance. I'm going to upload my Social Security information. I'm going to upload a driver's license, whatever it happens to be. I want to, one, know which of my information is personally identifiable, so I want to be able to classify that data; but because you're trusting, or because you're taking data from, untrusted sources, you then have to consider whether or not it's safe for your own folks to use, and then also for the downstream users as well. >> It's interesting, in the security world, we hear zero trust, and then we hear supply chain, software supply chains, where we've got to trust everybody, so you've got kind of two things going on. You've got the hardware side, kind of like all the infrastructure guys, saying, "Don't trust anything, 'cause we have a zero trust model," but as you start getting into the software side, it's like trust is critical; with containers and Cloud native services, trust is critical. You guys are kind of on that balance where you're saying, "Hey, I want data to come in. We're going to look at it. We're going to make sure it's clean." That's the value here. Is that what I'm hearing? You're taking it and you're saying, "Okay, we'll ingest it, and during the ingestion process, we'll classify it. We'll do some things to it with our tech and put it in a position to be used properly." Is that right? >> That's exactly right. That's a great summary, but ultimately, if you're taking data in, you want to ensure it's safe for everyone else to use, and there are a few ways to do it. Safety doesn't just mean whether it's clean or not, whether there's malicious content or not. It means that you have complete coverage and control and awareness over all of your data, and so, I know where it came from.
I know whether it's clean and I know what kind of data is inside of it, and we see that the cleanliness factor is so critical in the workflow, but we see the classification expand outside of that, because if your data drifts outside of what your standard workflow was, that's when you have concerns: why is PII information over here? And that's what you have to stay on top of, just like AWS's control plane. You have to manage it all. You have to make sure you know what services have all of a sudden been exposed publicly or not, or maybe something's been taken over or not, and you control that. You have to do that with your data as well. >> So how do you guys fit into the security posture? Say it's a large company that might want to implement this right away. Sounds like it's right in line with what developers want and what people want. It's easy to implement, from what I see. It's about 10, 15, 20 minutes to get up and running. It's not hard. It's not a heavy lift to get in. How do you guys fit in once you get operationalized, when you're successful? >> It's a lightweight, highly scalable, serverless solution. It's built on Fargate containers and it goes in very easily, and then we offer either native integrations through S3 directly, or we offer APIs, and the APIs are what a lot of our customers who want inline, real-time scanning leverage. We're also looking at offering the actual proxy aspects. So for those folks who use the S3 APIs that are native to AWS, the puts and gets, we can actually serve as the endpoint for the put and get, and when they retrieve the file or place the file in, we'll scan it on access as well. So it's not just a one-time, data-at-rest scan; it can be data in motion, as you're retrieving the information, as well. >> We were talking with our friends the other day about companies like Datadog. This is the model people want: they want to come in, and developers are driving a lot of the usage and operational practice. So I have to ask you, this fits kind of right in there, but you also have the corporate governance policy police that want to make sure that things are covered, so how do you balance that? Because that's an important part of this as well. >> Yeah, we're really flexible for the different ways they want to consume and interact with it. But then also, that is such a critical piece. So many of our customers... we probably have a 50/50 breakdown of those inside the US versus those outside the US, and so you have those in California with their information protection act, you have GDPR in Europe, and you have Asia having their own policies as well, and the way we solve for that is we scan close to the data and we scan in the customer's account, so we don't require them to lose chain of custody and send data outside of the account. That is so critical to that aspect. And then we don't ask them to transfer it outside of the region, so that's another critical piece: data residency has to be involved as part of that compliance conversation. >> How much does Cloud enable you to do this, that you couldn't really do before? I mean, this really shows the advantage of natively being in the Cloud to take advantage of the IaaS-to-SaaS components to solve these problems. Share your thoughts on how this is possible. If there were no Cloud, what would you do? >> It really makes it a piece of cake.
As silly as that sounds, when we deploy our solution, we provide a management console for them that runs inside their own accounts. So again, no metadata or anything has to come out of it, and it's all push-button click, and because the Cloud makes it scalable, and because Cloud offers infrastructure as code, we can take advantage of that. And then, when they say, "Go protect data in the Ireland region," they push a button, we stand up a stack right there in the Ireland region and scan and protect their data right there. If they say, "We need to be in GovCloud and operate in GovCloud East," there you go, push the button, and you can operate in GovCloud East as well. >> And with serverless and the region support and all the goodness, it really makes a good opportunity to manage these Cloud native services with the data interaction. So, really good prospects. Final question for you. I mean, we love the story. I think it is going to be a really changing market in this area in a big way. I think the data storage relationship relative to higher-level services will be huge as Cloud native continues to drive everything. What's the future? I mean, do you guys see yourselves as an all-encompassing, all-singing-and-dancing storage platform, or a set of services that you're going to enable developers with and drive that value? Where do you see this going? >> I think that it's a mix of both. Ultimately, you saw even on Storage Day the announcement of File Cache, and File Cache creates a new common namespace across different storage platforms, and so the notion of being able to use one area to access your data and have it come from different spots is fantastic. That's been in the on-prem world for a couple of years, and it's finally making it to the Cloud. I see us following that trend and helping support it. We're super laser-focused on Cloud storage itself, so EBS volumes; we keep having customers come to us and say, "I don't want to run agents in my EC2 instances. I want you to snap and scan," and, "I've got all this EFS and FSx out there that we want to scan." And so, we see that all of the Cloud storage platforms, Amazon WorkDocs, EFS, FSx, EBS, S3, will all come together, and we'll provide a solution that's super simple and highly scalable that can meet all the storage needs. So that's our goal right now and what we're working towards. >> Well, Cloud Storage Security, you couldn't get a more descriptive name for what you guys are working on. And again, I've had many contacts with Andy Jassy when he was running AWS, and he always loved to quote "The Innovator's Dilemma," from one of his teachers at Harvard Business School, and we were riffing on that the other day, and I want to get your thoughts. It's not so much "The Innovator's Dilemma" anymore relative to Cloud, 'cause that's kind of a done deal. It's "The Integrator's Dilemma," and so the integrations are so huge now. If you don't integrate the right way, that's the new dilemma. What's your reaction to that? >> 100% agreed. It's been super interesting. Our customers have come to us for a security solution, and they don't expect us to be, 'cause we don't want to be either, our own engine vendor; we're not the ones creating the engines. We are integrating other engines in, and so we can provide a multi-engine scan that gives you higher efficacy.
So this notion of offering simple integrations without slowing down the process, that's the key factor here, and that's what we've been after. We are about simplifying the Cloud experience of protecting your storage, and it's been funny, because I thought customers might complain that we're not a name-brand engine vendor, but they love the fact that we have multiple engines in place and that we're bringing them this higher-efficacy, multi-engine scan. >> I mean, the developer trends can change on a dime. You make it faster, smarter, higher velocity and more protected; that's a winning formula in the Cloud. So Ed, congratulations, and thanks for spending the time to riff on and talk about Cloud Storage Security, and congratulations on the company's success. Thanks for coming on "theCUBE." >> My pleasure, thanks a lot, John. >> Okay. This is a CUBE conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Thanks for watching.
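Several of the mechanics Casmer describes, scanning objects as they land in a bucket, classifying PII, tagging the results, and catching buckets whose public-access settings have drifted, can be approximated with stock AWS building blocks. The sketch below is illustrative only and is not Cloud Storage Security's implementation: it assumes a Lambda function wired to the bucket's ObjectCreated event notifications, run_av_engine() is a placeholder for whatever scanning engine you plug in, the PII patterns are deliberately simplistic, and only public boto3 calls (download_file, put_object_tagging, get_public_access_block) are used.

```python
# Illustrative scan-and-classify handler for S3 uploads, plus a bucket
# exposure check. Not a vendor implementation; run_av_engine() is a stub.
import re
import boto3

s3 = boto3.client("s3")

PII_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def run_av_engine(path: str) -> str:
    # Placeholder: call one or more real scanning engines here.
    return "CLEAN"

def classify_pii(path: str) -> str:
    # Very rough classification: report which PII categories appear in the file.
    with open(path, "rb") as f:
        text = f.read().decode("utf-8", errors="ignore")
    found = [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]
    return " ".join(found) if found else "none"

def handler(event, context):
    # Triggered by s3:ObjectCreated:* notifications on the protected bucket.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        local_path = "/tmp/" + key.rsplit("/", 1)[-1]
        s3.download_file(bucket, key, local_path)
        # Record verdicts as object tags so downstream consumers can filter on them.
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [
                {"Key": "scan-result", "Value": run_av_engine(local_path)},
                {"Key": "pii", "Value": classify_pii(local_path)},
            ]},
        )

def public_access_not_fully_blocked(bucket: str) -> bool:
    # True when the bucket's public access block leaves some public path open.
    # Note: raises ClientError if the bucket has no public access block configured.
    cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    return not all(cfg.values())
```

Because everything here runs against the customer's own bucket with their own credentials, the data never leaves their account or region, which mirrors the chain-of-custody and data-residency points made in the conversation.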

Published Date : Aug 11 2022

SUMMARY :

Ed Casmer, founder and CEO of Cloud Storage Security, joins John Furrier for a CUBE Conversation on the storage blind spot in cloud security: the explosive growth of S3 object storage, misconfigured buckets, and data arriving from untrusted sources. He describes a lightweight, serverless scanning solution that runs in the customer's own account and region, classifies data, scans on ingest and on access, and combines multiple engines, and he weighs in on "The Integrator's Dilemma" in cloud security.


Jen Huffstetler, Intel | HPE Discover 2022


 

>> Announcer: theCube presents HPE Discover 2022, brought to you by HPE. >> Hello and welcome back to theCube's continuous coverage of HPE Discover 2022 from Las Vegas, the former Sands Convention Center, now the Venetian. John Furrier and Dave Vellante here, excited to welcome in Jen Huffstetler, who's the Chief Product Sustainability Officer at Intel. Jen, welcome to theCube, thanks for coming on. >> Thank you very much for having me. >> You're really welcome. So you dial back, I don't know, the last decade, and nobody really cared about it; some people gave it lip service, but corporations generally weren't as in tune. What's changed? Why has it become so top of mind? >> I think in the last year we've noticed, as we all were working from home, that we had a greater appreciation for the balance in our lives and the impact that climate change was having on the world. So I think across the globe there are regulations, industry initiatives, and even personally, everyone is really starting to think about this a little more, and corporations specifically are trying to figure out how they are going to continue to do business in these new regulated environments. >> And IT leaders generally weren't in tune, 'cause they weren't paying the power bill; for years it was the facilities people, but then they started to come together. How should leaders in technology, business tech leaders, IT leaders, CIOs, how should they be thinking about their sustainability goals? >> Yeah, I think for IT leaders specifically, they really want to be looking at the footprint of their overall infrastructure. So whether that is their on-prem data center or their cloud instances, what can they do to maximize the resources and lower the footprint that they contribute to their company's overall footprint? So IT really has a critical role to play, I think, because as you'll find in IT, the carbon footprint of the data center, of those products in use, is actually fairly significant. So having a focus there will be key. >> You know, compute has always been one of those things where, you know, Intel makes chips, so heat is important in compute. What are Intel's current goals? Give us an update on where you guys are at. What's the ideal goal in the long term? Where are you now? You guys have had a focus on this for a long, long time. Where are we now? 'Cause I won't say the goalposts have changed; they're changing the definitions of what this means. What's the current state of Intel's carbon footprint and overall goals? >> Yeah, no, thanks for asking. As you mentioned, we've been invested in lowering our environmental footprint for decades; in fact, without action otherwise, you know, we've already lowered our carbon footprint by 75%. So we're really in that last mile. And that is why we recently announced a very ambitious goal, net-zero 2040 for our Scope 1 and 2 manufacturing operations. This is really an industry-leading goal, partly because the technology doesn't even exist yet, right, for the chemistries and for making the sand into silicon into, you know, computer chips. And so by taking this bold goal, we're going to be able to lead the industry, partner with academia, partner with consortia, and that drive is going to have ripple effects across the industry and all of the components in semiconductors. >> Is there a changing definition of Net-Zero?
What does that mean? 'Cause some people say they're Net-Zero, and maybe in one area they might be, but maybe not holistically across the company, as it becomes more of a broader mandate: society, employees, partners, Wall Street are all putting pressure on companies. Has the Net-Zero conversation changed a little bit, or what's your view on that? >> I think we definitely see it changing, with changing regulations like those coming forth from the SEC here in the US and in Europe. Net-Zero can't just be lip service anymore, right? It really has to be real reductions in your footprint. The same goes for our supply chain goals, where we've taken on new reduction goals even as our operations are growing. So I think everybody is going through this realization that, you know, with the growth, how do we keep it lower than it would've been otherwise, keep focusing on those reductions, and have not just renewable credits that could have been bought in one location and applied to a different geographical location, but real, credible offsets for where the product is manufactured or the compute is deployed. >> Jen, when you talk about how you've reduced already by 75%, you're on that last mile. We listened to Pat Gelsinger very closely; up until recently he was the most frequent guest on theCube. He's been busy, I guess. But as you apply that discipline to where you've been, your existing business, and now Pat's laid out this plan to increase the Foundry business, how does that affect your... Are you able to carry through that reduction to, you know, the new foundries? Do you have to rethink that? How does that play in? >> Certainly. Well, the Foundry expansion of our business with IDM 2.0 is going to include the existing factories that already have the benefit of those decades of investment and focus. And then, you know, we have clear goals for our new factories in Ohio and in Europe to achieve goals as well. That's part of the overall plan for net-zero 2040. It's inclusive of our expansion into Foundry, which means that many, many, many more customers are going to be able to benefit from the leadership that Intel has here. And then as we onboard acquisitions, as any company does, we need to look at the footprint of the acquisition and see what we can do to align it with our overall goals. >> Yeah, so sustainable IT, I don't know, for some reason was always an area of interest to me. And when we first started, even before I met you, John, we worked with PG&E to help companies get rebates for installing technologies that would reduce their carbon footprint. >> Jen: Very forward thinking. >> And it was a hard thing to get, you know, but compute was the big deal. And there were technologies, and I remember virtualization at the time was one, and we would go in and explain to the PG&E engineers how that all worked, 'cause they had metrics that they wanted to see. But anyway, virtualization was clearly one factor. What are the technologies today that people should be paying attention to? Flash storage was another one. >> John: AI's going to have a big impact. >> Reducing the spinning disk; but what are the ones today that are going to have an impact? >> Yeah, no, that's a great question. We like to think of the built-in acceleration that we have, including some of the early acceleration for virtualization technologies, as foundational. So built-in accelerated compute is green compute, and it allows you to maximize the utilization of the transistors that you already have deployed in your data center.
This compute is sitting there and it is ready to be used. What matters most is what you were talking about, John: that real-world workload performance. And it's not just, you know, a lot of specsmanship around synthetic benchmarks, but AI performance. With the built-in acceleration that we have in Xeon processors with Intel DL Boost, we're able to achieve 4x the AI performance per watt without, you know, doing that otherwise. You think about the consolidation you were talking about that happened with virtualization; you're effectively doing the same thing with these built-in accelerators that we have continued to add over time, and have even more coming in our Sapphire Rapids generation. >> And you call that green compute? Or what does that mean, green compute? >> Well, you are greening your compute. >> John: Okay, got it. >> By increasing utilization of your resources. If you're able to deploy AI, utilize the telemetry within the CPU that already exists. We have a customer, KDDI in Japan, with a great proof point that they already announced on their 5G data center: they lowered their data center power by 20%. That is real bottom-line impact, as well as carbon footprint impact, by utilizing all of those built-in capabilities. So, yeah. >> We've heard some stories earlier in the event here at Discover where there were some cooling innovations, moving the heat to power towns and cities. So you start to see, and you guys have been following this data center piece and been part of the whole hot-climate, cold-climate discussion, but there are new ways to recycle energy. Where's that at? 'Cause that sounds very sci-fi to me, that, oh yeah, the whole town runs on the data center exhaust. So there's now systems thinking around compute. What's your reaction to that? What's the current view on re-engineering a system to take advantage of that energy or recycling? >> I think when we look at our vision of sustainable compute over this horizon, it's going to be required, right? We know that compute helps to solve society's challenges, and the demand for it is not going away. So how do we take new innovations, looking at a systems level, as compute gets further deployed at the edge? How do we make it efficient? How do we ensure that that compute can be deployed where there is air pollution, right? So some of these technologies not only enable reuse, but they also enable some, you know, closing in of the solution to make it more robust for edge deployments. It'll allow you to place your data center wherever you need it. It no longer needs to reside in one place. And then that's going to allow you to have those energy reuse benefits, either into district heating if you're in, you know, Northern Europe, or there are examples of folks putting greenhouses right next to a data center to start growing food in what were previously food deserts. So I don't think it's science fiction. It is how we need to rethink, as a society, how to utilize everything we have, the tools at our hand. >> There's a commercial on the radio, on the East Coast anyway, I don't know if you guys have heard of it; it's like, "What's your one thing?" And the gentleman comes on and talks about things that you can do to help the environment. And he says, "What's your one thing?" So what's the one thing, or maybe it's not just one, that IT managers should be doing to affect carbon footprint? >> The one thing to affect their carbon footprint; there are so many things. >> Dave: Two, three, tell me.
>> I think if I was going to pick the one most impactful thing that they could do in their infrastructure, it's back to John's comment: imagine if the world deployed AI, all the benefits not only in business outcomes, you know, the revenue, lowering the TCO, but also lowering the footprint. So I think that's the one thing they could do. If I could throw in maybe a second, it would be to really consider how you get renewable energy into your computing ecosystem. And then, you know, at Intel, where we're 80% renewable power, our processors are inherently lower carbon because of all the work that we've done; others have less than 10% renewable energy. So you want to look for products that have low carbon by design, any Intel-based system, and where you can get renewables from your grid, ask for it, run your workload there. And even the next step, to get to sustainable computing, is going to take everyone, including every enterprise, to think differently and really, you know, consider what it would look like to bring renewables onto my site if I don't have access through my local utility; and many customers are really starting to evaluate that. >> Well Jen, it's great to have you on theCube. Great insight into the current state of the art of sustainability and carbon footprint. My final question for you is more about the talent out there. The younger generation coming in, I'll say, feels the pressure; people want to work for a company that's mission-driven, we know that; the Wall Street impact is going to be on the financial business model, and then there's the save-the-planet kind of pressure. So there's a lot of talent coming in. Is there awareness at the university level? Is there a course? Can people get degrees in sustainability? There are a lot of people who want to come into this field. What are some of the talent backgrounds of people learning or who might want to be in this field? What would you recommend? How would you describe how to onboard into the career if they want to contribute? What are some of those factors? 'Cause it's not new, new, but it's going to be globally aware. >> Yeah, well, there certainly are degrees with focuses on sustainability, maybe looking holistically at the enterprise, but where I think the globe is really going to benefit... we didn't really talk about the software inefficiency. As we delivered more and more compute over the last few decades, basically the programming languages got more inefficient. So there's at least 35% inefficiency in the software. So being a software engineer, even if you're not an AI engineer, and AI would probably be the highest impact, being a software engineer who focuses on building new applications that are going to be efficient, that are well utilizing the transistors, that aren't leaving zombie, you know, services running that aren't being utilized. So I actually think-- >> So we've got to program in assembly? (all laughing) >> (indistinct), would get really offended. >> Get machine language. I have to throw that in, sorry. >> Maybe not that bad. (all laughing) >> That's funny, just a joke. But the question is, what's my career path? What's a hot career in this area? Sustainability, AI, totally see that. Anything else? Any other career opportunities you see, or hot jobs or hot areas to work on?
>> Yeah, I mean, just really, I think it takes every architect, every engineer to think differently about their design, whether it's the design of a building or the design of a processor or a motherboard. We have a whole low-carbon architecture, you know, set of actions that are underway that we'll take to the ecosystem. So it could really span any engineering discipline, I think. But it's a mindset with which you approach that customer problem. >> John: That systems thinking, yeah. >> Yeah, sustainability designed in. Jen, thanks so much for coming back on theCube. It's great to have you. >> Thank you. >> All right. Dave Vellante for John Furrier; we're sustaining theCube. We're winding down day three, HPE Discover 2022. We'll be right back. (upbeat music)
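Huffstetler's point about using "the telemetry within the CPU that already exists" has a simple, concrete counterpart on Linux: recent Intel CPUs expose package energy counters through the RAPL powercap interface under /sys/class/powercap. The sketch below samples that counter to estimate average package power. It is illustrative only; the exact sysfs path varies by platform and kernel, reading it may require elevated privileges, and the counter wraps, which this toy script handles only for a single wrap within the sampling interval.

```python
# Rough sketch: estimate CPU package power from the Linux RAPL powercap counter.
# Assumes an Intel CPU with the intel-rapl driver loaded; path and permissions
# vary by system, so treat this as illustrative rather than production telemetry.
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"           # package 0, microjoules
MAX_RANGE_FILE = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"

def read_uj(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

def average_package_watts(interval_s: float = 1.0) -> float:
    max_range = read_uj(MAX_RANGE_FILE)
    start = read_uj(ENERGY_FILE)
    time.sleep(interval_s)
    end = read_uj(ENERGY_FILE)
    delta = end - start
    if delta < 0:                           # counter wrapped during the interval
        delta += max_range
    return (delta / 1_000_000) / interval_s  # microjoules -> joules -> watts

if __name__ == "__main__":
    print(f"package-0 average power: {average_package_watts():.1f} W")
```

Sampling power like this before and after a consolidation or accelerator change is one low-effort way for an IT team to check whether a "green compute" claim shows up on their own hardware.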

Published Date : Jun 30 2022

SUMMARY :

Jen Huffstetler, Chief Product Sustainability Officer at Intel, joins Dave Vellante and John Furrier at HPE Discover 2022 to discuss why sustainability has moved to the top of the corporate agenda, Intel's net-zero 2040 goal for its Scope 1 and 2 manufacturing operations, how built-in acceleration and CPU telemetry make compute greener, energy reuse at the edge and in telco, and the skills and career paths for engineers who want to work on sustainable computing.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jen HuffstetlerPERSON

0.99+

JohnPERSON

0.99+

Dave VellantePERSON

0.99+

DavePERSON

0.99+

OhioLOCATION

0.99+

EuropeLOCATION

0.99+

PG&EORGANIZATION

0.99+

USLOCATION

0.99+

80%QUANTITY

0.99+

JapanLOCATION

0.99+

Pat GelsingerPERSON

0.99+

Las VegasLOCATION

0.99+

JenPERSON

0.99+

SECORGANIZATION

0.99+

75%QUANTITY

0.99+

last yearDATE

0.99+

TwoQUANTITY

0.99+

John FurrierPERSON

0.99+

threeQUANTITY

0.99+

Northern EuropeLOCATION

0.99+

one factorQUANTITY

0.99+

HPEORGANIZATION

0.98+

PatPERSON

0.98+

IntelORGANIZATION

0.98+

oneQUANTITY

0.98+

one locationQUANTITY

0.98+

20%QUANTITY

0.98+

twoQUANTITY

0.98+

one thingQUANTITY

0.97+

firstQUANTITY

0.97+

Net-ZeroORGANIZATION

0.96+

one placeQUANTITY

0.96+

DL BoostCOMMERCIAL_ITEM

0.96+

last decadeDATE

0.95+

todayDATE

0.93+

decadesQUANTITY

0.92+

day threeQUANTITY

0.9+

one areaQUANTITY

0.9+

East CoastLOCATION

0.9+

KDDIORGANIZATION

0.89+

DiscoverORGANIZATION

0.88+

less than 10% renewableQUANTITY

0.86+

Wall StreetLOCATION

0.86+

Sands Convention CenterLOCATION

0.84+

theCubeORGANIZATION

0.83+

four XQUANTITY

0.82+

WallORGANIZATION

0.82+

least 35%QUANTITY

0.75+

ChiefPERSON

0.75+

IBM 2.0ORGANIZATION

0.74+

Sustainability OfficerPERSON

0.72+

last few decadesDATE

0.69+

secondQUANTITY

0.63+

Net-Zero 2040TITLE

0.62+

GenerationCOMMERCIAL_ITEM

0.6+

HPE Discover 2022COMMERCIAL_ITEM

0.55+

2022COMMERCIAL_ITEM

0.55+

every engineerQUANTITY

0.54+

5GQUANTITY

0.54+

-ZeroOTHER

0.54+

HPECOMMERCIAL_ITEM

0.48+

StreetLOCATION

0.47+

Jim Walker, Cockroach Labs & Christian Hüning, finleap connect | Kubecon + Cloudnativecon EU 2022


 

>> (bright music) >> Narrator: theCUBE presents Kubecon and Cloudnativecon, year of 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, and Kubecon + Cloudnativecon Europe 2022. I'm Keith Townsend, along with my host, Paul Gillin, who is the senior editor for architecture at SiliconANGLE. Paul. >> Keith, you've been asking me questions all these last two days. Let me ask you one. You're a traveling man. You go to a lot of conferences. What's different about this one? >> You know what, we were just talking about that pre-conference: open source conferences are usually pretty intimate. This is big. 7,500 people talking about complex topics, all in one big area. And then, I got to say, it's overwhelming. It's way more. It's not focused on a single company's product or messaging. It is about a whole ecosystem, a very different show. >> And certainly some of the best t-shirts I've ever seen. And our first guest, Jim, has one of the better ones. >> I mean, a big cockroach, come on, right? >> Jim Walker, principal product evangelist at CockroachDB, and Christian Hüning, tech director of cloud technologies at Finleap Connect, a financial services company that's based out of Germany, now offering services in four countries. >> Basically all over Europe. >> Okay. >> But we are in three countries with offices. >> So you're a CockroachDB customer, and I got to ask the obvious question. Databases are hard, and the company started in 2015, CockroachDB; you've been a customer since 2019, I understand. Why take the risk on a four-year-old database? I mean, that just sounds like a world of risk and trouble. >> So it was in 2018 when we joined the company, back then, and we did this cloud native transformation; that was our task, basically. We had a very limited amount of time and we were faced with a legacy infrastructure, and we needed something that would run in a cloud native way and just blend in with everything else we had. And the idea was to go all in with Kubernetes. Though early days, a lot of things were alpha, beta, and we were running on MySQL back then. >> Yeah. >> On a VM, kind of a small setup. And then we were looking for something that we could just deploy in Kubernetes, alongside everything else. And we had the stack and we had to duplicate it many times. So also, to maintain that, we wanted to do it all the same, like with GitOps and everything, and Cockroach delivered that proposition. So that was why we evaluated the risk of relatively early adopting that solution; the proposition of having something that's truly cloud native and really blends in with everything else we do in the same way was something we considered, and then we jumped, the leap of faith and >> The fin leap of faith >> The fin leap of faith. Exactly. And we were not dissatisfied. >> So talk to me a little bit about the challenges, because when we think of MySQL, MySQL scales to amazing sizes; it is the de facto database for many cloud-based architectures. What problems were you running into with MySQL? >> We were running into the problem that we essentially, as a fintech company, we are regulated, and we have companies, customers, that really value running things like on-prem, private cloud; on-prem is a bit of a bad word, maybe. So it's private cloud, hybrid cloud, private cloud in our own data centers in Frankfurt. And we needed to run it in there.
So we wanted to somehow manage that, and so all of the managed solutions were off the table; we couldn't use them. So we needed something that ran in Kubernetes, because we only wanted to maintain Kubernetes. We're a small team and didn't want to also use a full-blown VM solution of sorts. So that was that. And the other thing was, we needed something that was HA, distributable somehow. So we also looked into other solutions back at the time, like Vitess, which is also prominent for having a MySQL-compliant interface, and a great solution. We also got it to work, but we figured, from the scale and from the sheer amount of maintenance it would need, we couldn't deliver that; we were too small for that. So that's where Cockroach just fitted in nicely, by being able to distribute, be HA, be resilient against failure, but also be able to scale out, because we had this problem with a single MySQL deployment: as it grew, as the data amounts grew, we had trouble operationally keeping that under control. >> So Jim, every time someone comes to me and says, I have a new database, I think, we don't need it, yet another database. >> Right. >> What problem, or how does CockroachDB go about solving the types of problems that Christian had? >> Yeah. I mean, Christian laid out why it exists. I mean, look guys, building a database isn't easy. If it was easy, we'd have a database for every application, but you know, Michael Stonebraker, kind of the godfather of all databases, says it himself: it takes seven, eight years for a database to fully gestate, to be something that's like enterprise-ready and can kind of be relied upon. We've been building for about seven, eight years. I mean, I'm thankful for people like Christian joining us early on to help us kind of troubleshoot and go through some things. We're building a database; it's not easy. You're right. But building a distributed system is also not easy. And so for us, if you look at what's going on in just infrastructure in general, what's happening in Kubernetes, like this whole space is Kubernetes, it's all about automation. How do I automate scale? How do I automate resilience out of the entire equation of what we're actually doing? I don't want to have to think about active-passive systems. I don't want to think about sharding a database. Sure, you can scale MySQL. You know how many people it takes to run three or four shards of a MySQL database. That's not automation. And I tell you what, in this world right now, with the advances in data, how hard it is to find people who actually understand infrastructure, to hire them. This is why this automation is happening: because our systems are more complex. So we started from the very beginning to be something that was very different. This is a cloud native database. This is built with the same exact principles that are in Kubernetes. In fact, like Kubernetes, it's kind of a spawn of Borg, the back end of Google. We are inspired by Spanner. I mean, this was started by three engineers who had worked at Google and were frustrated they didn't have the tools they had at Google. So they built something that was outside of Google. And how do we give that kind of Google-like infrastructure to everybody? That's the advent of Cockroach and kind of why we're doing what we're doing. >> As your database has matured, you're now beginning a transition, or you're in a transition, to a serverless version. How are you doing that without disrupting the experience for existing customers? And why go serverless at all?
>> Yeah, it's interesting. So, you know, serverless was kind of an R&D project for us when we first started on that path, because I think, you know, ultimately what we would love to do for the database is, let's not even think about the database, Keith. Like, I don't want to think about the database. What we're building toward is, we want a SQL API in the cloud. That's it. I don't want to think about scale. I don't want to think about upgrades. I literally, like, that stuff should just go away. That's what we need, right? As developers, I don't want to think about isolation levels; like, you know, give me DML and I want to be able to communicate. And for us the realization of that vision is like, if we're going to put a database on the planet for everybody to actually use, we have to be really, really efficient. And serverless, which I believe really should be infrastructureless, because I don't think we should be thinking just about servers. We've got to think about, how do I take the context of regions out of this thing? How do I take the context of cloud providers out of what we're talking about? Let's just not think about that. Let's just code against something. Serverless was the answer. Now, we've been building it for about a year and a half. We launched a serverless version of Cockroach last October, and we did it so that everybody in the public could have a free version of a database. And that's what serverless allows us to do. It's all consumption-based up to certain limits, and then you pay. But I think ultimately, and we spoke a little bit about this at the very beginning, I think as ISVs, people who are building software today, the serverless vision gets really interesting, because I think what's on the mind of the CTO is, how do I drive down my cost to the cloud provider? And if we can basically drive down costs through either making things multi-tenant and super efficient, and then optimizing how much compute we use, spinning things down to zero and back up, and auto-scaling these sorts of things in our software, we can start to make changes in the way that people are thinking about spend with the cloud provider. And ultimately we did that so we could do things for free. >> So, Jim, I think I disagree, Christian, I'm sorry, Jim, I think I disagree with you just a little bit. Christian, I think the biggest challenge facing CTOs is people. >> True. >> Getting the people to worry about cost and spend and implementation. So as you hear the concepts of CockroachDB moving to a serverless model, and you're a large customer, how does that make you think, or react, on the people side of your resources? >> Well, I can say that from the people side of resources, luckily Cockroach is our least problem. So it just kind of, we always said, it's an operator's dream, because that was the part that just worked for us, so. >> And it's worked as you have scaled it? Without you having ... >> Yeah. I mean, we use it in a bit of a, we do not really scale out the Cockroach, like, really large. It's more that we use it with the enterprise features of encryption in the stack that our customers then demand. If they do so, we have the SaaS offering and we also do, like, dedicated stacks. So by having a fully cloud native solution on top of Kubernetes as the foundational layer, we can just use that and stamp it out and deploy it. >> How does that translate into services you can provide your customers? Are there services you can provide customers that you couldn't have if you were running, say, MySQL?
>> No, what we do is, we run this, so the SaaS offering runs in our hybrid private cloud. And the other thing that we offer is that we run the entire stack at a cloud provider of their choosing. So if they are on AWS, they give us an AWS account, we put it in there. Theoretically, we could then also talk about using the serverless variant if they'd like, but it's not strictly required for us. >> So Christian, talk to me about that provisioning process, because if I had a MySQL deployment before, I can imagine how putting that into a cloud native type of repeatable CI/CD pipeline or Ansible script could be difficult. Talk to me about that. How does CockroachDB enable you to create new onboarding experiences for your customers? >> So what we do is, we use Helm charts all over the place, as probably everybody else does. And then each application team has their parts of services; they've packaged them into Helm charts, they've wrapped those in a super chart that gets wrapped into the super, super chart for the entire stack. And then at the right place, somewhere in between, Cockroach is added, where it's a dependency. And as they just offer a Helm chart, that's as easy as it gets. And then what the teams do is they have an init Job that, once you deploy all that, would spin up. And as soon as Cockroach is ready, it's just the same reconcile loop as everything else. It will then provision users, set up the database schema, do all that, and initialize the initial data sets that might be required for a new setup. So with that setup, we can spin up a new cluster and then deploy that stack chart in there. And it takes some time, and then it's done. >> So talk to me about lifecycle management. Because when I have one database, I have one schema. When I have a lot of databases, I have a lot of different schemas. How do you keep your stack consistent across customers? >> That is basically part of the same story. We have GitOps all over the place. So we have this repository with the super Helm chart versions, and we maintain like minus three versions and ensure that we update the customers and keep them up to date. It's part of the contract sometimes, down to the schedule of the customer at times. And Cockroach also nicely supports these updates, with these migrations in the background, the schema migrations in the background. So we use, in our case, in that integration, SQLAlchemy, which is also nicely supported. That was also part of the story: the move from MySQL to Postgres was supported by the ORM, these kinds of things. So that approach, together with the ease of Helm charts and the background migrations of the schema, makes for very seamless upgrade operations. Before that we had to have downtime. >> That's right, you could have online schema changes. Upgrading the database uses the same concept of rolling upgrades that you have in Kubernetes. It's just cloud native. It just fits that same context, I think. >> Christian: It became a no-brainer. >> Yeah. >> Yeah. >> Jim, you mentioned the idea of a SQL API in the cloud; that's really interesting. Why does such a thing not exist? >> Because it's really difficult to build. You know, a SQL API, what does that mean? Like, okay, where does that endpoint live? Is there one in California, one on the east coast, one in Europe, one in Asia? Okay. And I'm asking that endpoint for data. Where does that data live? Can you control where data lives on the planet? Because ultimately, what we're fighting in software today in a lot of these situations is the speed of light.
And so how do you intelligently place data on this planet? So that, you know, when you're asking for data, when you're maybe home, it's a different latency than when you're here in Valencia. Does that data follow and move with you? These are really, really difficult problems to solve. And I think that we're at that layer, at this moment in time in software engineering, where we're solving some really interesting things, 'cause we are butting up against this speed-of-light problem. And ultimately that's one of the biggest challenges. But underneath, it has to have all this automation: the ease with which we can scale this database, the always-on resilience, the way that we can upgrade the entire thing with just rolling upgrades. The cloud native concepts are really what's enabling us to do things at global scale; it's automation. >> Let's talk about that speed of light and global scale. There's no better conference for speed of light, for scale, than Kubecon. Any predictions coming out of the show? >> It's less a prediction for me and more of an observation, you guys. Like, look at two years ago, when we were here in Barcelona at Kubecon EU: it was a lot of hype. A lot of hype, a lot of people walking around, curious, fascinated. This is reality. The conversations that I'm having with people today, there's a reality. There's people really doing it; they're becoming cloud native. And to me, I think what we're going to see over the next two to three years is people starting to adopt this kind of distributed mindset. And it permeates not just within infrastructure, but it goes up into the stack. We'll start to see many more developers using Go and these kinds of threaded languages, because that distributed mindset, if it starts at the chip all the way to the fingertip of the person clicking, and you're distributed everywhere in between, it is extremely powerful. And I think that's what Finleap, I mean, that's exactly what the team is doing. And I think there's a lot of value and a lot of power in that. >> Jim, Christian, thank you so much for coming on theCUBE and sharing your story. You know what, we're past the hype cycle of Kubernetes, I agree. I was a nonbeliever in Kubernetes two, three years ago. It was mostly hype. We're looking at customers from Microsoft, Finleap and competitors doing amazing things with this platform and cloud native in general. Stay tuned for more coverage of Kubecon from Valencia, Spain. I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high tech coverage. (bright music)
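To make the deployment pattern Christian describes concrete, here is a minimal, hedged sketch of the kind of provisioning step an init Job might run against CockroachDB. It assumes only what the conversation states: CockroachDB speaks the PostgreSQL wire protocol, so a standard Postgres driver such as psycopg2 works, and schema changes roll out online. The connection string, database, and table names are placeholders; this is an illustration, not Finleap Connect's actual code (their stack uses SQLAlchemy).

```python
# Illustrative sketch only: provisioning and an online schema change against
# CockroachDB through its PostgreSQL-compatible wire protocol.
# The connection string, database, and table names are placeholders.
import psycopg2

conn = psycopg2.connect(
    "postgresql://app_user:app_pass@cockroachdb-public:26257/appdb?sslmode=require"
)
conn.autocommit = True  # run each DDL statement in its own implicit transaction

with conn.cursor() as cur:
    # The kind of setup an init Job could run once the Helm release is up.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS accounts (
            id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
            balance DECIMAL NOT NULL
        )
    """)
    # An online schema change: reads and writes keep flowing while the new
    # column is rolled out across the cluster in the background.
    cur.execute(
        "ALTER TABLE accounts ADD COLUMN IF NOT EXISTS currency STRING DEFAULT 'EUR'"
    )

conn.close()
```

In practice a migration tool (for example Alembic alongside SQLAlchemy) would version these statements rather than running them inline, but the no-downtime rollout behavior Jim and Christian describe is the same.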

Published Date : May 19 2022


Brian Schwarz, Google Cloud | VeeamON 2022


 

(soft intro music) >> Welcome back to theCUBE's coverage of VeeamON 2022. Dave Vellante with David Nicholson. Brian Schwarz is here. We're going to stay on cloud. He's the director of product management at Google Cloud. The world's biggest cloud, I contend. Brian, thanks for coming on theCUBE. >> Thanks for having me. Super excited to be here. >> Long-time infrastructure-as-a-service background, worked at Pure, worked at Cisco, Silicon Valley guy, techie. So we're going to get into it here. >> I love it. >> I was saying before, off camera, we used to go to Google Cloud Next every year. It was an awesome show. Guys built a big set for us. You joined right as the pandemic hit. So we've been out of touch a little bit. It's hard to... You know, you've got one eye on the virtual event, but give us the update on Google Cloud. What's happening generally, and specifically within storage? >> Yeah. So obviously the Cloud got a big boost during the pandemic because a lot of work went online. You know, more things kind of being digitally transformed as people keep trying to innovate. So obviously the growth of Google Cloud has got a big tailwind to it. So business has been really good, lots of R&D investment. We obviously have an incredible set of technology already, but still huge investments in new technologies that we've been bringing out over the past couple of years. It's great to get back out to events to talk to people about 'em. It's been a little hard the last couple of years to give people some of the insights. When I think about storage, huge investments. One of the things that some people know, but I think is probably underappreciated, is we use the same infrastructure for Google Cloud that is used for Google consumer products. So Search and Photos and all the public kind of things that most people are familiar with, Maps, et cetera. The same infrastructure, at the same time, is also used for Google Cloud. So we just have this tremendous capability of infrastructure. Google's got nine products that have a billion users, most of which many people know. So we're pretty good at storage, pretty good at compute, pretty good at networking. Obviously a lot of that kind of shines through on Google Cloud for enterprises to bring their applications, lift and shift and/or modernize, build new stuff in the Cloud with containers and things like that. >> Yeah, hence my contention that Google has the biggest cloud in the world, like I said before. Doesn't have the most IaaS revenue 'cause that's a different business. You can't comment, but I've got Google Cloud running at a $12 billion a year run rate. So a lot of times people go, "Oh yeah, Google, they're third place, going for the bronze." But that is a huge business. There aren't a lot of 10, $12 billion infrastructure companies. >> In a rapidly growing market. >> And if you do some back-of-napkin math, whatever, give me 10, 15, let's call it 15% of that, to storage. You've got a big storage business. I know you can't tell us how big, but it's big. And if you add in all the stuff that's not in GCP, you do a lot of storage. So you know storage, you understand the technology. So what is the state of technology? You have a background in Cisco, really a networking company; they used to do some storage stuff sort of on the side. We used to say they're going to buy NetApp; of course that never happened. That would've made no sense. Pure Storage obviously knows storage, but they were a disk array company essentially. Cloud storage, what's different about it?
What's different in the technology? How does Google think about it? >> You know, I always like to tell people there are some things that are the same and familiar to you, and there are some things that are different. If I start with some of the differences: object storage in the Cloud is just fundamentally different. Object storage on-prem has been around for a while, often used as kind of a third tier of storage, maybe a backup target, compliance, something like that. In the cloud, object storage is Tier one storage. A public reference for us, Spotify, okay, uses object storage for all the songs out there. And increasingly we see a lot of growth in-- >> Well, how are you defining Tier one storage in that regard? Again, are you thinking streaming service? Okay. Fine. Transactional? >> Spotify goes down and I'm pissed. >> Yeah. This is true. (Dave laughing) >> Not just you, maybe a few million other people too. One is importance, business importance. Tier one applications, like critical to the business, like business-down type stuff. But even if you look at it for performance, for capabilities, object storage in the cloud is a different thing than it was. >> Because of the architecture that you're deploying? >> Yeah. And the applications that we see running on it. Obviously, a huge growth in our business in AI and analytics. Obviously, Google's pretty well known in both spaces, BigQuery obviously on the analytics side, big massive data warehouses, and obviously-- >> Gets very high marks from customers. >> Yeah, very well regarded, super successful, super popular with our customers in Google Cloud. And then obviously AI as well. A lot of AI is about getting structure from unstructured data. Autonomous vehicles getting pictures and videos around the world. Speech recognition, audio is a fundamentally analog signal. You're trying to train computers to basically deal with analog things, and it's all stored in object storage, machine learning on top of it, creating all the insights, and frankly, things that computers can deal with. Getting structure out of the unstructured data. So you just see performance, capabilities, importance; it's really a Tier one storage, much like file and block have kind of always been. >> Depending on, right, the importance. Because I mean, it's a fair question, right? Because we're used to thinking, "Oh, you're running your Oracle transaction database on block storage." That's Tier one. But Spotify's a pretty important business. And again, on BigQuery, it is a cloud-native, born-in-the-cloud database; a lot of the cloud databases aren't, right? And that's one of the reasons why BigQuery is-- >> Google's really had a lot of success taking technologies that were built for some of the consumer services that we build and turning them into cloud-native Google Cloud. Like HDFS, which we were talking about: open source technologies that came originally from the Google File System. Now we have a new version of it that we run internally called Colossus, incredible technologies that are cloud-scale technologies that you can use to build things like Google Cloud Storage. >> I remember at one of the early Hadoop Worlds, I was talking to a Google engineer and saying, "Well, wow, that's so cool that Hadoop came. You guys were the mainspring of that." He goes, "Oh, we're way past Hadoop now." So this is early days of Hadoop (laughs)
But no, a consumer service for Google is at a scale that almost no business needs at a point in time. So you're not taking something and scaling it up-- >> Yeah. They're Tier one services-- for sure. >> Exactly. You're more often pairing it down so that a fortune 10 company can (laughs) leverage it. >> So let's dig into data protection in the Cloud, disaster recovery in the Cloud, Ransomware protection and then let's get into why Google. Maybe you could give us the trends that you're seeing, how you guys approach it, and why Google. >> Yeah. One of the things I always tell people, there's certain best practices and principles from on-prem that are just still applicable in the Cloud. And one of 'em is just fundamentals around recovery point objective and recovery time objective. You should know, for your apps, what you need, you should tier your apps, get best practice around them and think about those in the Cloud as well. The concept of RPO and RTO don't just magically go away just 'cause you're running in the Cloud. You should think about these things. And it's one of the reasons we're here at the VeeamON event. It's important, obviously, they have a tremendous skill in technology, but helping customers implement the right RPO and RTO for their different applications. And they also help do that in Google Cloud. So we have a great partnership with them, two main offerings that they offer in Google. One is integration for their on-prem things to use, basically Google as a backup target or DR target and then cloud-native backups they have some technologies, Veeam backup for Google. And obviously they also bought Kasten a while ago. 'Cause they also got excited about the container trend and obviously great technologies for those customers to use those in Google Cloud as well. >> So RPO and RTO is kind of IT terms, right? But we think of them as sort of the business requirement. Here's the business language. How much data are you willing to lose? And the business person says, "What? I don't want to lose any data." Oh, how big's your budget, right? Oh, okay. That's RPO. RTO is how fast you want to get it back? "How fast do you want to get it back if there's an outage?" "Instantly." "How much money do you want to spend on that?" "Oh." Okay. And then your application value will determine that. Okay. So that's what RPO and RTO is for those who you may not know that. Sometimes we get into the acronym too much. Okay. Why Google Cloud? >> Yeah. When I think about some of the infrastructure Google has and like why does it matter to a customer of Google Cloud? The first couple things I usually talk about is networking and storage. Compute's awesome, we can talk about containers and Kubernetes in a little bit, but if you just think about core infrastructure, networking, Google's got one of the biggest networks in the world, obviously to service all these consumer applications. Two things that I often tell people about the Google network, one, just tremendous backbone bandwidth across the regions. One of the things to think about with data protection, it's a large data set. When you're going to do recoveries, you're pushing lots of terabytes often and big pipes matter. Like it helps you hit the right recovery time objective 'cause you, "I want to do a restore across the country." You need good networks. And obviously Google has a tremendous network. I think we have like 20 subsea cables that we've built underneath the the world's oceans to connect the world on the internet. >> Awesome. 
>> The other thing that I think is really underappreciated about the Google network is how quickly you get into it. One of the reasons all the consumer apps have such good response time is there's a local access point to get into the Google network somewhere close to you almost anywhere in the world. I'm sure you can find some obscure place where we don't have an access point, but look, Search and Photos and Maps and Workspace all work so well because you get into the Google network fast, through local access points, and then we can control the quality of service. And that underlying substrate is the same substrate we have in Google Cloud. So the network is number one. The second one is storage: we have some really incredible capabilities in cloud storage, particularly around our dual-region and multi-region buckets. The multi-region bucket, the way I describe it to people, is a continent-sized bucket. A single bucket name, strongly consistent, that basically spans a continent. It's in some senses a little bit of the Nirvana of storage. No more DR failover, right? In a lot of places, traditionally on-prem but even in other clouds, it's two buckets, failover, right? Orchestration, setup. Whenever you do orchestration, the DR is a lot more complicated. You've got to do more fire drills, make sure it works. We have this capability to have a single namespace that spans regions, and it has strong read-after-write consistency; everything you drop into it you can read back immediately. >> Say I'm on the west coast and I still have a little bit of an on-premises data center, and I'm using Veeam to back something up, and I'm using storage within GCP. Trace out exactly what you mean by that in terms of a continent-sized bucket. Updates going to the recovery volume, for lack of a better term, in GCP. Where is that physically? If I'm on the west coast, what does that look like? >> Two main options. It depends again on what your business goals are. The first option is you pick a regional bucket; multiple zones in a Google Cloud region are going to store your data. It's resilient 'cause there are three zones in the region, but it's all in one region. And then your second option is this multi-region bucket, where we're basically taking a set of the Google Cloud regions from around North America and storing your data basically in the continent, multiple copies of your data. And that's great, because if you want to protect yourself from a regional outage, right, earthquake, natural disaster of some sort, this multi-region basically gives you this DR protection for free, and it's... Well, it's not free 'cause you have to pay for it, of course, but it's free from a failover perspective. Single namespace, your app doesn't need to know. You restart the app on the east coast, same bucket name. >> Right. That's good. >> Read and write instantly out of the bucket. >> Cool. What are you doing with Veeam? >> So we have this great partnership, obviously for data protection and DR. And I really often segment the conversation into two pieces. One is for traditional on-prem customers who essentially want to use the Cloud as either a backup or a DR target. Traditional Veeam Backup & Replication supports Google Cloud targets. You can write to Cloud Storage, with some of these advantages I mentioned. Our archive storage, really cheap. We just actually lowered the price for archive storage quite significantly, roughly a third of what you find in some of the other competitive clouds if you look at the capabilities.
Our archive class storage, fast recovery time, right? Fast latency, no hours to kind of rehydrate. >> Good. Storage in the cloud is overpriced. >> Yeah. >> It is. It is historically overpriced, despite all the rhetoric. Good. I didn't know that. I'm glad to hear it. >> Yeah. So with the archive class storage, you essentially read and write into this bucket and restore. So it's often one of the things I joke with people about. I live in Silicon Valley; I still see the tape truck driving around. I really think people can really modernize these environments and use the cloud as a backup target. You get a copy of your data off-prem. >> Don't you guys use tape? >> Well, we don't talk a lot about-- >> No comment. Just checking. >> And just to be clear, when he says cloud storage is overpriced, he thinks that a postage stamp is overpriced, right? >> No. >> If I give you 50 cents, are you going to deliver a letter cross-country? No. Cloud storage, it's not overpriced. >> Okay. (David laughing) We're going to have that conversation. I think it's historically overpriced. I think it could be more attractive relative to the cost of the underlying technology. So good for you guys, pushing prices down. >> Yeah. So this archive class storage is one great area. The second area we really work with Veeam on is protecting cloud-native workloads. So increasingly customers are running workloads in the Cloud: they run VMware in the Cloud, they run normal VMs, they run containers. Veeam has two offerings in Google that essentially help customers protect that data, hit their RPO and RTO objectives. Another thing that is not different in the Cloud is the need to meet your compliance regulations, right? So having a product like Veeam that is easy to show back to your auditor, to your regulator, to make sure that you have copies of your data, that you can hit an appropriate recovery time objective if you're in finance or healthcare or energy. So there are some really good Veeam technologies that work in Google Cloud to protect applications that actually run in Google Cloud all in. >> To your point about the tape truck, I was kind of tongue in cheek, but I know you guys use tape. But the point is you shouldn't have to call the tape truck, right? You should go to Google and say, "Okay. I need my data back." Now having said that, sometimes the highest bandwidth in the world is putting all this stuff on the truck. Is there an option for that? >> Again, it gets back to this networking capability that I mentioned. Yes, people do like to joke about, okay, trucks and trains and things can have a lot of bandwidth, but big networks can push a lot of data around, obviously. >> And you've got a big network. >> We've got a huge network. So if you want to push... I've seen statistics. You can do terabits a second to a single Google Cloud Storage bucket, supercomputing-type performance inside Google Cloud, which from a scale perspective, whether it be network or compute, these are things that scale. If there's one thing that Google's really, really good at, it's really high scale. >> And if companies can't afford to... >> Yeah, if you're that sensitive, avoid moving the data altogether. If you're that sensitive, have your recovery capability be in GCP. >> Yeah. Well, and again-- >> So that when you're recovering you're not having to move data. >> It's proximate to it, yeah. That's the point. >> Recover in GCVE, fail over your VMware cluster. >> Exactly. >> And use the cloud as a DR target.
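For readers who want to see the bucket setup Brian describes in practice, here is a small, hedged sketch using the google-cloud-storage Python client: a multi-region bucket (location "US") with the Archive storage class as a backup target. The project, bucket, and object names are placeholders, and this illustrates the ideas in the conversation rather than Veeam's or Google's actual tooling.

```python
# Hedged sketch: a multi-region, Archive-class bucket as a backup target.
# Project, bucket, and file names below are placeholders, not real resources.
from google.cloud import storage

client = storage.Client(project="example-backup-project")

bucket = client.bucket("example-veeam-backup-target")
bucket.storage_class = "ARCHIVE"  # low-cost class aimed at backup and archive copies
new_bucket = client.create_bucket(bucket, location="US")  # "US" is a multi-region

# Backups write into the bucket; restores read straight back out of the same
# single namespace, with no separate rehydration step.
blob = new_bucket.blob("restore-points/vm-042.bak")
blob.upload_from_filename("local-backups/vm-042.bak")
data = blob.download_as_bytes()
print(f"restored {len(data)} bytes")
```

Swapping location="US" for a single region such as "us-west1" gives the regional option Brian mentions; the choice between the two is exactly the regional-versus-multi-region trade-off he outlines.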
We've got very little time, but can you just give us a rundown of your portfolio in storage? >> Yeah. So storage: Cloud Storage for object storage, with a bunch of regional options and classes of storage, like I mentioned, archive storage. Our first-party offerings in the file area, our Filestore, basic, enterprise, and high scale, which is really for highly concurrent, parallelized applications. Persistent Disk is our block storage offering. We also have a very high-performance cache block storage offering, and local SSDs. So those are the main kind of food groups of storage, block, file, object, and we're increasingly doing a lot of work in data protection and in transfer and distributed cloud environments, where the edge of the cloud is pushing outside the cloud regions themselves. But those are our products. Also, we spend a lot of time with our partners, 'cause Google's really good at building and open sourcing and partnering at the same time, hence with Veeam, and obviously with file. We partner with NetApp and Dell and a bunch of folks. So there are a lot of partnerships we have that are important to us as well. >> Yeah. You know, we didn't get into Kubernetes, a great example of open source, Istio, Anthos; we didn't talk about the on-prem stuff. So Brian, we'll have to have you back and chat about those things. >> I look forward to it. >> To quote my friend Matt Baker, it's not a zero-sum game out there, and it's great to see Google pushing the technology. Thanks so much for coming on. All right. And thank you for watching. Keep it right there. Our next guest will be up shortly. This is Dave Vellante for Dave Nicholson. We're live at VeeamON 2022 and we'll be right back. (soft beats music)

Published Date : May 18 2022
