Image Title

Search Results for each cycle:

Jon Turow, Madrona Venture Group | CloudNativeSecurityCon 23


 

(upbeat music) >> Hello and welcome back to theCUBE. We're here in Palo Alto, California. I'm your host, John Furrier with a special guest here in the studio. As part of our Cloud Native SecurityCon Coverage we had an opportunity to bring in Jon Turow who is the partner at Madrona Venture Partners formerly with AWS and to talk about machine learning, foundational models, and how the future of AI is going to be impacted by some of the innovation around what's going on in the industry. ChatGPT has taken the world by storm. A million downloads, fastest to the million downloads there. Before some were saying it's just a gimmick. Others saying it's a game changer. Jon's here to break it down, and great to have you on. Thanks for coming in. >> Thanks John. Glad to be here. >> Thanks for coming on. So first of all, I'm glad you're here. First of all, because two things. One, you were formerly with AWS, got a lot of experience running projects at AWS. Now a partner at Madrona, a great firm doing great deals, and they had this future at modern application kind of thesis. Now you are putting out some content recently around foundational models. You're deep into computer vision. You were the IoT general manager at AWS among other things, Greengrass. So you know a lot about data. You know a lot about some of this automation, some of the edge stuff. You've been in the middle of all these kind of areas that now seem to be the next wave coming. So I wanted to ask you what your thoughts are of how the machine learning and this new automation wave is coming in, this AI tools are coming out. Is it a platform? Is it going to be smarter? What feeds AI? What's your take on this whole foundational big movement into AI? What's your general reaction to all this? >> So, thanks, Jon, again for having me here. Really excited to talk about these things. AI has been coming for a long time. It's been kind of the next big thing. Always just over the horizon for quite some time. And we've seen really compelling applications in generations before and until now. Amazon and AWS have introduced a lot of them. My firm, Madrona Venture Group has invested in some of those early players as well. But what we're seeing now is something categorically different. That's really exciting and feels like a durable change. And I can try and explain what that is. We have these really large models that are useful in a general way. They can be applied to a lot of different tasks beyond the specific task that the designers envisioned. That makes them more flexible, that makes them more useful for building applications than what we've seen before. And so that, we can talk about the depths of it, but in a nutshell, that's why I think people are really excited. >> And I think one of the things that you wrote about that jumped out at me is that this seems to be this moment where there's been a multiple decades of nerds and computer scientists and programmers and data thinkers around waiting for AI to blossom. And it's like they're scratching that itch. Every year is going to be, and it's like the bottleneck's always been compute power. And we've seen other areas, genome sequencing, all kinds of high computation things where required high forms computing. But now there's no real bottleneck to compute. You got cloud. And so you're starting to see the emergence of a massive acceleration of where AI's been and where it needs to be going. Now, it's almost like it's got a reboot. It's almost a renaissance in the AI community with a whole nother macro environmental things happening. Cloud, younger generation, applications proliferate from mobile to cloud native. It's the perfect storm for this kind of moment to switch over. Am I overreading that? Is that right? >> You're right. And it's been cooking for a cycle or two. And let me try and explain why that is. We have cloud and AWS launch in whatever it was, 2006, and offered more compute to more people than really was possible before. Initially that was about taking existing applications and running them more easily in a bigger scale. But in that period of time what's also become possible is new kinds of computation that really weren't practical or even possible without that vast amount of compute. And so one result that came of that is something called the transformer AI model architecture. And Google came out with that, published a paper in 2017. And what that says is, with a transformer model you can actually train an arbitrarily large amount of data into a model, and see what happens. That's what Google demonstrated in 2017. The what happens is the really exciting part because when you do that, what you start to see, when models exceed a certain size that we had never really seen before all of a sudden they get what we call emerging capabilities of complex reasoning and reasoning outside a domain and reasoning with data. The kinds of things that people describe as spooky when they play with something like ChatGPT. That's the underlying term. We don't as an industry quite know why it happens or how it happens, but we can measure that it does. So cloud enables new kinds of math and science. New kinds of math and science allow new kinds of experimentation. And that experimentation has led to this new generation of models. >> So one of the debates we had on theCUBE at our Supercloud event last month was, what's the barriers to entry for say OpenAI, for instance? Obviously, I weighed in aggressively and said, "The barriers for getting into cloud are high because all the CapEx." And Howie Xu formerly VMware, now at ZScaler, he's an AI machine learning guy. He was like, "Well, you can spend $100 million and replicate it." I saw a quote that set up for 180,000 I can get this other package. What's the barriers to entry? Is ChatGPT or OpenAI, does it have sustainability? Is it easy to get into? What is the market like for AI? I mean, because a lot of entrepreneurs are jumping in. I mean, I just read a story today. San Francisco's got more inbound migration because of the AI action happening, Seattle's booming, Boston with MIT's been working on neural networks for generations. That's what we've found the answer. Get off the neural network, Boston jump on the AI bus. So there's total excitement for this. People are enthusiastic around this area. >> You can think of an iPhone versus Android tension that's happening today. In the iPhone world, there are proprietary models from OpenAI who you might consider as the leader. There's Cohere, there's AI21, there's Anthropic, Google's going to have their own, and a few others. These are proprietary models that developers can build on top of, get started really quickly. They're measured to have the highest accuracy and the highest performance today. That's the proprietary side. On the other side, there is an open source part of the world. These are a proliferation of model architectures that developers and practitioners can take off the shelf and train themselves. Typically found in Hugging face. What people seem to think is that the accuracy and performance of the open source models is something like 18 to 20 months behind the accuracy and performance of the proprietary models. But on the other hand, there's infinite flexibility for teams that are capable enough. So you're going to see teams choose sides based on whether they want speed or flexibility. >> That's interesting. And that brings up a point I was talking to a startup and the debate was, do you abstract away from the hardware and be software-defined or software-led on the AI side and let the hardware side just extremely accelerate on its own, 'cause it's flywheel? So again, back to proprietary, that's with hardware kind of bundled in, bolted on. Is it accelerator or is it bolted on or is it part of it? So to me, I think that the big struggle in understanding this is that which one will end up being right. I mean, is it a beta max versus VHS kind of thing going on? Or iPhone, Android, I mean iPhone makes a lot of sense, but if you're Apple, but is there an Apple moment in the machine learning? >> In proprietary models, here does seem to be a jump ball. That there's going to be a virtuous flywheel that emerges that, for example, all these excitement about ChatGPT. What's really exciting about it is it's really easy to use. The technology isn't so different from what we've seen before even from OpenAI. You mentioned a million users in a short period of time, all providing training data for OpenAI that makes their underlying models, their next generation even better. So it's not unreasonable to guess that there's going to be power laws that emerge on the proprietary side. What I think history has shown is that iPhone, Android, Windows, Linux, there seems to be gravity towards this yin and yang. And my guess, and what other people seem to think is going to be the case is that we're going to continue to see these two poles of AI. >> So let's get into the relationship with data because I've been emerging myself with ChatGPT, fascinated by the ease of use, yes, but also the fidelity of how you query it. And I felt like when I was doing writing SQL back in the eighties and nineties where SQL was emerging. You had to be really a guru at the SQL to get the answers you wanted. It seems like the querying into ChatGPT is a good thing if you know how to talk to it. Labeling whether your input is and it does a great job if you feed it right. If you ask a generic questions like Google. It's like a Google search. It gives you great format, sounds credible, but the facts are kind of wrong. >> That's right. >> That's where general consensus is coming on. So what does that mean? That means people are on one hand saying, "Ah, it's bullshit 'cause it's wrong." But I look at, I'm like, "Wow, that's that's compelling." 'Cause if you feed it the right data, so now we're in the data modeling here, so the role of data's going to be critical. Is there a data operating system emerging? Because if this thing continues to go the way it's going you can almost imagine as you would look at companies to invest in. Who's going to be right on this? What's going to scale? What's sustainable? What could build a durable company? It might not look what like what people think it is. I mean, I remember when Google started everyone thought it was the worst search engine because it wasn't a portal. But it was the best organic search on the planet became successful. So I'm trying to figure out like, okay, how do you read this? How do you read the tea leaves? >> Yeah. There are a few different ways that companies can differentiate themselves. Teams with galactic capabilities to take an open source model and then change the architecture and retrain and go down to the silicon. They can do things that might not have been possible for other teams to do. There's a company that that we're proud to be investors in called RunwayML that provides video accelerated, sorry, AI accelerated video editing capabilities. They were used in everything, everywhere all at once and some others. In order to build RunwayML, they needed a vision of what the future was going to look like and they needed to make deep contributions to the science that was going to enable all that. But not every team has those capabilities, maybe nor should they. So as far as how other teams are going to differentiate there's a couple of things that they can do. One is called prompt engineering where they shape on behalf of their own users exactly how the prompt to get fed to the underlying model. It's not clear whether that's going to be a durable problem or whether like Google, we consumers are going to start to get more intuitive about this. That's one. The second is what's called information retrieval. How can I get information about the world outside, information from a database or a data store or whatever service into these models so they can reason about them. And the third is, this is going to sound funny, but attribution. Just like you would do in a news report or an academic paper. If you can state where your facts are coming from, the downstream consumer or the human being who has to use that information actually is going to be able to make better sense of it and rely better on it. So that's prompt engineering, that's retrieval, and that's attribution. >> So that brings me to my next point I want to dig in on is the foundational model stack that you published. And I'll start by saying that with ChatGPT, if you take out the naysayers who are like throwing cold water on it about being a gimmick or whatever, and then you got the other side, I would call the alpha nerds who are like they can see, "Wow, this is amazing." This is truly NextGen. This isn't yesterday's chatbot nonsense. They're like, they're all over it. It's that everybody's using it right now in every vertical. I heard someone using it for security logs. I heard a data center, hardware vendor using it for pushing out appsec review updates. I mean, I've heard corner cases. We're using it for theCUBE to put our metadata in. So there's a horizontal use case of value. So to me that tells me it's a market there. So when you have horizontal scalability in the use case you're going to have a stack. So you publish this stack and it has an application at the top, applications like Jasper out there. You're seeing ChatGPT. But you go after the bottom, you got silicon, cloud, foundational model operations, the foundational models themselves, tooling, sources, actions. Where'd you get this from? How'd you put this together? Did you just work backwards from the startups or was there a thesis behind this? Could you share your thoughts behind this foundational model stack? >> Sure. Well, I'm a recovering product manager and my job that I think about as a product manager is who is my customer and what problem he wants to solve. And so to put myself in the mindset of an application developer and a founder who is actually my customer as a partner at Madrona, I think about what technology and resources does she need to be really powerful, to be able to take a brilliant idea, and actually bring that to life. And if you spend time with that community, which I do and I've met with hundreds of founders now who are trying to do exactly this, you can see that the stack is emerging. In fact, we first drew it in, not in January 2023, but October 2022. And if you look at the difference between the October '22 and January '23 stacks you're going to see that holes in the stack that we identified in October around tooling and around foundation model ops and the rest are organically starting to get filled because of how much demand from the developers at the top of the stack. >> If you look at the young generation coming out and even some of the analysts, I was just reading an analyst report on who's following the whole data stacks area, Databricks, Snowflake, there's variety of analytics, realtime AI, data's hot. There's a lot of engineers coming out that were either data scientists or I would call data platform engineering folks are becoming very key resources in this area. What's the skillset emerging and what's the mindset of that entrepreneur that sees the opportunity? How does these startups come together? Is there a pattern in the formation? Is there a pattern in the competency or proficiency around the talent behind these ventures? >> Yes. I would say there's two groups. The first is a very distinct pattern, John. For the past 10 years or a little more we've seen a pattern of democratization of ML where more and more people had access to this powerful science and technology. And since about 2017, with the rise of the transformer architecture in these foundation models, that pattern has reversed. All of a sudden what has become broader access is now shrinking to a pretty small group of scientists who can actually train and manipulate the architectures of these models themselves. So that's one. And what that means is the teams who can do that have huge ability to make the future happen in ways that other people don't have access to yet. That's one. The second is there is a broader population of people who by definition has even more collective imagination 'cause there's even more people who sees what should be possible and can use things like the proprietary models, like the OpenAI models that are available off the shelf and try to create something that maybe nobody has seen before. And when they do that, Jasper AI is a great example of that. Jasper AI is a company that creates marketing copy automatically with generative models such as GPT-3. They do that and it's really useful and it's almost fun for a marketer to use that. But there are going to be questions of how they can defend that against someone else who has access to the same technology. It's a different population of founders who has to find other sources of differentiation without being able to go all the way down to the the silicon and the science. >> Yeah, and it's going to be also opportunity recognition is one thing. Building a viable venture product market fit. You got competition. And so when things get crowded you got to have some differentiation. I think that's going to be the key. And that's where I was trying to figure out and I think data with scale I think are big ones. Where's the vulnerability in the stack in terms of gaps? Where's the white space? I shouldn't say vulnerability. I should say where's the opportunity, where's the white space in the stack that you see opportunities for entrepreneurs to attack? >> I would say there's two. At the application level, there is almost infinite opportunity, John, because almost every kind of application is about to be reimagined or disrupted with a new generation that takes advantage of this really powerful new technology. And so if there is a kind of application in almost any vertical, it's hard to rule something out. Almost any vertical that a founder wishes she had created the original app in, well, now it's her time. So that's one. The second is, if you look at the tooling layer that we discussed, tooling is a really powerful way that you can provide more flexibility to app developers to get more differentiation for themselves. And the tooling layer is still forming. This is the interface between the models themselves and the applications. Tools that help bring in data, as you mentioned, connect to external actions, bring context across multiple calls, chain together multiple models. These kinds of things, there's huge opportunity there. >> Well, Jon, I really appreciate you coming in. I had a couple more questions, but I will take a minute to read some of your bios for the audience and we'll get into, I won't embarrass you, but I want to set the context. You said you were recovering product manager, 10 plus years at AWS. Obviously, recovering from AWS, which is a whole nother dimension of recovering. In all seriousness, I talked to Andy Jassy around that time and Dr. Matt Wood and it was about that time when AI was just getting on the radar when they started. So you guys started seeing the wave coming in early on. So I remember at that time as Amazon was starting to grow significantly and even just stock price and overall growth. From a tech perspective, it was pretty clear what was coming, so you were there when this tsunami hit. >> Jon: That's right. >> And you had a front row seat building tech, you were led the product teams for Computer Vision AI, Textract, AI intelligence for document processing, recognition for image and video analysis. You wrote the business product plan for AWS IoT and Greengrass, which we've covered a lot in theCUBE, which extends out to the whole edge thing. So you know a lot about AI/ML, edge computing, IOT, messaging, which I call the law of small numbers that scale become big. This is a big new thing. So as a former AWS leader who's been there and at Madrona, what's your investment thesis as you start to peruse the landscape and talk to entrepreneurs as you got the stack? What's the big picture? What are you looking for? What's the thesis? How do you see this next five years emerging? >> Five years is a really long time given some of this science is only six months out. I'll start with some, no pun intended, some foundational things. And we can talk about some implications of the technology. The basics are the same as they've always been. We want, what I like to call customers with their hair on fire. So they have problems, so urgent they'll buy half a product. The joke is if your hair is on fire you might want a bucket of cold water, but you'll take a tennis racket and you'll beat yourself over the head to put the fire out. You want those customers 'cause they'll meet you more than halfway. And when you find them, you can obsess about them and you can get better every day. So we want customers with their hair on fire. We want founders who have empathy for those customers, understand what is going to be required to serve them really well, and have what I like to call founder-market fit to be able to build the products that those customers are going to need. >> And because that's a good strategy from an emerging, not yet fully baked out requirements definition. >> Jon: That's right. >> Enough where directionally they're leaning in, more than in, they're part of the product development process. >> That's right. And when you're doing early stage development, which is where I personally spend a lot of my time at the seed and A and a little bit beyond that stage often that's going to be what you have to go on because the future is going to be so complex that you can't see the curves beyond it. But if you have customers with their hair on fire and talented founders who have the capability to serve those customers, that's got me interested. >> So if I'm an entrepreneur, I walk in and say, "I have customers that have their hair on fire." What kind of checks do you write? What's the kind of the average you're seeing for seed and series? Probably seed, seed rounds and series As. >> It can depend. I have seen seed rounds of double digit million dollars. I have seen seed rounds much smaller than that. It really depends on what is going to be the right thing for these founders to prove out the hypothesis that they're testing that says, "Look, we have this customer with her hair on fire. We think we can build at least a tennis racket that she can use to start beating herself over the head and put the fire out. And then we're going to have something really interesting that we can scale up from there and we can make the future happen. >> So it sounds like your advice to founders is go out and find some customers, show them a product, don't obsess over full completion, get some sort of vibe on fit and go from there. >> Yeah, and I think by the time founders come to me they may not have a product, they may not have a deck, but if they have a customer with her hair on fire, then I'm really interested. >> Well, I always love the professional services angle on these markets. You go in and you get some business and you understand it. Walk away if you don't like it, but you see the hair on fire, then you go in product mode. >> That's right. >> All Right, Jon, thank you for coming on theCUBE. Really appreciate you stopping by the studio and good luck on your investments. Great to see you. >> You too. >> Thanks for coming on. >> Thank you, Jon. >> CUBE coverage here at Palo Alto. I'm John Furrier, your host. More coverage with CUBE Conversations after this break. (upbeat music)

Published Date : Feb 2 2023

SUMMARY :

and great to have you on. that now seem to be the next wave coming. It's been kind of the next big thing. is that this seems to be this moment and offered more compute to more people What's the barriers to entry? is that the accuracy and the debate was, do you that there's going to be power laws but also the fidelity of how you query it. going to be critical. exactly how the prompt to get So that brings me to my next point and actually bring that to life. and even some of the analysts, But there are going to be questions Yeah, and it's going to be and the applications. the radar when they started. and talk to entrepreneurs the head to put the fire out. And because that's a good of the product development process. that you can't see the curves beyond it. What kind of checks do you write? and put the fire out. to founders is go out time founders come to me and you understand it. stopping by the studio More coverage with CUBE

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AmazonORGANIZATION

0.99+

JonPERSON

0.99+

AWSORGANIZATION

0.99+

JohnPERSON

0.99+

John FurrierPERSON

0.99+

Andy JassyPERSON

0.99+

2017DATE

0.99+

January 2023DATE

0.99+

Jon TurowPERSON

0.99+

OctoberDATE

0.99+

18QUANTITY

0.99+

MITORGANIZATION

0.99+

$100 millionQUANTITY

0.99+

Palo AltoLOCATION

0.99+

10 plus yearsQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

GoogleORGANIZATION

0.99+

twoQUANTITY

0.99+

October 2022DATE

0.99+

hundredsQUANTITY

0.99+

MadronaORGANIZATION

0.99+

AppleORGANIZATION

0.99+

Madrona Venture PartnersORGANIZATION

0.99+

January '23DATE

0.99+

two groupsQUANTITY

0.99+

Matt WoodPERSON

0.99+

Madrona Venture GroupORGANIZATION

0.99+

180,000QUANTITY

0.99+

October '22DATE

0.99+

JasperTITLE

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

six monthsQUANTITY

0.99+

2006DATE

0.99+

million downloadsQUANTITY

0.99+

Five yearsQUANTITY

0.99+

SQLTITLE

0.99+

last monthDATE

0.99+

two polesQUANTITY

0.99+

firstQUANTITY

0.99+

Howie XuPERSON

0.99+

VMwareORGANIZATION

0.99+

thirdQUANTITY

0.99+

20 monthsQUANTITY

0.99+

GreengrassORGANIZATION

0.99+

Madrona Venture GroupORGANIZATION

0.98+

secondQUANTITY

0.98+

OneQUANTITY

0.98+

SupercloudEVENT

0.98+

RunwayMLTITLE

0.98+

San FranciscoLOCATION

0.98+

ZScalerORGANIZATION

0.98+

yesterdayDATE

0.98+

oneQUANTITY

0.98+

FirstQUANTITY

0.97+

CapExORGANIZATION

0.97+

eightiesDATE

0.97+

ChatGPTTITLE

0.96+

Dr.PERSON

0.96+

Jason Collier, AMD | VMware Explore 2022


 

(upbeat music) >> Welcome back to San Francisco, "theCUBE" is live, our day two coverage of VMware Explore 2022 continues. Lisa Martin with Dave Nicholson. Dave and I are pleased to welcome Jason Collier, principal member of technical staff at AMD to the program. Jason, it's great to have you. >> Thank you, it's great to be here. >> So what's going on at AMD? I hear you have some juicy stuff to talk about. >> Oh, we've got a ton of juicy stuff to talk about. Clearly the Project Monterey announcement was big for us, so we've got that to talk about. Another thing that I really wanted to talk about was a tool that we created and we call it, it's the VMware Architecture Migration Tool, call it VAMT for short. It's a tool that we created and we worked together with VMware and some of their professional services crew to actually develop this tool. And it is also an open source based tool. And really the primary purpose is to easily enable you to move from one CPU architecture to another CPU architecture, and do that in a cold migration fashion. >> So we're probably not talking about CPUs from Tandy, Radio Shack systems, likely this would be what we might refer to as other X86 systems. >> Other X86 systems is a good way to refer to it. >> So it's interesting timing for the development and the release of a tool like this, because in this sort of X86 universe, there are players who have been delayed in terms of delivering their next gen stuff. My understanding is AMD has been public with the idea that they're on track for by the end of the year, Genoa, next gen architecture. So can you imagine a situation where someone has an existing set of infrastructure and they're like, hey, you know what I want to get on board, the AMD train, is this something they can use from the VMware environment? >> Absolutely, and when you think about- >> Tell us exactly what that would look like, walk us through 100 servers, VMware, 1000 VMs, just to make the math easy. What do you do? How does it work? >> So one, there's several things that the tool can do, we actually went through, the design process was quite extensive on this. And we went through all of the planning phases that you need to go through to do these VM migrations. Now this has to be a cold migration, it's not a live migration. You can't do that between the CPU architectures. But what we do is you create a list of all of the virtual machines that you want to migrate. So we take this CSV file, we import this CSV file, and we ask for things like, okay, what's the name? Where do you want to migrate it to? So from one cluster to another, what do you want to migrate it to? What are the networks that you want to move it to? And then the storage platform. So we can move storage, it could either be shared storage, or we could move say from VSAN to VSAN, however you want to set it up. So it will do those storage migrations as well. And then what happens is it's actually going to go through, it's going to shut down the VM, it's going to take a snapshot, it is going to then basically move the compute and/or storage resources over. And once it does that, it's going to power 'em back up. And it's going to check, we've got some validation tools, where it's going to make sure VM Tools comes back up where everything is copacetic, it didn't blue screen or anything like that. And once it comes back up, then everything's good, it moves onto the next one. Now a couple of things that we've got feature wise, we built into it. You can parallelize these tasks. So you can say, how many of these machines do you want to do at any given time? So it could be, say 10 machines, 50 machines, 100 machines at a time, that you want to go through and do this move. Now, if it did blue screen, it will actually roll it back to that snapshot on the origin cluster. So that there is some protection on that. A couple other things that are actually in there are things like audit tracking. So we do full audit logging on this stuff, we take a snapshot, there's basically kind of an audit trail of what happens. There's also full logging, SYS logging, and then also we'll do email reporting. So you can say, run this and then shoot me a report when this is over. Now, one other cool thing is you can also actually define a change window. So I don't want to do this in the middle of the afternoon on a Tuesday. So I want to do this later at night, over the weekend, you can actually just queue this up, set it, schedule it, it'll run. You can also define how long you want that change window to be. And what it'll do, it'll do as many as it can, then it'll effectively stop, finish up, clean up the tasks and then send you a report on what all was successfully moved. >> Okay, I'm going to go down the rabbit hole a little bit on this, 'cause I think it's important. And if I say something incorrect, you correct me. >> No problem. >> In terms of my technical understanding. >> I got you. >> So you've got a VM, essentially a virtual machine typically will consist of an entire operating system within that virtual machine. So there's a construct that containerizes, if you will, the operating system, what is the difference, where is the difference in the instruction set? Where does it lie? Is it in the OS' interaction with the CPU or is it between the construct that is the sort of wrapper around the VM that is the difference? >> It's really primarily the OS, right? And we've not really had too many issues doing this and most of the time, what is going to happen, that OS is going to boot up, it's going to recognize the architecture that it's on, it's going to see the underlying architecture, and boot up. All the major operating systems that we test worked fine. I mean, typically they're going to work on all the X86 platforms. But there might be instruction sets that are kind of enabled in one architecture that may not be in another architecture. >> And you're looking for that during this process. >> Well usually the OS itself is going to kind of detect that. So if it pops up, the one thing that is kind of a caution that you need to look for. If you've got an application that's explicitly using an instruction set that's on one CPU vendor and not the other CPU vendor. That's the one thing where you're probably going to see some application differences. That said, it'll probably be compatible, but you may not get that instruction set advantage in it. >> But this tool remediates against that. >> Yeah, and what we do, we're actually using VM Tools itself to go through and validate a lot of those components. So we'll look and make sure VM Tools is enabled in the first place, on the source system. And then when it gets to the destination system, we also look at VM Tools to see what is and what is not enabled. >> Okay, I'm going to put you on the spot here. What's the zinger, where doesn't it work? You already said cold, we understand, you can schedule for cold migrations, that's not a zinger. What's the zinger, where doesn't it work? >> It doesn't work like, live migrations just don't work. >> No live, okay, okay, no live. What about something else? What's the oh, you've got that version, you've got that version of X86 architecture, it-won't work, anything? >> A majority of those cases work, where it would fail, where it's going to kick back and say, hey, VM Tools is not installed. So where you would see this is if you're running a virtual appliance from some vendor, like insert vendor here that say, got a firewall, or got something like that, and they don't have VM Tools enabled. It's going to fail it out of the gate, and say, hey, VM Tools is not on this, you might want to manually do it. >> But you can figure out how to fix that? >> You can figure out how to do that. You can also, and there's a flag in there, so in kind of the options that you give it, you say, ignore VM Tools, don't care, move it anyway. So if you've got less, some VMs that are in there, but they're not a priority VM, then it's going to migrate just fine. >> Got It. >> Can you elaborate a little bit on the joint development work that AMD and VMware are doing together and the value in it for customers? >> Yeah, so it's one of those things we worked with VMware to basically produce this open source tool. So we did a lot of the core component and design and we actually engaged VMware Professional Services. And a big shout out to Austin Browder. He helped us a ton in this project specifically. And we basically worked, we created this, kind of co-designed, what it was going to look like. And then jointly worked together on the coding, of pulling this thing together. And then after that, and this is actually posted up on VMware's public repos now in GitHub. So you can go to GitHub, you can go to the VMware samples code, and you can download this thing that we've created. And it's really built to help ease migrations from one architecture to another. So if you're looking for a big data center move and you got a bunch of VMs to move. I mean, even if it's same architecture to same architecture, it's definitely going to ease the pain of going through and doing a migration of, it's one thing when you're doing 10 machines, but when you're doing 10,000 virtual machines, that's a different story. It gets to be quite operationally inefficient. >> I lose track after three. >> Yeah. >> So I'm good for three, not four. >> I was going to ask you what your target market segment is here. Expand on that a little bit and talk to me about who you're working with and those organizations. >> So really this is targeted toward organizations that have large deployments in enterprise, but also I think this is a big play with channel partners as well. So folks out there in the channel that are doing these migrations and they do a lot of these, when you're thinking about the small and mid-size organizations, it's a great fit for that. Especially if they're kind of doing that upgrade, the lift and shift upgrade, from here's where you've been five to seven years on an architecture and you want to move to a new architecture. This is really going to help. And this is not a point and click GUI kind of thing. It's command line driven, it's using PowerShell, we're using PowerCLI to do the majority of this work. And for channel partners, this is an excellent opportunity to put the value and the value add and VAR, And there's a lot of opportunity for, I think, channel partners to really go and take this. And once again, being open source. We expect this to be extensible, we want the community to contribute and put back into this to basically help grow it and make it a more useful tool for doing these cold migrations between CPU architectures. >> Have you seen any in the last couple of years of dynamics, obviously across the world, any industries in particular that are really leading edge for what you guys are doing? >> Yeah, that's really, really interesting. I mean, we've seen it, it's honestly been a very horizontal problem, pretty much across all vertical markets. I mean, we've seen it in financial services, we've seen it in, honestly, pretty much across the board. Manufacturing, financial services, healthcare, we have seen kind of a strong interest in that. And then also we we've actually taken this and presented this to some of our channel partners as well. And there's been a lot of interest in it. I think we presented it to about 30 different channel partners, a couple of weeks back about this. And I got contact from 30 different channel partners that said they're interested in basically helping us work on it. >> Tagging on to Lisa's question, do you have visibility into the AMD thought process around the timing of your next gen release versus others that are competitors in the marketplace? How you might leverage that in terms of programs where partners are going out and saying, hey, perfect time, you need a refresh, perfect time to look at AMD, if you haven't looked at them recently. Do you have any insight into that in what's going on? I know you're focused on this area. But what are your thoughts on, well, what's the buzz? What's the buzz inside AMD on that? >> Well, when you look overall, if you look at the Gartner Hype Cycle, when VMware was being broadly adopted, when VMware was being broadly adopted, I'm going to be blunt, and I'm going to be honest right here, AMD didn't have a horse in the race. And the majority of those VMware deployments we see are not running on AMD. Now that said, there's an extreme interest in the fact that we've got these very cored in systems that are now coming up on, now you're at that five to seven year refresh window of pulling in new hardware. And we have extremely attractive hardware when it comes to running virtualized workloads. The test cluster that I'm running at home, I've got that five to seven year old gear, and I've got some of the, even just the Milan systems that we've got. And I've got three nodes of another architecture going onto AMD. And when I got these three nodes completely maxed to the number of VMs that I can run on 'em, I'm at a quarter of the capacity of what I'm putting on the new stuff. So what you get is, I mean, we worked the numbers, and it's definitely, it's like a 30% decrease in the amount of resources that you need. >> That's a compelling number. >> It's a compelling number. >> 5%, 10%, nobody's going to do anything for that. You talk 30%. >> 30%. It's meaningful, it's meaningful. Now you you're out of Austin, right? >> Yes. >> So first thing I thought of when you talk about running clusters in your home is the cost of electricity, but you're okay. >> I'm okay. >> You don't live here, you don't live here, you don't need to worry about that. >> I'm okay. >> Do you have a favorite customer example that you think really articulates the value of AMD when you're in customer conversations and they go, why AMD and you hit back with this? >> Yeah. Actually it's funny because I had a conversation like that last night, kind of random person I met later on in the evening. We were going through this discussion and they were facing exactly this problem. They had that five to seven year infrastructure. It's funny, because the guy was a gamer too, and he's like, man, I've always been a big AMD fan, I love the CPUs all the way since back in basically the Opterons and Athlons right. He's like, I've always loved the AMD systems, loved the graphics cards. And now with what we're doing with Ryzen and all that stuff. He's always been a big AMD fan. He's like, and I'm going through doing my infrastructure refresh. And I told him, I'm just like, well, hey, talk to your VAR and have 'em plug some AMD SKUs in there from the Dells, HPs and Lenovos. And then we've got this tool to basically help make that migration easier on you. And so once we had that discussion and it was great, then he swung by the booth today and I was able to just go over, hey, this is the tool, this is how you use it, here's all the info. Call me if you need any help. >> Yeah, when we were talking earlier, we learned that you were at Scale. So what are you liking about AMD? How does that relate? >> The funny thing is this is actually the first time in my career that I've actually had a job where I didn't work for myself. I've been doing venture backed startups the last 25 years and we've raised couple hundred million dollars worth of investment over the years. And so one, I figured, here I am going to AMD, a larger corporation. I'm just like, am I going to be able to make it a year? And I have been here longer than a year and I absolutely love it. The culture at AMD is amazing. We still have that really, I mean, almost it's like that underdog mentality within the organization. And the team that I'm working with is a phenomenal team. And it's actually, our EVP and our Corp VP, were actually my executive sponsors, we were at a prior company. They were one of my executive sponsors when I was at Scale. And so my now VP boss calls me up and says, hey, I'm putting a band together, are you interested? And I was kind of enjoying a semi-retirement lifestyle. And then I'm just like, man, because it's you, yes, I am interested. And the group that we're in, the work that we're doing, the way that we're really focusing on forward looking things that are affecting the data center, what's going to be the data center like three to five years from now. It's exciting, and I am having a blast, I'm having the time of my life. I absolutely love it. >> Well, that relationship and the trust that you will have with each other, that bleeds into the customer conversations, the partner conversations, the employee conversations, it's all inextricably linked. >> Yes it is. >> And we want to know, you said three to five years out, like what? Like what? Just general futurist stuff, where do you think this is going. >> Well, it's interesting. >> So moon collides with the earth in 2025, we already know that. >> So we dialed this back to the Pensando acquisition. When you look at the Pensando acquisition and you look at basically where data centers are today, but then you look at where basically the big hyperscalers are. You look at an AWS, you look at their architecture, you specifically wrap Nitro around that, that's a very different architecture than what's being run in the data center. And when you look at what Pensando does, that's a lot of starting to bring what these real clouds out there, what these big hyperscalers are running into the grasps of the data center. And so I think you're going to see a fundamental shift. The next 10 years are going to be exciting because the way you look at a data center now, when you think of what CPUs do, what shared storage, how the networking is all set up, it ain't going to look the same. >> Okay, so the competing vision with that, to play devil's advocate, would be DPUs are kind of expensive. Why don't we just use NICs, give 'em some more bandwidth, and use the cheapest stuff. That's the competing vision. >> That could be. >> Or the alternative vision, and I imagine everything else we've experienced in our careers, they will run in parallel paths, fit for function. >> Well, parallel paths always exist, right? Otherwise, 'cause you know how many times you've heard mainframe's dead, tape's dead, spinning disk is dead. None of 'em dead, right? The reality is you get to a point within an industry where it basically goes from instead of a growth curve like that, it goes to a growth curve of like that, it's pretty flat. So from a revenue growth perspective, I don't think you're going to see the revenue growth there. I think you're going to see the revenue growth in DPUs. And when you actually take, they may be expensive now, but you look at what Monterey's doing and you look at the way that those DPUs are getting integrated in at the OEM level. It's going to be a part of it. You're going to order your VxRail and VSAN style boxes, they're going to come with them. It's going to be an integrated component. Because when you start to offload things off the CPU, you've driven your overall utilization up. When you don't have to process NSX on basically the X86, you've just freed up cores and a considerable amount of them. And you've also moved that to where there's a more intelligent place for that pack to be processed right, out here on this edge. 'Cause you know what, that might not need to go into the host bus at all. So you have just alleviated any transfers over a PCI bus, over the PCI lanes, into DRAM, all of these components, when you're like, but all to come with, oh, that bit needs to be on this other machine. So now it's coming in and it's making that decision there. And then you take and integrate that into things like the Aruba Smart Switch, that's running the Pensando technology. So now you got top of rack that is already making those intelligent routing decisions on where packets really need to go. >> Jason, thank you so much for joining us. I know you guys could keep talking. >> No, I was going to say, you're going to have to come back. You're going to have to come back. >> We've just started to peel the layers of the onion, but we really appreciate you coming by the show, talking about what AMD and VMware are doing, what you're enabling customers to achieve. Sounds like there's a lot of tailwind behind you. That's awesome. >> Yeah. >> Great stuff, thank you. >> It's a great time to be at AMD, I can tell you that. >> Oh, that's good to hear, we like it. Well, thank you again for joining us, we appreciate it. For our guest and Dave Nicholson, I'm Lisa Martin. You're watching "theCUBE Live" from San Francisco, VMware Explore 2022. We'll be back with our next guest in just a minute. (upbeat music)

Published Date : Aug 31 2022

SUMMARY :

Jason, it's great to have you. I hear you have some to easily enable you to move So we're probably good way to refer to it. and the release of a tool like this, 1000 VMs, just to make the math easy. And it's going to check, we've Okay, I'm going to In terms of my that is the sort of wrapper and most of the time, that during this process. that you need to look for. in the first place, on the source system. What's the zinger, where doesn't it work? It doesn't work like, live What's the oh, you've got that version, So where you would see options that you give it, And a big shout out to Austin Browder. I was going to ask you what and the value add and VAR, and presented this to some of competitors in the marketplace? in the amount of resources that you need. nobody's going to do anything for that. Now you you're out of Austin, right? is the cost of electricity, you don't live here, you don't They had that five to So what are you liking about AMD? that are affecting the data center, Well, that relationship and the trust where do you think this is going. we already know that. because the way you look Okay, so the competing Or the alternative vision, And when you actually take, I know you guys could keep talking. You're going to have to come back. peel the layers of the onion, to be at AMD, I can tell you that. Oh, that's good to hear, we like it.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave NicholsonPERSON

0.99+

Lisa MartinPERSON

0.99+

Jason CollierPERSON

0.99+

Dave NicholsonPERSON

0.99+

LisaPERSON

0.99+

50 machinesQUANTITY

0.99+

10 machinesQUANTITY

0.99+

JasonPERSON

0.99+

10 machinesQUANTITY

0.99+

100 machinesQUANTITY

0.99+

DavePERSON

0.99+

AMDORGANIZATION

0.99+

AustinLOCATION

0.99+

San FranciscoLOCATION

0.99+

San FranciscoLOCATION

0.99+

fiveQUANTITY

0.99+

threeQUANTITY

0.99+

100 serversQUANTITY

0.99+

seven yearQUANTITY

0.99+

theCUBE LiveTITLE

0.99+

10,000 virtual machinesQUANTITY

0.99+

LenovosORGANIZATION

0.99+

30%QUANTITY

0.99+

2025DATE

0.99+

AWSORGANIZATION

0.99+

fourQUANTITY

0.99+

oneQUANTITY

0.99+

10%QUANTITY

0.99+

30 different channel partnersQUANTITY

0.99+

five yearsQUANTITY

0.99+

earthLOCATION

0.99+

5%QUANTITY

0.99+

1000 VMsQUANTITY

0.99+

DellsORGANIZATION

0.99+

GitHubORGANIZATION

0.99+

seven yearsQUANTITY

0.98+

Austin BrowderPERSON

0.98+

a yearQUANTITY

0.98+

TandyORGANIZATION

0.98+

Radio ShackORGANIZATION

0.98+

VMwareORGANIZATION

0.98+

MontereyORGANIZATION

0.98+

todayDATE

0.97+

HPsORGANIZATION

0.97+

first timeQUANTITY

0.97+

TuesdayDATE

0.97+

ScaleORGANIZATION

0.97+

VM ToolsTITLE

0.97+

one thingQUANTITY

0.96+

last nightDATE

0.96+

about 30 different channel partnersQUANTITY

0.95+

firstQUANTITY

0.95+

AthlonsCOMMERCIAL_ITEM

0.95+

VxRailCOMMERCIAL_ITEM

0.95+

X86TITLE

0.94+

PensandoORGANIZATION

0.94+

VMware Explore 2022TITLE

0.94+

RyzenCOMMERCIAL_ITEM

0.94+

five yearsQUANTITY

0.93+

Mark Roberge, Stage 2 Capital & Paul Fifield, Sales Impact Academy | CUBEconversation


 

(gentle upbeat music) >> People hate to be sold, but they love to buy. We become what we think about, think, and grow rich. If you want to gather honey, don't kick over the beehive. The world is replete with time-tested advice and motivational ideas for aspiring salespeople, Dale Carnegie, Napoleon Hill, Norman Vincent Peale, Earl Nightingale, and many others have all published classics with guidance that when followed closely, almost always leads to success. More modern personalities have emerged in the internet era, like Tony Robbins, and Gary Vaynerchuk, and Angela Duckworth. But for the most part, they've continued to rely on book publishing, seminars, and high value consulting to peddle their insights and inspire action. Welcome to this video exclusive on theCUBE. This is Dave Vellante, and I'm pleased to welcome back Professor Mark Roberge, who is one of the Managing Directors at Stage 2 Capital, and Paul Fifield, who's the CEO and Co-Founder of Sales Impact Academy. Gentlemen, welcome. Great to see you. >> You too Dave and thanks. >> All right, let's get right into it. Paul, you guys are announcing today a $4 million financing round. It comprises $3 million in a seed round led by Stage 2 and a million dollar in debt financing. So, first of all, congratulations. Paul, why did you start Sales Impact Academy? >> Cool, well, I think my background is sort of two times CRO, so I've built two reasonably successful companies. Built a hundred plus person teams. And so I've got kind of this firsthand experience of having to learn literally everything on the job whilst delivering these very kind of rapid, like achieving these very rapid growth targets. And so when I came out of those two journeys, I literally just started doing some voluntary teaching in and around London where I now live. I spend a bunch of time over in New York, and literally started this because I wanted to sort of kind of give back, but just really wanted to start helping people who were just really, really struggling in high pressure environments. And that's both leadership from sense of revenue leadership people, right down to sort of frontline SDRs. And I think as I started just doing this voluntary teaching, I kind of realized that actually the sort of global education system has done is a massive, massive disservice, right? I actually call it the greatest educational travesty of the last 50 years, where higher education has entirely overlooked sales as a profession. And the knock-on consequences of that have been absolutely disastrous for our profession. Partly that the profession is seen as a bit sort of embarrassing to be a part of. You kind of like go get a sales job if you can't get a degree. But more than that, the core fundamental within revenue teams and within sales people is now completely lacking 'cause there's no structured formal kind of like learning out there. So that's really the problem we're trying to solve on the kind of like the skill side. >> Great. Okay. And mark, always good to have you on, and I got to ask you. So even though, I know this is the wheelhouse for you and your partners, and of course, you've got a deep bench of LPs, but lay out the investment thesis here. What's the core problem that you saw and how are you looking at the market? >> Yeah, sure, Dave. So this one was a special one for me. We've spoken in the past. I mean, just personally I've always had a similar passion to Paul that it's amazing how important sales execution is to all companies, nevermind just the startup ecosystem. And I've always personally been motivated by anything that can help the startup ecosystem increase their success. Part of why I teach at Harvard and try to change some of the stuff that Paul's talking about, which is like, it's amazing how little education is done around sales. But in this particular one, not only personally was I excited about, but from a fun perspective, we've got to look at the economic outcomes. And we've been thinking a lot about the sales tech stack. It's evolved a ton in the last couple of decades. We've gone from the late '90s where every sales VP was just, they had a thing called the CRM that none of their reps even used, right? And we've come so far in 20 years, we've got all these amazing tools that help us cold call, that help us send emails efficiently and automatically and track everything, but nothing's really happened on the education side. And that's really the enormous gap that we've seen is, these organizations being much more proactive around adopting technology that can prove sales execution, but nothing on the education side. And the other piece that we saw is, it's almost like all these companies are reinventing the wheel of looking in the upcoming year, having a dozen sales people to hire, and trying to put together a sales enablement program within their organization to teach salespeople sales 101. Like how to find a champion, how to develop a budget, how to develop sense of urgency. And what Paul and team can do in the first phase of essay, is can sort of centralize that, so that all of these organizations can benefit from the best content and the best instructors for their team. >> So Paul, exactly, thank you, mark. Exactly what do you guys do? What do you sell? I'm curious, is this sort of, I'm thinking in my head, is this E-learning, is it really part of the sales stack? Maybe you could help us understand that better. >> Well, I think this problem of having to upscale teams has been around like forever. And kind of going back to the kind of education problem, it's what's wild is that we would never accept this of our lawyers, our accountants, or HR professionals. Imagine like someone in your finance team arriving on day one and they're searching YouTube to try and work out how to like put a balance sheet together. So it's a chronic, chronic problem. And so the way that we're addressing this, and I think the problem is well understood, but there's always been a terrible market, sort of product market fit for how the problem gets solved. So as mark was saying, typically it's in-house revenue leaders who themselves have got massive gaps in their knowledge, hack together some internal learning that is just pretty poor, 'cause it's not really their skillset. The other alternative is bringing in really expensive consultants, but they're consultants with a very single worldview and the complexity of a modern revenue organization is very, very high these days. And so one consultant is not going to really kind of like cover every topic you need. And then there's the kind of like fairly old fashioned sales training companies that just come in, one big hit, super expensive and then sort of leave again. So the sort of product market fit to solve, has always been a bit pretty bad. So what we've done is we've created a subscription model. We've essentially productized skills development. The way that we've done that is we teach live instruction. So one of the big challenges Andreessen Horowitz put a post out around this so quite recently, one of the big problems of online learning is that this kind of huge repository of online learning, which puts all the onus on the learner to have the discipline to go through these courses and consume them in an on-demand way is actually they're pretty ineffective. We see sort of completion rates of like 7 to 8%. So we've always gone from a live instruction model. So the sort of ingredients are the absolute very best people in the world in their very specific skill teaching live classes just two hours per week. So we're not overwhelming the learners who are already in work, and they have targets, and they've got a lot of pressure. And we have courses that last maybe four to like 12 hours over two to sort of six to seven weeks. So highly practical live instruction. We have 70, 80, sometimes even 90% completion rates of the sort of live class experience, and then teams then rapidly put that best practice into practice and see amazing results in things like top of funnel, or conversion, or retention. >> So live is compulsory and I presume on-demand? If you want to refresh you have an on demand option? >> Yeah, everything's recorded, so you can kind of catch up on a class if you've missed it, But that live instruction is powerful because it's kind of in your calendar, right? So you show up. But the really powerful thing, actually, is that entire teams within companies can actually learn at exactly the same pace. So we teach it eight o'clock Pacific, 11 o'clock Eastern, >> 4: 00 PM in the UK, and 5:00 PM Europe. So your entire European and North American teams can literally learn in the same class with a world-class expert, like a Mark, or like a Kevin Dorsey, or like Greg Holmes from Zoom. And you're learning from these incredible people. Class finishes, teams can come back together, talk about this incredible best practice they've just learned, and then immediately put it into practice. And that's where we're seeing these incredible, kind of almost instant impact on performance at real scale. >> So, Mark, in thinking about your investment, you must've been thinking about, okay, how do we scale this thing? You've got an instructor component, you've got this live piece. How are you thinking about that at scale? >> Yeah, there's a lot of different business model options there. And I actually think multiple of them are achievable in the longer term. That's something we've been working with Paul quite a bit, is like, they're all quite compelling. So just trying to think about which two to start with. But I think you've seen a lot of this in education models today. Is a mixture of on-demand with prerecorded. And so I think that will be the starting point. And I think from a scalability standpoint, we were also, we don't always try to do this with our investments, but clearly our LP base or limited partner base was going to be a key ingredient to at least the first cycle of this business. You know, our VC firm's backed by over 250 CRO CMOs heads of customer success, all of which are prospective instructors, prospective content developers, and prospective customers. So that was a little nicety around the scale and investment thesis for this one. >> And what's in it for them? I mean, they get paid. Obviously, you have a stake in the game, but what's in it for the instructors. They get paid on a sort of a per course basis? How does that model work? >> Yeah, we have a development fee for each kind of hour of teaching that gets created So we've mapped out a pretty significant curriculum. And we have about 250 hours of life teaching now already written. We actually think it's going to be about 3000 hours of learning before you get even close to a complete curriculum for every aspect of a revenue organization from revenue operations, to customer success, to marketing, to sales, to leadership, and management. But we have a development fee per class, and we have a teaching fee as well. >> Yeah, so, I mean, I think you guys, it's really an underserved market, and then when you think about it, most organizations, they just don't invest in training. And so, I mean, I would think you'd want to take it, I don't know what the right number is, 5, 10% of your sales budget and actually put it on this and the return would be enormous. How do you guys think about the market size? Like I said before, is it E-learning, is it part of the CRM stack? How do you size this market? >> Well, I think for us it's service to people. A highly skilled sales rep with an email address, a phone and a spreadsheet would do really well, okay? You don't need this world-class tech stack to do well in sales. You need the skills to be able to do the job. But the reverse, that's not true, right? An unskilled person with a world-class tech stack won't do well. And so fundamentally, the skill level of your team is the number one most important thing to get right to be successful in revenue. But as I said before, the product market for it to solve that problem, has been pretty terrible. So we see ourselves 100%. And so if you're looking at like a com, you look at Gong, who we've just signed as a customer, which is fantastic. Gong has a technology that helps salespeople do better through call recording. You have Outreach, who is also a customer. They have technologies that help SDRs be more efficient in outreach. And now you have Sales Impact Academy, and we help with skills development of your team, of the entirety of your revenue function. So we absolutely see ourselves as a key part of that stack. In terms of the TAM, 60 million people in sales are on, according to LinkedIn. You're probably talking 150 million people in go to market to include all of the different roles. 50% of the world's companies are B2B. The TAM is huge. But what blows my mind, and this kind of goes back to this why the global education system has overlooked this because essentially if half the world's companies are B2B, that's probably a proxy for the half of the world's GDP, Half of the world's economic growth is relying on the revenue function of half the world's companies, and they don't really know what they're doing, (laughs) which is absolutely staggering. And if we can solve that in a meaningfully meaningful way at massive scale, then the impact should be absolutely enormous. >> So, Mark, no lack of TAM. I know that you guys at Stage 2, you're also very much focused on the metrics. You have a fundamental philosophy that your product market fit and retention should come before hyper growth. So what were the metrics that enticed you to make this investment? >> Yeah, it's a good question, Dave, 'cause that's where we always look first, which I think is a little different than most early stage investors. There's a big, I guess, meme, triple, triple, double, double that's popular in Silicon Valley these days, which refers to triple your revenue in year one, triple your revenue in year two, double in year three, and four, and five. And that type of a hyper growth is critical, but it's often jumped too quickly in our opinion. That there's a premature victory called on product market fit, which kills a larger percentage of businesses than is necessary. And so with all our investments, we look very heavily first at user engagement, any early indicators of user retention. And the numbers were just off the charts for SIA in terms of the customers, in terms of the NPS scores that they were getting on their sessions, in terms of the completion rate on their courses, in terms of the customers that started with a couple of seats and expanded to more seats once they got a taste of the program. So that's where we look first as a strong foundation to build a scalable business, and it was off the charts positive for SIA. >> So how about the competition? If I Google sales training software, I'll get like dozens of companies. Lessonly, and MindTickle, or Brainshark will come up, that's not really a fit. So how do you think about the competition? How are you different? >> Yeah, well, one thing we try and avoid is any reference to sales training, 'cause that really sort of speaks to this very old kind of fashioned way of doing this. And I actually think that from a pure pedagogy perspective, so from a pure learning design perspective, the old fashioned way of doing sales training was pull a whole team off site, usually in a really terrible hotel with no windows for a day or two. And that's it, that's your learning experience. And that's not how human beings learn, right? So just even if the content was fantastic, the learning experience was so terrible, it was just very kind of ineffective. So we sort of avoid kind of like sales training, The likes of MindTickle, we're actually talking to them at the moment about a partnership there. They're a platform play, and we're certainly building a platform, but we're very much about the live instruction and creating the biggest curriculum and the broadest curriculum on the internet, in the world, basically, for revenue teams. So the competition is kind of interesting 'cause there is not really a direct subscription-based live like learning offering out there. There's some similar ish companies. I honestly think at the moment it's kind of status quo. We're genuinely creating a new category of in-work learning for revenue teams. And so we're in this kind of semi and sort of evangelical sort of phase. So really, status quo is one of the biggest sort of competitors. But if you think about some of those old, old fashioned sort of Miller Heimans, and then perhaps even like Sandlers, there's an analogy perhaps here, which is kind of interesting, which is a little bit like Siebel and Salesforce in the sort of late '90s, where in Siebel you have this kind of old way of doing things. It was a little bit ineffective. It was really expensive. Not accessible to a huge space of the market. And Salesforce came along and said, "Hey, we're going to create this cool thing. It's going to be through the browser, it's going to be accessible to everyone, and it's going to be really, really effective." And so there's some really kind of interesting parallels almost between like Siebel and Salesforce and what we're doing to completely kind of upend the sort of the old fashioned way of delivering sort of sales training, if you like. >> And your target customer profile is, you're selling to teams, right? B2B teams, right? It's not for individuals. Is that correct, Paul? >> Currently. Yeah, yeah. So currently we've got a big foothold in series A to series B. So broadly speaking out, our target market currently is really fast growth technology companies. That's the sector that we're really focusing on. We've got a very good strong foothold in series A series B companies. We've now won some much larger later stage companies. We've actually even won a couple of corporates, I can't say names yet, but names that are very, very, very familiar and we're incredibly excited by them, which could end up being thousand plus seat deals 'cause we do this on a per seat basis. But yeah, very much at the moment it's fast growth tech companies, and we're sort of moving up the chain towards enterprise. >> And how do you deal with the sort of maturity curve, if you will, of your students? You've got some that are brand new, just fresh out of school. You've got others that are more seasoned. What do you do, pop them into different points of the curriculum? How do you handle it? >> Yeah we have, I'll say we have about 30 courses right now. We have about another 15 in development where post this fundraise, we want to be able to get to around about 20 courses that we're developing every quarter and getting out to market. So we're literally, we've sort of identified about 20 to 25 key roles across everything within revenue. That's, let's say revenue ops, customer success, account management, sales, engineering, all these different kinds of roles. And we are literally plotting the sort of skills development for these individuals over multiple, multiple years. And I think what we've never ceases to amaze me is actually the breadth of learning in revenue is absolutely enormous. And what kind of just makes you laugh is, this is all of this knowledge that we're now creating it's what companies just hope that their teams somehow acquire through osmosis, through blogs, through events. And it's just kind of crazy that there is... It's absolutely insane that we don't already exist, basically. >> And if I understand it correctly, just from looking at your website, you've got the entry level package. I think it's up to 15 seats, and then you scale up from there, correct? Is it sort of as a seat-based license model? >> Yeah, it's a seat-based model, as Mark mentioned. In some cases we sell, let's say 20 or $30,000 deal out the gate and that's most of the team. That will be maybe a series A, series B deal, but then we've got these land and expand models that are working tremendously well. We have seven, eight customers in Q1 that have doubled their spend Q2. That's the impact that they're seeing. And our net revenue retention number for Q2 is looking like it's going to be 177% to think exceeds companies like Snowflakes. Well, our underlying retention metrics, because people are seeing this incredible impact on teams and performance, is really, really strong. >> That's a nice metric compare with Snowflake (Paul laughs) It's all right. (Dave and Paul laugh) >> So, Mark, this is a larger investment for Stage 2 You guys have been growing and sort of upping your game. And maybe talk about that a little bit. >> Yeah, we're in the middle of Fund II right now. So, Fund I was in 2018. We were doing smaller checks. It was our first time out of the gate. The mission has really taken of, our LP base has really taken off. And so this deal looks a lot like more like our second fund. We'll actually make an announcement in a few weeks now that we've closed that out. But it's a much larger fund and our first investments should be in that 2 to $3 million range. >> Hey, Paul, what are you going to do with the money? What are the use of funds? >> Put it on black, (chuckles) we're going to like- (Dave laughs) >> Saratoga is open. (laughs) (Mark laughs) >> We're going to, look, the curriculum development for us is absolutely everything, but we're also going to be investing in building our own technology platform as well. And there are some other really important aspects to the kind of overall offering. We're looking at building an assessment tool so we can actually kind of like start to assess skills across teams. We certify every course has an exam, so we want to get more robust around the certification as well, because we're hoping that our certification becomes the global standard in understanding for the first time in the industry what individual competencies and skills people have, which will be huge. So we have a broad range of things that we want to start initiating now. But I just wanted to quickly say Stage 2 has been nothing short of incredible in every kind of which way. Of course, this investment, the fit is kind of insane, but the LPs have been extraordinary in helping. We've got a huge number of them are now customers very quickly. Mark and the team are helping enormously on our own kind of like go to market and metrics. I've been doing this for 20 years. I've raised over 100 million myself in venture capital. I've never known a venture capital firm with such value add like ever, or even heard of other people getting the kind of value add that we're getting. So I just wanted to a quick shout out for Stage 2. >> Quite a testimony of you guys. Definitely Stage 2 punches above its weight. Guys, we'll leave it there. Thanks so much for coming on. Good luck and we'll be watching. Appreciate your time. >> Thanks, Dave. >> Thank you very much. >> All right, thank you everybody for watching this Cube conversation. This is Dave Vellante, and we'll see you next time.

Published Date : Jul 21 2021

SUMMARY :

emerged in the internet era, So, first of all, congratulations. of the last 50 years, And mark, always good to have you on, And the other piece that we saw is, really part of the sales stack? And so the way that we're addressing this, But the really powerful thing, actually, 4: 00 PM in the UK, and 5:00 PM Europe. How are you thinking about that at scale? in the longer term. of a per course basis? We actually think it's going to be and the return would be enormous. of the entirety of your revenue function. focused on the metrics. And the numbers were just So how about the competition? So just even if the content was fantastic, And your target customer profile is, That's the sector that of the curriculum? And it's just kind of and then you scale up from there, correct? That's the impact that they're seeing. (Dave and Paul laugh) And maybe talk about that a little bit. should be in that 2 to $3 million range. Saratoga is open. Mark and the team are helping enormously Quite a testimony of you guys. All right, thank you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
PaulPERSON

0.99+

Dave VellantePERSON

0.99+

Gary VaynerchukPERSON

0.99+

DavePERSON

0.99+

Mark RobergePERSON

0.99+

Angela DuckworthPERSON

0.99+

MarkPERSON

0.99+

2018DATE

0.99+

LondonLOCATION

0.99+

sixQUANTITY

0.99+

Paul FifieldPERSON

0.99+

70QUANTITY

0.99+

Sales Impact AcademyORGANIZATION

0.99+

Greg HolmesPERSON

0.99+

Norman Vincent PealePERSON

0.99+

Tony RobbinsPERSON

0.99+

$3 millionQUANTITY

0.99+

sevenQUANTITY

0.99+

12 hoursQUANTITY

0.99+

Kevin DorseyPERSON

0.99+

$30,000QUANTITY

0.99+

90%QUANTITY

0.99+

100%QUANTITY

0.99+

Dale CarnegiePERSON

0.99+

New YorkLOCATION

0.99+

UKLOCATION

0.99+

$4 millionQUANTITY

0.99+

Andreessen HorowitzPERSON

0.99+

7QUANTITY

0.99+

20QUANTITY

0.99+

Earl NightingalePERSON

0.99+

5:00 PMDATE

0.99+

177%QUANTITY

0.99+

SiebelORGANIZATION

0.99+

4: 00 PMDATE

0.99+

Napoleon HillPERSON

0.99+

20 yearsQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

LinkedInORGANIZATION

0.99+

series AOTHER

0.99+

seven weeksQUANTITY

0.99+

oneQUANTITY

0.99+

Stage 2ORGANIZATION

0.99+

a dayQUANTITY

0.99+

MindTickleORGANIZATION

0.99+

80QUANTITY

0.99+

first timeQUANTITY

0.99+

twoQUANTITY

0.99+

eight o'clock PacificDATE

0.99+

50%QUANTITY

0.99+

BrainsharkORGANIZATION

0.99+

150 million peopleQUANTITY

0.99+

second fundQUANTITY

0.99+

SalesforceORGANIZATION

0.99+

todayDATE

0.99+

2QUANTITY

0.99+

first investmentsQUANTITY

0.99+

series B.OTHER

0.98+

over 100 millionQUANTITY

0.98+

SaratogaPERSON

0.98+

first cycleQUANTITY

0.98+

bothQUANTITY

0.98+

8%QUANTITY

0.98+

TAMORGANIZATION

0.98+

5QUANTITY

0.98+

11 o'clock EasternDATE

0.98+

fiveQUANTITY

0.98+

Avi Shua, Orca Security | CUBE Conversation May 2021


 

(calm music)- Hello, and welcome to this CUBE conversation here in Palo Alto, California in theCUBE Studios, I'm John Furrier, host of theCUBE. We are here with the hot startup really working on some real, super important security technology for the cloud, great company, Orca Security, Avi Shua, CEO, and co founder. Avi, thank you for coming on theCUBE and share your story >> Thanks for having me. >> So one of the biggest problems that enterprises and large scale, people who are going to the cloud and are in the cloud and are evolving with cloud native, have realized that the pace of change and the scale is a benefit to the organizations for the security teams, and getting that security equation, right, is always challenging, and it's changing. You guys have a solution for that, I really want to hear what you guys are doing. I like what you're talking about. I like what you're thinking about, and you have some potentially new technologies. Let's get into it. So before we get started, talk about what is Orca Security, what do you guys do? What problem do you solve? >> So what we invented in Orca, is a unique technology called site scanning, that essentially enables us to connect to any cloud environment in a way which is as simple as installing a smartphone application and getting a full stack visibility of your security posture, meaning seeing all of the risk, whether it's vulnerability, misconfiguration, lateral movement risk, work that already been compromised, and more and more, literally in minutes without deploying any agent, without running any network scanners, literally with no change. And while it sounds to many of us like it can't happen, it's snake oil, it's simply because we are so used to on premise environment where it simply wasn't possible in physical server, but it is possible in the cloud. >> Yeah, and you know, we've had many (indistinct) on theCUBE over the years. One (indistinct) told us that, and this is a direct quote, I'll find the clip and share it on Twitter, but he said, "The cloud is more secure than on premise, because it's more changes going on." And I asked him, "Okay, how'd you do?" He says, "It's hard, you got to stay on top of it." A lot of people go to the cloud, and they see some security benefits with the scale. But there are gaps. You guys are building something that solves those gaps, those blind spots, because of things are always changing, you're adding more services, sometimes you're integrating, you now have containers that could have, for instance, you know, malware on it, gets introduced into a cluster, all kinds of things can go on in a cloud environment, that was fine yesterday, you could have a production cluster that's infected. So you have all of these new things. How do you figure out the gaps and the blind spots? That's what you guys do, I believe, what are the gaps in cloud security? Share with us. >> So definitely, you're completely correct. You know, I totally agree the cloud can be dramatically more secluded on-prem. At the end of the day, unlike an on-prem data center, where someone can can plug a new firewall, plug a new switch, change things. And if you don't instrument, it won't see what's inside. This is not possible in the cloud. In the cloud it's all code. It's all running on one infrastructure that can be used for the instrumentation. On the other hand, the cloud enabled businesses to act dramatically faster, by say dramatically, we're talking about order of magnitude faster, you can create new networks in matter of minutes, workloads can come and go within seconds. And this creates a lot of changes that simply haven't happened before. And it involves a lot of challenges, also from security instrumentation point of view. And you cannot use the same methodologies that you used for the on-prem because if you use them, you're going to lose, they were a compromise, that worked for certain physics, certain set of constraints that no longer apply. And our thesis is that essentially, you need to use the capabilities of the cloud itself, for the instrumentation of everything that can runs on the cloud. And when you do that, by definition, you have full coverage, because if it's run on the cloud, it can be instrumented on cloud, this essentially what Docker does. And you're able to have this full visibility for all of the risks and the importance because all of them, essentially filter workload, which we're able to analyze. >> What are some of the blind spots in the public cloud, for instance. I mean, that you guys are seeing that you guys point out or see with the software and the services that you guys have. >> So the most common ones are the things that we have seen in the last decades. I don't think they are materially different simply on steroids. We see things, services that are launched, nobody maintained for years, using things like improper segmentation, that everyone have permission to access everything. And therefore if one environment is breached, everything is breached. We see organization where something goes dramatically hardened. So people find a way to a very common thing is that, and now ever talks about CIM and the tightening their permission and making sure that every workload have only the capabilities that they need. But sometimes developers are a bit lazy. So they'll walk by that, but also have keys that are stored that can bypass the entire mechanism that, again, everyone can do everything on any environment. So at the end of the day, I think that the most common thing is the standard aging issues, making sure that your environment is patched, it's finger tightened, there is no alternative ways to go to the environment, at scale, because the end of the day, they are destined for security professional, you need to secure everything that they can just need to find one thing that was missed. >> And you guys provide that visibility into the cloud. So to identify those. >> Exactly. I think one of the top reasons that we implemented Orca using (indistinct) technology that I've invented, is essentially because it guarantees coverage. For the first time, we can guarantee you that if you scan it, that way, we'll see every instance, every workload, every container, because of its running, is a native workload, whether it's a Kubernetes, whether it's a service function, we see it all because we don't rely on any (indistinct) integration, we don't rely on friction within the organization. So many times in my career, I've been in discussion with customer that has been breached. And when we get to the core of the issue, it was, you couldn't, you haven't installed that agent, you haven't configured that firewall, the IPS was not up to date. So the protections weren't applied. So this is technically true, but it doesn't solve the customer problem, which is, I need the security to be applied to all of my environment, and I can't rely on people to do manual processes, because they will fail. >> Yeah, yeah. I mean, it's you can't get everything now and the velocity, the volume of activity. So let me just get this right, you guys are scanning container. So the risk I hear a lot is, you know, with Kubernetes, in containers is, a fully secure cluster could have a container come in with malware, and penetrate. And even if it's air gapped, it's still there. So problematic, you would scan that? Is that how it would work? >> So yes, but so for nothing but we are not scanning only containers, the essence of Orca is scanning the cloud environment holistically. We scan your cloud configuration, we scan your Kubernetes configuration, we scan your Dockers, the containers that run on top of them, we scan the images that are installed and we scan the permission that these images are one, and most importantly, we combined these data points. So it's not like you buy one solution that look to AWS configuration, is different solution that locate your virtual machines at one cluster, another one that looks at your cluster configuration. Another one that look at a web server and one that look at identity. And then you have resolved from five different tools that each one of them claims that this is the most important issue. But in fact, you need to infuse the data and understand yourself what is the most important items or they're correlated. We do it in an holistic way. And at the end of the day, security is more about thinking case graphs is vectors, rather than list. So it is to tell you something like this is a container, which is vulnerable, it has permission to access your sensitive data, it's running on a pod that is indirectly connected to the internet to this load balancer, which is exposed. So this is an attack vector that can be utilized, which is just a tool that to say you have a vulnerable containers, but you might have hundreds, where 99% of them are not exposed. >> Got it, so it's really more logical, common sense vectoring versus the old way, which was based on perimeter based control points, right? So is that what I get? is that right is that you're looking at it like okay, a whole new view of it. Not necessarily old way. Is that right? >> Yes, it is right, we are looking at as one problem that is entered in one tool that have one unified data model. And on top of that, one scanning technology that can provide all the necessary data. We are not a tool that say install vulnerability scanner, install identity access management tools and infuse all of the data to Orca will make sense, and if you haven't installed the tools to you, it's not our problem. We are scanning your environment, all of your containers, virtual machine serverless function, cloud configuration using guard technology. When standard risk we put them in a graph and essentially what is the attack vectors that matter for you? >> The sounds like a very promising value proposition. if I've workloads, production workloads, certainly in the cloud and someone comes to me and says you could have essentially a holistic view of your security posture at any given point in that state of operations. I'm going to look at it. So I'm compelled by it. Now tell me how it works. Is there overhead involved? What's the cost to, (indistinct) Australian dollars, but you can (indistinct) share the price to would be great. But like, I'm more thinking of me as a customer. What do I have to do? What operational things, what set up? What's my cost operationally, and is there overhead to performance? >> You won't believe me, but it's almost zero. Deploying Orca is literally three clicks, you just go log into the application, you give it the permission to read only permission to the environment. And it does the rest, it doesn't run a single awkward in the environment, it doesn't send a single packet. It doesn't create any overhead we have within our public customer list companies with a very critical workloads, which are time sensitive, I can quote some names companies like Databricks, Robinhood, Unity, SiteSense, Lemonade, and many others that have critical workloads that have deployed it for all of the environment in a very quick manner with zero interruption to the business continuity. And then focusing on that, because at the end of the day, in large organization, friction is the number one thing that kills security. You want to deploy your security tool, you need to talk with the team, the team says, okay, we need to check it doesn't affect the environment, let's schedule it in six months, in six months is something more urgent then times flybys and think of security team in a large enterprise that needs to coordinate with 500 teams, and make sure it's deployed, it can't work, Because we can guarantee, we do it because we leverage the native cloud capabilities, there will be zero impact. This allows to have the coverage and find these really weak spot nobody's been looking at. >> Yeah, I mean, this having the technology you have is also good, but the security teams are burning out. And this is brings up the cultural issue we were talking before we came on camera around the cultural impact of the security assessment kind of roles and responsibilities inside companies. Could you share your thoughts on this because this is a real dynamic, the people involved as a people process technology, the classic, you know, things that are impacted with digital transformation. But really the cultural impact of how developers push code, the business drivers, how the security teams get involved. And sometimes it's about the security teams are not under the CIO or under these different groups, all kinds of impacts to how the security team behaves in context to how code gets shipped. What's your vision and view on the cultural impact of security in the cloud. >> So, in fact, many times when people say that the cloud is not secure, I say that the culture that came with the cloud, sometimes drive us to non secure processes, or less secure processes. If you think about that, only a decade ago, if an organization could deliver a new service in a year, it would be an amazing achievement, from design to deliver. Now, if an organization cannot ship it, within weeks, it's considered a failure. And this is natural, something that was enabled by the cloud and by the technologies that came with the cloud. But it also created a situation where security teams that used to be some kind of a checkpoint in the way are no longer in that position. They're in one end responsible to audit and make sure that things are acting as they should. But on the other end, things happen without involvement. And this is a very, very tough place to be, nobody wants to be the one that tells the business you can't move as fast as you want. Because the business want to move fast. So this is essentially the friction that exists whether can we move fast? And how can we move fast without breaking things, and without breaking critical security requirements. So I believe that security is always about a triode, of educate, there's nothing better than educate about putting the guardrails to make sure that people cannot make mistakes, but also verify an audit because there will be failures in even if you educate, even if you put guardrails, things won't work as needed. And essentially, our position within this, triode is to audit, to verify to empower the security teams to see exactly what's happening, and this is an enabler for a discussion. Because if you see what are the risks, the fact that you have, you know, you have this environment that hasn't been patched for a decade with the password one to six, it's a different case, then I need you to look at this environment because I'm concerned that I haven't reviewed it in a year. >> That's exactly a great comment. You mentioned friction kills innovation earlier. This is one friction point that mismatch off cadence between ownership of process, business owners goals of shipping fast, security teams wanting to be secure. And developers just want to write code faster too. So productivity, burnout, innovation all are a factor in cloud security. What can a company do to get involved? You mentioned easy to deploy. How do I work with Orca? You guys are just, is it a freemium? What is the business model? How do I engage with you if I'm interested in deploying? >> So one thing that I really love about the way that we work is that you don't need to trust a single word I said, you can get a free trial of Orca at website orca.security, one a scan on your cloud environment, and see for yourself, whether there are critical ways that were overlooked, whether everything is said and there is no need for a tool or whether they some areas that are neglected and can be acted at any given moment (indistinct) been breached. We are not a freemium but we offer free trials. And I'm also a big believer in simplicity and pricing, we just price by the average number workload that you have, you don't need to read a long formula to understand the pricing. >> Reducing friction, it's a very ethos sounds like you guys have a good vision on making things easy and frictionless and sets that what we want. So maybe I should ask you a question. So I want to get your thoughts because a lot of conversations in the industry around shifting left. And that's certainly makes a lot of sense. Which controls insecurity do you want to shift left and which ones you want to shift right? >> So let me put it at, I've been in this industry for more than two decades. And like any industry every one's involved, there is a trend and of something which is super valuable. But some people believe that this is the only thing that you need to do. And if you know Gartner Hype Cycle, at the beginning, every technology is (indistinct) of that. And we believe that this can do everything and then it reaches (indistinct) productivity of the area of the value that it provides. Now, I believe that shifting left is similar to that, of course, you want to shift left as much as possible, you want things to be secure as they go out of the production line. This doesn't mean that you don't need to audit what's actually warning, because everything you know, I can quote, Amazon CTO, Werner Vogels about everything that can take will break, everything fails all the time. You need to assume that everything will fail all the time, including all of the controls that you baked in. So you need to bake as much as possible early on, and audit what's actually happening in your environment to find the gaps, because this is the responsibility of security teams. Now, just checking everything after the fact, of course, it's a bad idea. But only investing in shifting left and education have no controls of what's actually happening is a bad idea as well. >> A lot of people, first of all, great call out there. I totally agree, shift left as much as possible, but also get the infrastructure and your foundational data strategies, right and when you're watching and auditing. I have to ask you the next question on the context of the data, right, because you could audit all day long, all night long. But you're going to have a pile of needles looking for haystack of needles, as they say, and you got to have context. And you got to understand when things can be jumped on. You can have alert fatigue, for instance, you don't know what to look at, you can have too much data. So how do you manage the difference between making the developers productive in the shift left more with the shift right auditing? What's the context and (indistinct)? How do you guys talk about that? Because I can imagine, yeah, it makes sense. But I want to get the right alert at the right time when it matters the most. >> We look at risk as a combination of three things. Risk is not only how pickable the lock is. If I'll come to your office and will tell you that you have security issue, is that they cleaning, (indistinct) that lock can be easily picked. You'll laugh at me, technically, it might be the most pickable lock in your environment. But you don't care because the exposure is limited, you need to get to the office, and there's nothing valuable inside. So I believe that we always need to take, to look at risk as the exposure, who can reach that lock, how easily pickable this lock is, and what's inside, is at your critical plan tools, is it keys that can open another lock that includes this plan tools or just nothing. And when you take this into context, and the one wonderful thing about the cloud, is that for the first time in the history of computing, the data that is necessary to understand the exposure and the impact is in the same place where you can understand also the risk of the locks. You can make a very concise decision of easily (indistinct) that makes sense. That is a critical attack vector, that is a (indistinct) critical vulnerability that is exposed, it is an exposed service and the service have keys that can download all of my data, or maybe it's an internal service, but the port is blocked, and it just have a default web server behind it. And when you take that, you can literally quantize 0.1% of the alert, even less than that, that can be actually exploited versus device that might have the same severity scores or sound is critical, but don't have a risk in terms of exposure or business impact. >> So this is why context matters. I want to just connect what you said earlier and see if I get this right. What you just said about the lock being picked, what's behind the door can be more keys. I mean, they're all there and the thieves know, (indistinct) bad guys know exactly what these vectors are. And they're attacking them. But the context is critical. But now that's what you were getting at before by saying there's no friction or overhead, because the old way was, you know, send probes out there, send people out in the network, send packers to go look at things which actually will clutter the traffic up or, you know, look for patterns, that's reliant on footsteps or whatever metaphor you want to use. You don't do that, because you just wire up the map. And then you put context to things that have weights, I'm imagining graph technologies involved or machine learning. Is that right? Am I getting that kind of conceptually, right, that you guys are laying it out holistically and saying, that's a lock that can be picked, but no one really cares. So no one's going to pick and if they do, there's no consequence, therefore move on and focus energy. Is that kind of getting it right? Can you correct me where I got that off or wrong? >> So you got it completely right. On one end, we do the agentless deep assessment to understand your workloads, your virtual machine or container, your apps and service that exists with them. And using the site scanning technology that some people you know, call the MRI for the cloud. And we build the map to understand what are connected to the security groups, the load balancer, the keys that they hold, what these keys open, and we use this graph to essentially understand the risk. Now we have a graph that includes risk and exposure and trust. And we use this graph to prioritize detect vectors that matters to you. So you might have thousands upon thousands of vulnerabilities on servers that are simply internal and these cannot be manifested, that will be (indistinct) and 0.1% of them, that can be exploited indirectly to a load balancer, and we'll be able to highlight these one. And this is the way to solve alert fatigue. We've been in large organizations that use other tools that they had million critical alerts, using the tools before Orca. We ran our scanner, we found 30. And you can manage 30 alerts if you're a large organization, no one can manage a million alerts. >> Well, I got to say, I love the value proposition. I think you're bringing a smart view of this. I see you have the experience there, Avi and team, congratulations, and it makes sense of the cloud is a benefit, it can be leveraged. And I think security being rethought this way, is smart. And I think it's being validated. Now, I did check the news, you guys have raised significant traction as valuation certainly raised around the funding of (indistinct) 10 million, I believe, a (indistinct) Funding over a billion dollar valuation, pushes a unicorn status. I'm sure that's a reflection of your customer interaction. Could you share customer success that you're having? What's the adoption look like? What are some of the things customers are saying? Why do they like your product? Why is this happening? I mean, I can connect the dots myself, but I want to hear what your customers think. >> So definitely, we're seeing huge traction. We grew by thousands of percent year over year, literally where times during late last year, where our sales team, literally you had to wait two or three weeks till you managed to speak to a seller to work with Orca. And we see the reasons as organization have the same problems that we were in, and that we are focusing. They have cloud environments, they don't know their security posture, they need to own it. And they need to own it now in a way which guarantees coverage guarantees that they'll see the important items and there was no other solution that could do that before Orca. And this is the fact. We literally reduce deployment (indistinct) it takes months to minutes. And this makes it something that can happen rather than being on the roadmap and waiting for the next guy to come and do that. So this is what we hear from our customers and the basic value proposition for Orca haven't changed. We're providing literally Cloud security that actually works that is providing full coverage, comprehensive and contextual, in a seamless manner. >> So talk about the benefits to customers, I'll give you an example. Let's just say theCUBE, we have our own cloud. It's growing like crazy. And we have a DevOps team, very small team, and we start working with big companies, they all want to know what our security posture is. I have to go hire a bunch of security people, do I just work with Orca, because that's the more the trend is integration. I just was talking to another CEO of a hot startup and the platform engineering conversations about people are integrating in the cloud and across clouds and on premises. So integration is all about posture, as well, too I want to know, people want to know who they're working with. How does that, does that factor into anything? Because I think, that's a table stakes for companies to have almost a posture report, almost like an MRI you said, or a clean (indistinct) health. >> So definitely, we are both providing the prioritized risk assessment. So let's say that your cloud team want to check their security, the cloud security risk, they'll will connect Orca, they'll see the (indistinct) in a very, very clear way, what's been compromised (indistinct) zero, what's in an imminent compromise meaning the attacker can utilize today. And you probably want to fix it as soon as possible and things that are hazardous in terms that they are very risky, but there is no clear attack vectors that can utilize them today, there might be things that combining other changes will become imminent compromise. But on top of that, when standard people also have compliance requirements, people are subject to a regulation like PCI CCPA (indistinct) and others. So we also show the results in the lens of these compliance frameworks. So you can essentially export a report showing, okay, we were scanned by Orca, and we comply with all of these requirements of SOC 2, etc. And this is another value proposition of essentially not only showing it in a risk lens, but also from the compliance lens. >> You got to be always on with security and cloud. Avi, great conversation. Thank you for sharing nice knowledge and going deep on some of the solution and appreciate your conversation. Thanks for coming on. >> Thanks for having me. >> Obviously, you are CEO and co founder of Orca Security, hot startup, taking on security in the cloud and getting it right. I'm John Furrier with theCUBE. Thanks for watching. (calm music)

Published Date : May 18 2021

SUMMARY :

technology for the cloud, and are in the cloud and are but it is possible in the cloud. And I asked him, "Okay, how'd you do?" of everything that can runs on the cloud. I mean, that you guys are seeing So at the end of the day, And you guys provide that For the first time, we can guarantee you So the risk I hear a lot is, So it is to tell you something like So is that what I get? and infuse all of the data the price to would be great. And it does the rest, the classic, you know, I say that the culture What is the business model? about the way that we work is that and which ones you want to shift right? that you need to do. I have to ask you the next question is that for the first time that you guys are laying it out that some people you know, What are some of the things and the basic value proposition So talk about the in the lens of these and going deep on some of the solution taking on security in the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Orca SecurityORGANIZATION

0.99+

John FurrierPERSON

0.99+

OrcaORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

DatabricksORGANIZATION

0.99+

Avi ShuaPERSON

0.99+

500 teamsQUANTITY

0.99+

May 2021DATE

0.99+

AWSORGANIZATION

0.99+

30 alertsQUANTITY

0.99+

99%QUANTITY

0.99+

RobinhoodORGANIZATION

0.99+

SiteSenseORGANIZATION

0.99+

hundredsQUANTITY

0.99+

0.1%QUANTITY

0.99+

thousandsQUANTITY

0.99+

twoQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

AviPERSON

0.99+

SOC 2TITLE

0.99+

LemonadeORGANIZATION

0.99+

six monthsQUANTITY

0.99+

five different toolsQUANTITY

0.99+

yesterdayDATE

0.99+

first timeQUANTITY

0.99+

oneQUANTITY

0.99+

Werner VogelsPERSON

0.99+

UnityORGANIZATION

0.99+

three weeksQUANTITY

0.99+

three clicksQUANTITY

0.99+

one toolQUANTITY

0.99+

single packetQUANTITY

0.98+

one problemQUANTITY

0.98+

10 millionQUANTITY

0.98+

a decade agoDATE

0.98+

late last yearDATE

0.98+

theCUBEORGANIZATION

0.98+

bothQUANTITY

0.97+

CUBEORGANIZATION

0.97+

sixQUANTITY

0.97+

a yearQUANTITY

0.97+

30QUANTITY

0.97+

more than two decadesQUANTITY

0.97+

each oneQUANTITY

0.96+

one thingQUANTITY

0.96+

one clusterQUANTITY

0.96+

one environmentQUANTITY

0.96+

last decadesDATE

0.95+

KubernetesTITLE

0.95+

single wordQUANTITY

0.95+

singleQUANTITY

0.95+

thousands of percentQUANTITY

0.95+

todayDATE

0.94+

orca.securityORGANIZATION

0.94+

three thingsQUANTITY

0.93+

one solutionQUANTITY

0.92+

Gartner Hype CycleORGANIZATION

0.92+

TwitterORGANIZATION

0.91+

one endQUANTITY

0.91+

million critical alertsQUANTITY

0.91+

OneQUANTITY

0.9+

a decadeQUANTITY

0.89+

over a billion dollarQUANTITY

0.87+

zero impactQUANTITY

0.83+

million alertsQUANTITY

0.8+

DevOpsORGANIZATION

0.77+

theCUBE StudiosORGANIZATION

0.77+

Jasmine James, Twitter and Stephen Augustus, Cisco | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Narrator: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe, 2021 Virtual brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hello, welcome back to theCUBE'S coverage of KubeCon and CloudNativeCon 2021 Virtual, I'm John Furrier your host of theCUBE. We've got two great guests here, always great to talk to the KubeCon co-chairs and we have Stephen Augustus Head of Open Source at Cisco and also the KubeCon co-chair great to have you back. And Jasmine James Manager and Engineering Effectives at Twitter, the KubeCon co-chair, she's new on the job so we're not going to grill her too hard but she's excited to share her perspective, Jasmine, Stephen great to see you. Thanks for coming on theCUBE. >> Thanks for having us. >> Thank you. >> So obviously the co-chairs you guys see everything upfront Jasmine, you're going to learn that this is a really kind of key fun position because you've got to multiple hats you got to wear, you got to put a great program together, you got to entertain and surprise and delight the attendees and also can get the right trends, pick everything right and then keep that harmonious vibe going at CNCF and KubeCon is hard so it's a hard job. So I got to ask you out of the gate, what are the top trends that you guys have selected and are pushing forward this year that we're seeing evolve and unfold here at KubeCon? >> For sure yeah. So I'm excited to see, and I would say that some of the top trends for Cloud Native right now are just changes in the ecosystem, how we think about different use cases for Cloud Native technology. So you'll see lot's of talk about new architectures being introduced into Cloud Native technologies or things like WebAssembly. WebAssembly Wasm used cases and really starting to and again, I think I mentioned this every time, but like what are the customer used cases actually really thinking about how all of these building blocks connect and create a cohesive story. So I think a lot of it is enduring and will always be a part. My favorite thing to see is pretty much always maintainer and user stories, but yeah, but architecture is Wasm and security. Security is a huge focus and it's nice to see it comes to the forefront as we talked about having these like the security day, as well as all of the talk arounds, supply chain security, it has been a really, really, really big event (laughs) I'll say. >> Yeah. Well, great shot from last year we have been we're virtual again, but we're back in, the real world is coming back in the fall, so we hopefully in North America we'll be in person. Jasmine, you're new to the job. Tell us a little about you introduce yourself to the community and tell more about who you are and why you're so excited to be the co-chair with Stephen. >> Yeah, absolutely. So I'm Jasmine James, I've been in the industry for the past five or six years previous at Delta Airlines, now at Twitter, as a part of my job at Delta we did a huge drive on adopting Kubernetes. So a lot of those experiences, I was very, very blessed to be a part of in making the adoption and really the cultural shift, easy for developers during my time there. I'm really excited to experience like Cloud Native from the co-chair perspective because historically I've been like on the consumer side going to talk, taking all those best practices, stealing everything I could into bring it back into my job. So make everyone's life easier. So it's really, really great to see all of the fantastic ideas that are being presented, all of the growth and maturity within the Cloud Native world. Similar to Stephen, I'm super excited to hear about the security stuff, especially as it relates to making it easy for developers to shift left on security versus it being such an afterthought and making it something that you don't really have to think about. Developer experience is huge for me which is why I took the job at Twitter six months ago, so I'm really excited to see what I can learn from the other co-chairs and to bring it back to my day-to-day. >> Yeah, Twitter's been very active in open source. Everyone knows that and it's a great chance to see you land there. One of the interesting trends is this year I'll see besides security is GitOps but the one that I think is relevant to your background so fresh is the end user contributions and involvement has been really exploding on the scene. It's always been there. We've covered, Envoy with Lyft but now enterprise is now mainstream enterprises have been kind of going to the open source well and bringing those goodies back to their camps and building out and bringing it back. So you starting to see that flywheel developing you've been on that side now here. Talk about that dynamic and how real that is an important and share some perspective of what's really going on around this explosion around more end user contribution, more end user involvement. >> Absolutely. So I really think that a lot of industry like players are starting to see the importance of contributing back to open source because historically we've done a lot of taking, utilizing these different components to drive the business logic and not really making an investment in the product itself. So it's really, really great to see large companies invest in open source, even have whole teams dedicated to open source and how it's consumed internally. So I really think it's going to be a big win for the companies and for the open source community because I really am a big believer in like giving back and making sure that you should give back as much as you're taking and by making it easy for companies to do the right thing and then even highlighting it as a part of CNCF, it'll be really, really great, just a drive for a great environment for everyone. So really excited to see that. >> That's really good. She has been awesome stuff. Great, great insight. Stephen, I just have you piggyback off that and comment on companies enterprises that want to get more involved with the Cloud Native community from their respective experiences, what's the playbook, is there a new on-ramps? Is there new things? Is there a best practice? What's your view? I mean, obviously everyone's growing and changing. You look at IT has changed. I mean, IT is evolving completely to CloudOps, SRE get ops day two operations. It's pretty much standard now but they need to learn and change. What's your take on this? >> Yeah, so I think that to Jasmine's point and I'm not sure how much we've discussed my background in the past, but I actually came from the corporate IT background, did Desktop Sr, Desktop helped us support all of that stuff up into operations, DevOps, SRE, production engineering. I was an SRE at a startup who used core West technologies and started using Kubernetes back when Kubernetes is that one, two, I think. And that was my first journey into Cloud Native. And I became core less is like only customer to employee convert, right? So I'm very much big on that end user story and figuring out how to get people involved because that was my story as well. So I think that, some of the work that we do or a lot of the work that we do in contributor strategy, the SIG CNCF St. Contributor Strategy is all around thinking through how to bring on new contributors to these various Cloud Native projects, Right? So we've had chats with container D and linker D and a bunch of other folks across the ecosystem, as well as the kind of that maintainer circle sessions that we hold which are kind of like a private, not recorded. So maintainers can kind of get raw and talk about what they're feeling, whether it be around bolstering contributions or whether it'd be like managing burnout, right? Or thinking about how you talk through the values and the principles for your projects. So I think that, part of that story is building for multiple use cases, right? You take Kubernetes for example, right? So Ameritas chair for sync PM over in Kubernetes, one of the sub project owners for the enhancements sub project which involves basically like figuring out how we intake new enhancements to the community but as well as like what the end user cases are all of the use cases for that, right? How do we make it easy to use the technology and how we make it more effective for people to have conversations about how they use technology, right? So I think it's kind of a continuing story and it's delightful to see all of the people getting involved in a SIG Contributor Strategy, because it means that they care about all of the folks that are coming into their projects and making it a more welcoming and easier to contribute place so. >> Yeah. That's great stuff. And one of the things you mentioned about IT in your background and the scale change from IT and just the operational change over is interesting. I was just talking with a friend and we were talking about, get Op and, SRAs and how, in colleges is that an engineering track or is it computer science and it's kind of a hybrid, right? So you're seeing essentially this new operational model at scale that's CloudOps. So you've got hybrid, you've got on-premise, you've got Cloud Native and now soon to be multi-cloud so new things come into play architecture, coding, and programmability. All these things are like projects now in CNCF. And that's a lot of vendors and contributors but as a company, the IT functions is changing fast. So that's going to require more training and more involvement and yet open source is filling the void if you look at some of the successes out there, it's interesting. Can you comment on the companies that are out there saying, "Hey, I know my IT department is going to be turning into essentially SRE operations or CloudOps at scale. How do they get there? How could they work with KubeCon and what's the key playbook? How would you answer that? >> Yeah, so I would say, first off the place to go is the one-on-one track. We specifically craft that one-on-one track to make sure that people who are new to Cloud Native get a very cohesive story around what they're trying to get into, right? At any one time. So head to the one-on-one track, please add to the one-on-one track, hang out, definitely check out all of the keynotes that again, the keynotes, we put a lot of work into making sure these keynotes tell a very nice story about all of the technology and the amount of work that our presenters put into it as well is phenomenal. It's top notch. It's top notch every time. So those will always be my suggestions. Actually go to the keynotes and definitely check out the one-on-one track. >> Awesome. Jasmine, I got to get your take on this now that you're on the KubeCon and you're co-chairing with Stephen, what's your story to the folks that are in the end user side out there that were in your old position that you were at Delta doing some great Kubernetes work but now it's going beyond Kubernetes. I was just talking with another participant in the KubeCon ecosystem is saying, "It's not just Kubernetes anymore. There's other systems that we're going to deploy our real-time metrics on and whatnot". So what's the story? What's the update? What do you see on the inside now now that you're on board and you're at a Hyperscale at Twitter, what's your advice? What's your commentary to your old friends and the end user world? >> Yeah. It's not an easy task. I think that was, you had mentioned about starting with the one-on-one is like super key. Like that's where you should start. There's so many great stories out there in previous KubeCon that have been told. I was listening to those stories and the great thing about our community is that it's authentic, right? We're telling like all of the ways we tripped up so we can prevent you from doing this same thing and having an easier path, which is really awesome. Another thing I would say is do not underestimate the cultural shift, right? There are so many tools and technologies out there, but there's also a cultural transformation that has to happen. You're shifting from, traditional IT roles to a really holistic like so many different things are changing about the way infrastructure was interacted with the way developers are developing. So don't underestimate the cultural shift and make sure you're bringing everyone to the party because there's a lot of perspectives from the development side that needs to be considered before you make the shift initially So that way you can make sure you're approaching the problem in the right way. So those would be my recommendation. >> Also, speaking of cultural shifts, Stephen I know this is a big passion of yours is diversity in the ecosystem. I think with COVID we've seen probably in the past two years a major cultural shifts on the personnel involved, the people participating, still a lot more work to get done. Where are we on diversity in the ecosystem? How would you rate the progress and the overall achievements? >> I would say doing better, but never stop what has happened in COVID I think, if you look across companies, if you look across the opportunities that have opened up for people in general, there have been plenty of doors that have shut, right? And doors that have really made the assumption that you need to be physical are in person to do good work. And I think that the Cloud Native ecosystem the work that the LF and CNCF do, and really the way that we interact in projects has kind of pushed towards this async first, this remote first work culture, right? So you see it in these large corporations that have had to change the travel policies because of COVID and really for someone who's coming off being like a field engineer and solutions architect, right? The bread and butter is hopping on and off a plane, shaking hands, going to dinner, doing the song and dance, right? With customers. And for that model to functionally shift, right? Having conversations in different ways, right? And yeah, sometimes it's a lot of Zoom calls, right? Zoom calls, webinars, all of these things but I think some of what has happened is, you take the release team, for example, the Kubernetes release team. This is our first cycle with Dave Vellante who's our 121 released team lead is based in India, right? And that's the first time that we've had APAC region release team lead and what that forced us to do, we were already working on it. But what that forced us to do is really focused on asynchronous communication. How can we get things done without having to have people in the room? And we were like, "With Dave Vellante in here, it either works or it doesn't like, we're either going to prove that what we've put in place works for asynchronous communication or it doesn't." And then, given that a project of this scale can operate just fine, right? Right just fine delivering a release with people all across the globe. It proves that we have a lot of flexibility in the way that we offer opportunities, both on the open source side, as well as on the company side. >> Yeah. And I got to say KubeCon has always been global from day one. I was in Shanghai and I was in hung, Jo, visiting Ali Baba. And who do I see in the lobby? The CNCF crew. And I'm like, "What are you guys doing here?" "Oh, we're here talking to the cloud with Alibaba." So global is huge. You guys have nailed that. So congratulations and keep that going. Jasmine, your perspective is women in tech. I mean, you're seeing more and more focus and some great doors opening. It's still not enough. We've been covering this for a long time. Still the numbers are down, but we had a great conference recently at Stanford Women in Data Science amazing conference, a lot of power players coming in, women in tech is evolving. What's your take on this still a lot more work to done. You're an inspiration. Share your story. >> Yeah. We have a long way to go. There's no question about it. I do think that there's a lot of great organizations CNCF being one of them, really doing a great job at sharing, networking opportunities, encouraging other women to contribute to open source and letting that be sort of the gateway into a tech career. My journey is starting as a systems engineer at Delta, working my way into leadership, somehow I'm not sure I ended up there but really sort of shifting and being able to lift other women up has been like so fortunate to be able to do that. Women who code being a mentor, things of that nature has been a great opportunity, but I do feel like the open source community has a long way go to be a more welcoming place for women contributors, things like code of conduct, that being very prevalent making sure that it's not daunting and scary, going into GitHub and starting to create a PR for out of fear of what someone might say about your contributions instead of it being sort of an educational experience. So I think there's a lot of opportunities but there's a lot of programs, networking opportunities out there, especially everyone being remote now that have presented themselves. So I'm very hopeful. And the CNCF, like I said is doing a great job at highlighting these women contributors that are making changes to CNCF projects in really making it something that is celebrated which is really great. >> Yeah. You know that I love Stephen and we thought this last time and the Clubhouse app has come online since we were last talking and it's all audio. So there's a lot of ideas and it's all open. So with a synchronous first you have more access but still context matters. So the language, so there's still more opportunities potentially to offend or get it right so this is now becoming a new cultural shift. You brought this up last time we chatted around the language, language is important. So I think this is something that we're keeping an eye on and trying to keep open dialogue around, "Hey it matters what you say, asynchronously or in texts." We all know that text moment where someone said, "I didn't really mean that." But it was offensive or- >> It's like you said it. (laughs) >> (murmurs) you passionate about this here. This is super important how we work. >> Yeah. So you mentioned Clubhouse and it's something that I don't like. (laughs) So no offense to anyone who is behind creating new technologies for sure. But I think that Clubhouse from, if you take platforms like that, let's generalize, you take platforms like that and you think about the unintentional exclusion that those platforms involve, right? If you think about folks with disabilities who are not necessarily able to hear a conversation, right? Or you don't provide opportunities to like caption your conversations, right? That either intentionally or unintentionally excludes a group of folks, right? So I've seen Cloud Native, I've seen Cloud Native things happen on a Clubhouse, on a Twitter Spaces. I won't personally be involved in them until I know that it's a platform that is not exclusive. So I think that it's great that we're having new opportunities to engage with folks that are not necessarily, you've got people prefer the Slack and discord vibe, you've got people who prefer the text over phone calls, so to speak thing, right? You've got people who prefer phone calls. So maybe like, maybe Clubhouse, Twitter Spaces, insert new, I guess Disco is doing a thing too- >> They call it stages. Disco has stages, which is- >> Stages. They have stages. Okay. All right. So insert, Clubhouse clone here and- >> Kube House. We've got a Kube House come on in. >> Kube House. Kube House. >> Trivial (murmurs). >> So we've got great ways to engage there for people who prefer that type of engagement and something that is explicitly different from the I'm on a Zoom call all day kind of vibe enjoy yourselves, try to make it as engaging as possible, just realize what you may unintentionally be doing by creating a community that not everyone can be a part of. >> Yeah. Technical consequences. I mean, this is key language matters to how you get involved and how you support it. I mean, the accessibility piece, I never thought about that. If you can't listen, I mean, you can't there's no content there. >> Yeah. Yeah. And that's a huge part of the Cloud Native community, right? Thinking through accessibility, internationalization, localization, to make sure that our contributions are actually accessible, right? To folks who want to get involved and not just prioritizing, let's say the U.S. or our English speaking part of the world so. >> Awesome. Jasmine, what's your take? What can we do better in the world to make the diversity and inclusion not a conversation because when it's not a conversation, then it's solved. I mean, ultimately it's got a lot more work to do but you can't be exclusive. You got to be diverse more and more output happens. What's your take on this? >> Yeah. I feel like they'll always be work to do in this space because there's so many groups of people, right? That we have to take an account for. I think that thinking through inclusion in the onset of whatever you're doing is the best way to get ahead of it. There's so many different components of it and you want to make sure that you're making a space for everyone. I also think that making sure that you have a pipeline of a network of people that represent a good subset of the world is going to be very key for shaping any program or any sort of project that anyone does in the future. But I do think it's something that we have to consistently keep at the forefront of our mind always consider. It's great that it's in so many conversations right now. It really makes me happy especially being a mom with an eight year old girl who's into computer science as well. That there'll be better opportunities and hopefully more prevalent opportunities and representation for her by the time she grows up. So really, really great. >> Get her coding early, as I always say. Jasmine great to have you and Stephen as well. Good to see you. Final question. What do you hope people walk away with this year from KubeCon? What's the final kind of objective? Jasmine, we'll start with you. >> Wow. Final objective. I think that I would want people to walk away with a sense of community. I feel like the KubeCon CNCF world is a great place to get knowledge, but also an established sense of community not stopping at just the conference and taking part of the community, giving back, contributing would be a great thing for people to walk away with. >> Awesome. Stephen? >> I'm all about community as well. So I think that one of the fun things that we've been doing, is just engaging in different ways than we have normally across the kind of the KubeCon boundaries, right? So you take CNCF Twitch, you take some of the things that I can't mention yet, but are coming out you should see around and pose KubeCon week, the way that we're engaging with people is changing and it's needed to change because of how the world is right now. So I hope that to reinforce the community point, my favorite part of any conference is the hallway track. And I think I've mentioned this last time and we're trying our best. We're trying our best to create it. We've had lots of great feedback about, whether it be people playing among us on CNCF Twitch or hanging out on Slack silly early hours, just chatting it up. And are kind of like crafted hallway track. So I think that engage, don't be afraid to say hello. I know that it's new and scary sometimes and trust me, we've literally all been here. It's going to be okay, come in, have some fun, we're all pretty friendly. We're all pretty friendly and we know and understand that the only way to make this community survive and thrive is to bring on new contributors, is to get new perspectives and continue building awesome technology. So don't be afraid. >> I love it. You guys have a global diverse and knowledgeable and open community. Congratulations. Jasmine James, Stephen Augustus, co-chairs for KubeCon here on theCUBE breaking it down, I'm John Furrier for your host, thanks for watching. (upbeat music)

Published Date : May 4 2021

SUMMARY :

brought to you by Red Hat, and also the KubeCon co-chair So I got to ask you out of the gate, and really starting to and tell more about who you are on the consumer side going to talk, to see you land there. and making sure that you but they need to learn and change. and it's delightful to see all and just the operational the place to go is the one-on-one track. that are in the end user side So that way you can make and the overall achievements? and really the way that And I got to say KubeCon has always been and being able to lift So the language, so there's It's like you said it. you passionate about this here. and it's something that I don't like. They call it stages. So insert, Clubhouse clone here and- We've got a Kube House come on in. Kube House. different from the I'm I mean, the accessibility piece, speaking part of the world so. You got to be diverse more of the world is going to be What's the final kind of objective? and taking part of the Awesome. So I hope that to reinforce and knowledgeable and open community.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
StephenPERSON

0.99+

JasminePERSON

0.99+

Dave VellantePERSON

0.99+

Jasmine JamesPERSON

0.99+

IndiaLOCATION

0.99+

ShanghaiLOCATION

0.99+

Stephen AugustusPERSON

0.99+

John FurrierPERSON

0.99+

Red HatORGANIZATION

0.99+

Cloud Native Computing FoundationORGANIZATION

0.99+

DeltaORGANIZATION

0.99+

AlibabaORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

last yearDATE

0.99+

Delta AirlinesORGANIZATION

0.99+

North AmericaLOCATION

0.99+

hungLOCATION

0.99+

CNCFORGANIZATION

0.99+

DiscoORGANIZATION

0.99+

KubeConEVENT

0.99+

six months agoDATE

0.99+

ClubhouseTITLE

0.99+

TwitterORGANIZATION

0.99+

APACORGANIZATION

0.98+

first cycleQUANTITY

0.98+

Ecosystem PartnersORGANIZATION

0.98+

oneQUANTITY

0.98+

CloudOpsTITLE

0.98+

this yearDATE

0.98+

Cloud NativeTITLE

0.98+

first journeyQUANTITY

0.97+

U.S.LOCATION

0.97+

first timeQUANTITY

0.97+

two great guestsQUANTITY

0.97+

GitOpsTITLE

0.97+

one timeQUANTITY

0.96+

KubernetesTITLE

0.96+

bothQUANTITY

0.96+

twoQUANTITY

0.96+

LFORGANIZATION

0.96+

SIGORGANIZATION

0.96+

CloudNativeCon 2021 VirtualEVENT

0.95+

121 released teamQUANTITY

0.94+

ClubhouseORGANIZATION

0.94+

Dr. Eng Lim Goh, Joachim Schultze, & Krishna Prasad Shastry | HPE Discover 2020


 

>> Narrator: From around the globe it's theCUBE, covering HPE Discover Virtual Experience brought to you by HPE. >> Hi everybody. Welcome back. This is Dave Vellante for theCUBE, and this is our coverage of discover 2020, the virtual experience of HPE discover. We've done many, many discoveries, as usually we're on the show floor, theCUBE has been virtualized and we talk a lot at HPE discovers, a lot of storage and server and infrastructure and networking which is great. But the conversation we're going to have now is really, we're going to be talking about helping the world solve some big problems. And I'm very excited to welcome back to theCUBE Dr. Eng Lim Goh. He's a senior vice president of and CTO for AI, at HPE. Hello, Dr. Goh. Great to see you again. >> Hello. Thank you for having us, Dave. >> You're welcome. And then our next guest is Professor Joachim Schultze, who is the Professor for Genomics, and Immunoregulation at the university of Bonn amongst other things Professor, welcome. >> Thank you all. Welcome. >> And then Prasad Shastry, is the Chief Technologist for the India Advanced Development Center at HPE. Welcome, Prasad. Great to see you. >> Thank you. Thanks for having me. >> So guys, we have a CUBE first. I don't believe we've ever had of three guests in three separate times zones. I'm in a fourth time zone. (guests chuckling) So I'm in Boston. Dr. Goh, you're in Singapore, Professor Schultze, you're in Germany and Prasad, you're in India. So, we've got four different time zones. Plus our studio in Palo Alto. Who's running this program. So we've got actually got five times zones, a CUBE first. >> Amazing. >> Very good. (Prasad chuckles) >> Such as the world we live in. So we're going to talk about some of the big problems. I mean, here's the thing we're obviously in the middle of this pandemic, we're thinking about the post isolation economy, et cetera. People compare obviously no surprise to the Spanish flu early part of last century. They talk about the great depression, but the big difference this time is technology. Technology has completely changed the way in which we've approached this pandemic. And we're going to talk about that. Dr. Goh, I want to start with you. You've done a lot of work on this topic of swarm learning. If we could, (mumbles) my limited knowledge of this is we're kind of borrowing from nature. You think about, bees looking for a hive as sort of independent agents, but somehow they come together and communicate, but tell us what do we need to know about swarm learning and how it relates to artificial intelligence and we'll get into it. >> Oh, Dave, that's a great analogy using swarm of bees. That's exactly what we do at HPE. So let's use the of here. When deploying artificial intelligence, a hospital does machine learning of the outpatient data that could be biased, due to demographics and the types of cases they see more also. Sharing patient data across different hospitals to remove this bias is limited, given privacy or even sovereignty the restrictions, right? Like for example, across countries in the EU. HPE, so I'm learning fixers this by allowing each hospital, let's still continue learning locally, but at each cycle we collect the lumped weights of the neural networks, average them and sending it back down to older hospitals. And after a few cycles of doing this, all the hospitals would have learned from each other, removing biases without having to share any private patient data. That's the key. So, the ability to allow you to learn from everybody without having to share your private patients. That's swarm learning, >> And part of the key to that privacy is blockchain, correct? I mean, you you've been too involved in blockchain and invented some things in blockchain and that's part of the privacy angle, is it not? >> Yes, yes, absolutely. There are different ways of doing this kind of distributed learning, which swarm learning is over many of the other distributed learning methods. Require you to have some central control. Right? So, Prasad, and the team and us came up together. We have a method where you would, instead of central control, use blockchain to do this coordination. So, there is no more a central control or coordinator, especially important if you want to have a truly distributed swamp type learning system. >> Yeah, no need for so-called trusted third party or adjudicator. Okay. Professor Schultze, let's go to you. You're essentially the use case of this swarm learning application. Tell us a little bit more about what you do and how you're applying this concept. >> I'm actually by training a physician, although I haven't seen patients for a very long time. I'm interested in bringing new technologies to what we call precision medicine. So, new technologies both from the laboratories, but also from computational sciences, married them. And then I basically allow precision medicine, which is a medicine that is built on new measurements, many measurements of molecular phenotypes, how we call them. So, basically that process on different levels, for example, the genome or genes that are transcribed from the genome. We have thousands of such data and we have to make sense out of this. This can only be done by computation. And as we discussed already one of the hope for the future is that the new wave of developments in artificial intelligence and machine learning. We can make more sense out of this huge data that we generate right now in medicine. And that's what we're interesting in to find out how can we leverage these new technologies to build a new diagnostics, new therapy outcome predictors. So, to know the patient benefits from a disease, from a diagnostics or a therapy or not, and that's what we are doing for the last 10 years. The most exciting thing I have been  through in the last three, four, five years is really when HPE introduced us to swarm learning. >> Okay and Prasad, you've been helping Professor Schultze, actually implements swarm learning for specific use cases that we're going to talk about COVID, but maybe describe a little bit about what you've been or your participation in this whole equation. >> Yep, thank. As Dr Eng Lim Goh, mentioned. So, we have used blockchain as a backbone to implement the decentralized network. And through that we're enabling a privacy preserved these centralized network without having any control points, as Professor explained in terms of depression medicines. So, one of the use case we are looking at he's looking at the blood transcriptomes, think of it, different hospitals having a different set of transcriptome data, which they cannot share due to the privacy regulations. And now each of those hospitals, will clean the model depending upon their local data, which is available in that hospital. And shared the learnings coming out of that training with the other hospitals. And we played to over several cycles to merge all these learnings and then finally get into a global model. So, through that we are able to kind of get into a model which provides the performance is equal of collecting all the data into a central repository and trying to do it. And we could really think of when we are doing it, them, could be multiple kinds of challenges. So, it's good to do decentralized learning. But what about if you have a non ID type of data, what about if there is a dropout in the network connections? What about if there are some of the compute nodes we just practice or probably they're not seeing sufficient amount of data. So, that's something we tried to build into the swarm learning framework. You'll handle the scenarios of having non ID data. All in a simple word we could call it as seeing having the biases. An example, one of the hospital might see EPR trying to, look at, in terms of let's say the tumors, how many number of cases and whereas the other hospital might have very less number of cases. So, if you have kind of implemented some techniques in terms of doing the merging or providing the way that different kind of weights or the tuneable parameters to overcome these set of challenges in the swarm learning. >> And Professor Schultze, you you've applied this to really try to better understand and attack the COVID pandemic, can you describe in more detail your goals there and what you've actually done and accomplished? >> Yeah. So, we have actually really done it for COVID. The reason why we really were trying to do this already now is that we have to generate it to these transcriptomes from COVID-19 patients ourselves. And we realized that the scene of the disease is so strong and so unique compared to other infectious diseases, which we looked at in some detail that we felt that the blood transcriptome would be good starting point actually to identify patients. But maybe even more important to identify those with severe diseases. So, if you can identify them early enough that'd be basically could care for those more and find particular for those treatments and therapies. And the reason why we could do that is because we also had some other test cases done before. So, we used the time wisely with large data sets that we had collected beforehand. So, use cases learned how to apply swarm learning, and we are now basically ready to test directly with COVID-19. So, this is really a step wise process, although it was extremely fast, it was still a step wise probably we're guided by data where we had much more knowledge of which was with the black leukemia. So, we had worked on that for years. We had collected many data. So, we could really simulate a Swarm learning very nicely. And based on all the experience we get and gain together with Prasad, and his team, we could quickly then also apply that knowledge to the data that are coming now from COVID-19 patients. >> So, Dr. Goh, it really comes back to how we apply machine intelligence to the data, and this is such an interesting use case. I mean, the United States, we have 50 different States with 50 different policies, different counties. We certainly have differences around the world in terms of how people are approaching this pandemic. And so the data is very rich and varied. Let's talk about that dynamic. >> Yeah. If you, for the listeners who are or viewers who are new to this, right? The workflow could be a patient comes in, you take the blood, and you send it through an analysis? DNA is made up of genes and our genes express, right? They express in two steps the first they transcribe, then they translate. But what we are analyzing is the middle step, the transcription stage. And tens of thousands of these Transcripts that are produced after the analysis of the blood. The thing is, can we find in the tens of thousands of items, right? Or biomarkers a signature that tells us, this is COVID-19 and how serious it is for this patient, right? Now, the data is enormous, right? For every patient. And then you have a collection of patients in each hospitals that have a certain demographic. And then you have also a number of hospitals around. The point is how'd you get to share all that data in order to have good training of your machine? The ACO is of course a know privacy of data, right? And as such, how do you then share that information if privacy restricts you from sharing the data? So in this case, swarm learning only shares the learnings, not the private patient data. So we hope this approach would allow all the different hospitals to come together and unite sharing the learnings removing biases so that we have high accuracy in our prediction as well at the same time, maintaining privacy. >> It's really well explained. And I would like to add at least for the European union, that this is extremely important because the lawmakers have clearly stated, and the governments that even non of these crisis conditions, they will not minimize the rules of privacy laws, their compliance to privacy laws has to stay as high as outside of the pandemic. And I think there's good reasons for that, because if you lower the bond, now, why shouldn't you lower the bar in other times as well? And I think that was a wise decision, yes. If you would see in the medical field, how difficult it is to discuss, how do we share the data fast enough? I think swarm learning is really an amazing solution to that. Yeah, because this discussion is gone basically. Now we can discuss about how we do learning together. I'd rather than discussing what would be a lengthy procedure to go towards sharing. Which is very difficult under the current privacy laws. So, I think that's why I was so excited when I learned about it, the first place with faster, we can do things that otherwise are either not possible or would take forever. And for a crisis that's key. That's absolutely key. >> And is the byproduct. It's also the fact that all the data stay where they are at the different hospitals with no movement. >> Yeah. Yeah. >> Learn locally but only shared the learnings. >> Right. Very important in the EU of course, even in the United States, People are debating. What about contact tracing and using technology and cell phones, and smartphones to do that. Beside, I don't know what the situation is like in India, but nonetheless, that Dr. Goh's point about just sharing the learnings, bubbling it up, trickling just kind of metadata. If you will, back down, protects us. But at the same time, it allows us to iterate and improve the models. And so, that's a key part of this, the starting point and the conclusions that we draw from the models they're going to, and we've seen this with the pandemic, it changes daily, certainly weekly, but even daily. We continuously improve the conclusions and the models don't we. >> Absolutely, as Dr. Goh explained well. So, we could look at like they have the clinics or the testing centers, which are done in the remote places or wherever. So, we could collect those data at the time. And then if we could run it to the transcripting kind of a sequencing. And then as in, when we learn to these new samples and the new pieces all of them put kind of, how is that in the local data participate in the kind of use swarm learning, not just within the state or in a country could participate into an swarm learning globally to share all this data, which is coming up in a new way, and then also implement some kind of continuous learning to pick up the new signals or the new insight. It comes a bit new set of data and also help to immediately deploy it back into the inference or into the practice of identification. To do these, I think one of the key things which we have realized is to making it very simple. It's making it simple, to convert the machine learning models into the swarm learning, because we know that our subject matter experts who are going to develop these models on their choice of platforms and also making it simple to integrate into that complete machine learning workflow from the time of collecting a data pre processing and then doing the model training and then putting it onto inferencing and looking performance. So, we have kept that in the mind from the beginning while developing it. So, we kind of developed it as a plug able microservices kind of packed data with containers. So the whole library could be given it as a container with a kind of a decentralized management command controls, which would help to manage the whole swarm network and to start and initiate and children enrollment of new hospitals or the new nodes into the swarm network. At the same time, we also looked into the task of the data scientists and then try to make it very, very easy for them to take their existing models and convert that into the swarm learning frameworks so that they can convert or enabled they're models to participate in a decentralized learning. So, we have made it to a set callable rest APIs. And I could say that the example, which we are working with the Professor either in the case of leukemia or in the COVID kind of things. The noodle network model. So we're kind of using the 10 layer neural network things. We could convert that into the swarm model with less than 10 lines of code changes. So, that's kind of a simply three we are looking at so that it helps to make it quicker, faster and loaded the benefits. >> So, that's an exciting thing here Dr. Goh is, this is not an R and D project. This is something that you're actually, implementing in a real world, even though it's a narrow example, but there are so many other examples that I'd love to talk about, but please, you had a comment. >> Yes. The key thing here is that in addition to allowing privacy to be kept at each hospital, you also have the issue of different hospitals having day to day skewed differently. Right? For example, a demographics could be that this hospital is seeing a lot more younger patients, and other hospitals seeing a lot more older patients. Right? And then if you are doing machine learning in isolation then your machine might be better at recognizing the condition in the younger population, but not older and vice versa by using this approach of swarm learning, we then have the biases removed so that both hospitals can detect for younger and older population. All right. So, this is an important point, right? The ability to remove biases here. And you can see biases in the different hospitals because of the type of cases they see and the demographics. Now, the other point that's very important to reemphasize is what precise Professor Schultze mentioned, right? It's how we made it very easy to implement this.Right? This started out being so, for example, each hospital has their own neural network and they training their own. All you do is we come in, as Pasad mentioned, change a few lines of code in the original, machine learning model. And now you're part of the collective swarm. This is how we want to easy to implement so that we can get again, as I like to call, hospitals of the world to uniting. >> Yeah. >> Without sharing private patient data. So, let's double click on that Professor. So, tell us about sort of your team, how you're taking advantage of this Dr. Goh, just describe, sort of the simplicity, but what are the skills that you need to take advantage of this? What's your team look like? >> Yeah. So, we actually have a team that's comes from physicians to biologists, from medical experts up to computational scientists. So, we have early on invested in having these interdisciplinary research teams so that we can actually spend the whole spectrum. So, people know about the medicine they know about them the biological basics, but they also know how to implement such new technology. So, they are probably a little bit spearheading that, but this is the way to go in the future. And I see that with many institutions going this way many other groups are going into this direction because finally medicine understands that without computational sciences, without artificial intelligence and machine learning, we will not answer those questions with this large data that we're using. So, I'm here fine. But I also realize that when we entered this project, we had basically our model, we had our machine learning model from the leukemia's, and it really took almost no efforts to get this into the swarm. So, we were really ready to go in very short time, but I also would like to say, and then it goes towards the bias that is existing in medicine between different places. Dr. Goh said this very nicely. It's one aspect is the patient and so on, but also the techniques, how we do clinical essays, we're using different robots a bit. Using different automates to do the analysis. And we actually try to find out what the Swan learning is doing if we actually provide such a bias by prep itself. So, I did the following thing. We know that there's different ways of measuring these transcriptomes. And we actually simulated that two hospitals had an older technology and a third hospital had a much newer technology, which is good for understanding the biology and the diseases. But it is the new technology is prone for not being able anymore to generate data that can be used to learn and then predicting the old technology. So, there was basically, it's deteriorating, if you do take the new one and you'll make a classifier model and you try old data, it doesn't work anymore. So, that's a very hard challenge. We knew it didn't work anymore in the old way. So, we've pushed it into swarm learning and to swarm recognize that, and it didn't take care of it. It didn't care anymore because the results were even better by bringing everything together. I was astonished. I mean, it's absolutely amazing. That's although we knew about this limitations on that one hospital data, this form basically could deal with it. I think there's more to learn about these advantages. Yeah. And I'm very excited. It's not only a transcriptome that people do. I hope we can very soon do it with imaging or the DCNE has 10 sites in Germany connected to 10 university hospitals. There's a lot of imaging data, CT scans and MRIs, Rachel Grimes. And this is the next next domain in medicine that we would like to apply as well as running. Absolutely. >> Well, it's very exciting being able to bring this to the clinical world And make it in sort of an ongoing learnings. I mean, you think about, again, coming back to the pandemic, initially, we thought putting people on ventilators was the right thing to do. We learned, okay. Maybe, maybe not so much the efficacy of vaccines and other therapeutics. It's going to be really interesting to see how those play out. My understanding is that the vaccines coming out of China, or built to for speed, get to market fast, be interested in U.S. Maybe, try to build vaccines that are maybe more longterm effective. Let's see if that actually occurs some of those other biases and tests that we can do. That is a very exciting, continuous use case. Isn't it? >> Yeah, I think so. Go ahead. >> Yes. I, in fact, we have another project ongoing to use a transcriptome data and other data like metabolic and cytokines that data, all these biomarkers from the blood, right? Volunteers during a clinical trial. But the whole idea of looking at all those biomarkers, we talking tens of thousands of them, the same thing again, and then see if we can streamline it clinical trials by looking at it data and training with that data. So again, here you go. Right? We have very good that we have many vaccines on. In candidates out there right now, the next long pole in the tenth is the clinical trial. And we are working on that also by applying the same concept. Yeah. But for clinical trials. >> Right. And then Prasad, it seems to me that this is a good, an example of sort of an edge use case. Right? You've got a lot of distributed data. And I know you've spoken in the past about the edge generally, where data lives bringing moving data back to sort of the centralized model. But of course you don't want to move data if you don't have to real time AI inferencing at the edge. So, what are you thinking in terms of other other edge use cases that were there swarm learning can be applied. >> Yeah, that's a great point. We could kind of look at this both in the medical and also in the other fields, as we talked about Professor just mentioned about this radiographs and then probably, Using this with a medical image data, think of it as a scenario in the future. So, if we could have an edge note sitting next to these medical imaging systems, very close to that. And then as in when this the systems producers, the medical immediate speed could be an X-ray or a CT scan or MRI scan types of thing. The system next to that, sitting on the attached to that. From the modernity is already built with the swarm lending. It can do the inferencing. And also with the new setup data, if it looks some kind of an outlier sees the new or images are probably a new signals. It could use that new data to initiate another round up as form learning with all the involved or the other medical images across the globe. So, all this can happen without really sharing any of the raw data outside of the systems but just getting the inferencing and then trying to make all of these systems to come together and try to build a better model. >> So, the last question. Yeah. >> If I may, we got to wrap, but I mean, I first, I think we've heard about swarm learning, maybe read about it probably 30 years ago and then just ignored it and forgot about it. And now here we are today, blockchain of course, first heard about with Bitcoin and you're seeing all kinds of really interesting examples, but Dr. Goh, start with you. This is really an exciting area, and we're just getting started. Where do you see swarm learning, by let's say the end of the decade, what are the possibilities? >> Yeah. You could see this being applied in many other industries, right? So, we've spoken about life sciences, to the healthcare industry or you can't imagine the scenario of manufacturing where a decade from now you have intelligent robots that can learn from looking at across men building a product and then to replicate it, right? By just looking, listening, learning and imagine now you have multiple of these robots, all sharing their learnings across boundaries, right? Across state boundaries, across country boundaries provided you allow that without having to share what they are seeing. Right? They can share, what they have lunch learnt You see, that's the difference without having to need to share what they see and hear, they can share what they have learned across all the different robots around the world. Right? All in the community that you allow, you mentioned that time, right? That will even in manufacturing, you get intelligent robots learning from each other. >> Professor, I wonder if as a practitioner, if you could sort of lay out your vision for where you see something like this going in the future, >> I'll stay with the medical field at the moment being, although I agree, it will be in many other areas, medicine has two traditions for sure. One is learning from each other. So, that's an old tradition in medicine for thousands of years, but what's interesting and that's even more in the modern times, we have no traditional sharing data. It's just not really inherent to medicine. So, that's the mindset. So yes, learning from each other is fine, but sharing data is not so fine, but swarm learning deals with that, we can still learn from each other. We can, help each other by learning and this time by machine learning. We don't have to actually dealing with the data sharing anymore because that's that's us. So for me, it's a really perfect situation. Medicine could benefit dramatically from that because it goes along the traditions and that's very often very important to get adopted. And on top of that, what also is not seen very well in medicine is that there's a hierarchy in the sense of serious certain institutions rule others and swarm learning is exactly helping us there because it democratizes, onboarding everybody. And even if you're not sort of a small entity or a small institutional or small hospital, you could become remembering the swarm and you will become as a member important. And there is no no central institution that actually rules everything. But this democratization, I really laugh, I have to say, >> Pasad, we'll give you the final word. I mean, your job is very helping to apply these technologies to solve problems. what's your vision or for this. >> Yeah. I think Professor mentioned about one of the very key points to use saying that democratization of BI I'd like to just expand a little bit. So, it has a very profound application. So, Dr. Goh, mentioned about, the manufacturing. So, if you look at any field, it could be health science, manufacturing, autonomous vehicles and those to the democratization, and also using that a blockchain, we are kind of building a framework also to incentivize the people who own certain set of data and then bring the insight from the data into the table for doing and swarm learning. So, we could build some kind of alternative monetization framework or an incentivization framework on top of the existing fund learning stuff, which we are working on to enable the participants to bring their data or insight and then get rewarded accordingly kind of a thing. So, if you look at eventually, we could completely make dais a democratized AI, with having the complete monitorization incentivization system which is built into that. You may call the parties to seamlessly work together. >> So, I think this is just a fabulous example of we hear a lot in the media about, the tech backlash breaking up big tech but how tech has disrupted our lives. But this is a great example of tech for good and responsible tech for good. And if you think about this pandemic, if there's one thing that it's taught us is that disruptions outside of technology, pandemics or natural disasters or climate change, et cetera, are probably going to be the bigger disruptions then technology yet technology is going to help us solve those problems and address those disruptions. Gentlemen, I really appreciate you coming on theCUBE and sharing this great example and wish you best of luck in your endeavors. >> Thank you. >> Thank you. >> Thank you for having me. >> And thank you everybody for watching. This is theCUBE's coverage of HPE discover 2020, the virtual experience. We'll be right back right after this short break. (upbeat music)

Published Date : Jun 24 2020

SUMMARY :

the globe it's theCUBE, But the conversation we're Thank you for having us, Dave. and Immunoregulation at the university Thank you all. is the Chief Technologist Thanks for having me. So guys, we have a CUBE first. Very good. I mean, here's the thing So, the ability to allow So, Prasad, and the team You're essentially the use case of for the future is that the new wave Okay and Prasad, you've been helping So, one of the use case we And based on all the experience we get And so the data is very rich and varied. of the blood. and the governments that even non And is the byproduct. Yeah. shared the learnings. and improve the models. And I could say that the that I'd love to talk about, because of the type of cases they see sort of the simplicity, and the diseases. and tests that we can do. Yeah, I think so. and then see if we can streamline it about the edge generally, and also in the other fields, So, the last question. by let's say the end of the decade, All in the community that you allow, and that's even more in the modern times, to apply these technologies You may call the parties to the tech backlash breaking up big tech the virtual experience.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
PrasadPERSON

0.99+

IndiaLOCATION

0.99+

Joachim SchultzePERSON

0.99+

DavePERSON

0.99+

Palo AltoLOCATION

0.99+

Dave VellantePERSON

0.99+

BostonLOCATION

0.99+

ChinaLOCATION

0.99+

SchultzePERSON

0.99+

GermanyLOCATION

0.99+

SingaporeLOCATION

0.99+

United StatesLOCATION

0.99+

10 sitesQUANTITY

0.99+

Prasad ShastryPERSON

0.99+

10 layerQUANTITY

0.99+

10 university hospitalsQUANTITY

0.99+

COVID-19OTHER

0.99+

GohPERSON

0.99+

50 different policiesQUANTITY

0.99+

two hospitalsQUANTITY

0.99+

thousandsQUANTITY

0.99+

two stepsQUANTITY

0.99+

Krishna Prasad ShastryPERSON

0.99+

pandemicEVENT

0.99+

thousands of yearsQUANTITY

0.99+

Eng Lim GohPERSON

0.99+

HPEORGANIZATION

0.99+

firstQUANTITY

0.99+

ACOORGANIZATION

0.99+

DCNEORGANIZATION

0.99+

European unionORGANIZATION

0.99+

each hospitalsQUANTITY

0.99+

less than 10 linesQUANTITY

0.99+

both hospitalsQUANTITY

0.99+

oneQUANTITY

0.99+

Rachel GrimesPERSON

0.99+

eachQUANTITY

0.99+

three guestsQUANTITY

0.99+

each cycleQUANTITY

0.99+

third hospitalQUANTITY

0.99+

each hospitalQUANTITY

0.98+

fourQUANTITY

0.98+

30 years agoDATE

0.98+

India Advanced Development CenterORGANIZATION

0.98+

bothQUANTITY

0.98+

tens of thousandsQUANTITY

0.98+

fourth time zoneQUANTITY

0.98+

threeQUANTITY

0.98+

one aspectQUANTITY

0.97+

EULOCATION

0.96+

five yearsQUANTITY

0.96+

2020DATE

0.96+

todayDATE

0.96+

Dr.PERSON

0.95+

PasadPERSON

0.95+

Chhandomay Mandal, Dell Technologies | CUBE Conversation, May 2020


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCUBE conversation. >> Hi, I'm Stu Miniman and welcome to a special CUBE conversation. Digging into some of the hottest topics in tech, of course, multicloud has been one of the big things we've been talking about for a number of years, the maturation from just cloud in general, hybrid cloud and multicloud. Happy to welcome back to the program, one of our CUBE alumni, Chhandomay Mandal. He's a director of marketing from Dell Technologies. Chhandomay, pleasure to see you. >> Happy to be here. >> All right, so last year we were together for Dell Technologies World, VM World and of course, I've seen how these solutions have been expanding out partnerships especially a lot of it from Dell's side on leveraging VMware technologies to extend and connect to what your customers are doing with their cloud strategies. So, give us the update as to you know, what you're hearing from customers and how Dell is moving to meet them. >> Sure. Cloud adoption is really growing and even from the three hyperscalers, AWS, Azure and Google Cloud. There are over 500 different services today. And with this fast pace of innovation, I see costumers adopting many different services from these public cloud vendors. And again, they want to add up the services because they are differentiated. They have workloads that can leverage their services and sometimes even from leveraging the same data set. One challenge that we're seeing is, how do customers move data around from one cloud to another so that they can take advantage of the great innovation that is happening with cloud storage or cloud providers? Because moving the data comes with not only the migration risk, but also huge egress fees, the time it takes. So, solving this customer's challenge is our number one priority in the cloud offering. >> Great, Chhandomay, you brought up a bunch of really good points there. Of course, nobody's solved the speed of light issue, so we know data has gravity, it's not easy to move it. And yeah, absolutely, you know, I've been saying for the last couple of years, that data is one of those flywheels, when it comes to the cloud. Well, once you've got it in there, it's not, you know kind, of the traditional lock-in, but I have access to the data, I have access to the services and it's not easy to move it out, even if customers would want to take advantage of multiple services from multiple clouds. So, I'd love to hear, you know, what's DELL's role in this discussion? How are you helping us make our data more of a multicloud-enabled environment? >> Absolutely, it's true. So with us, Dell Technologies, cloud storage for multicloud, we are delivering scalable, resilient cloud data storage with flexible, multi-cloud access options. Ideal for securely, deploying or moving, demanding applications to the cloud for many different use cases. The way we are doing it, effectively, that customers can leverage log or file storage consumed as a service, directly from any different clouds, like, AWS, Google cloud, Azure, and we are providing very high speed, low latency connections from Dell EMC storage, from our managed services provider location, using a direct cloud connect option and, let me give you like an example. We have Dell EMC Isilon industries number one scale-out NAS It has a very high performance drives, large throughput, all scales to multi-petabyte, use cases have multiple different protocol access simultaneously from many different applications. Now the same Isilon, today can be leveraged as Dell technology's cloud storage with direct access to Azure, Google cloud, AWS on zoomed in the cloud operating model. So now, you can run your applications in any cloud while having data sitting outside of your cloud with a high performance, high speed access that you need. That's where we are bringing the innovation and the value. >> Okay, and if I heard your right Chhandomay, this is a managed service solution because if I want that, you know, high speed, you know, direct connection, with Azure and with AWS, normally I need to be, you know, in some service provider, Dell of course has lots of partners that offer those services. I'm not just talking about, you know, connecting my ray that I have in my data center, connected over the internet. because that wouldn't necessarily give me the bandwidth and performance that I need. Did I get that correct? >> Yeah, absolutely because again, you need this connection and all locations with the hyperscalers to get the high speed connection. Say in the case of Microsoft Azure, the express route, it need to be co-located in a facility like right next to them, so that you have the high bandwidth, high performance that you need for this application. >> Yeah, that makes a lot of sense. It's kind of, you know, you're hyperscaler adjacent you just, through that connection it's relatively close. Might help if maybe if you've got, you know, a customer or an industry example of, you know, what the real life expectation and use cases, for a solution like this. >> Sure, so let me give you the example of genomics analysis. Now, is genome sequencer in a single cycle or a human being that creates 100 gigabytes of data and that's just like raw data. You need to run analysis, different types of analysis to check effects that are drug or something having on the DNA. Now, for example, NVIDIA Parabricks is a popular sequencing software that needs to be run on this data set. And again, it drives very high throughput. Sometimes it needs 100 gigabytes per second throughput to drive the performance. Now, we have worked with Microsoft Azure very closely and using Microsoft ExpressRoute, you can actually get that bandwidth, that throughput for running Parabricks, or nextgen sequencing or VMs in Azure leveraging Isilon. And in fact, we have worked with Azure to provide a completely egress fee data movement. So when this application is writing back data, to this application at it, as part of Dell technologies about storage, there is zero fee associated with it. And it's not just Microsoft Azure, right? You can have the same data set and run this Parabrick or nextgen sequencing VMs in Google cloud, AWS, Azure simultaneously. Thereby scaling up this process much faster. So, if you are a pharmaceutical company, trying to cure for disease spreading across the globe, you need to run, this on hundreds of thousands of patients creating hundreds of terabytes to petabytes of data, then, you can actually scale up the process across three or more different clouds very quickly. This truly shows you how you can leverage the power of Isilon, the scalable high performance storage in a multi cloud world. >> Yeah, very very interesting, you know, you talked about no cost for an egress fee and that, you know, can be one of those architectural killers. You think you have a good solution for a cloud, you put things out and then all of a sudden, you start getting things on your bill that you weren't expecting. So today, is there something special that the customer needs to do that it's for this service, that you're saying is that a partnership with Dell and Microsoft or you know, how does this differ from kind of the traditional egress fees that I'm used to getting or whether I'm using AWS, Azure or Google? >> So, this is like a, DELL and Microsoft Azure partnership. So that's where like you do not get, charged with the egress fee when the applications running in Azure, are connecting back to EMC storage as part of that cloud storage services. >> Okay, excellent, 'cause yeah, I mean, Chhandomay, I'm sure you're well familiar. A lot of times people look at cloud and they're saying, okay, when I look at the economic, if it's computer intensive, it makes a little bit more sense, if it's data intensive, there's lots of reasons that it might not make sense, that this is unlocking some of that data capabilities, I guess that leads to, you know, some of the opportunity around AI is of course, I need to think about my architecture. A lot of times data is not going to leave the edge environments, you know, autonomous vehicles is, you know, an obvious use case that we talk about. Usually there's training in a central location, but then I need to be able to actually do the work at the edge. So what does this, you know, cloud storage for multi cloud, how does AI fit into this whole gap? >> So, yes, for AI, you need to train very large data sets, for a long time, and you get to the results like you opt and you want. You gave the example of autonomous driving, right? The self driving car needs to understand many different scenarios, whether it's, icy road, a kid on the road, it's a slippery condition or you're running into a big wall, so on and so forth. Now, when it comes to dealing with this petabytes worth of data set and you need to train these models, okay? You need a very specific servers, GPU powered servers, okay. Now, to scale, you think that you'd go to the cloud and then you will be able to get the computer needs. However it turns out cloud is not an amorphous homogeneous place. Between the vendors, there is huge difference in terms of what GPU powered server you can get. And even like within one particular cloud vendor depending on the region, this vary widely. So it becomes critical, that you can have like data set that can be connected from many different clouds, from many different regions as you need it. And one more thing I want to highlight, AI is actually one area where these cloud providers are providing very differentiated services. So in the autonomous vehicle example, there are several stages of a model training depending on like what you are trying to achieve at one point in time. Now you can one day, or one part of the process, you can leverage AWS SageMaker for your model training. On the other part, you probably would like TensorFlow from Google cloud, good to it. Now when you have your data set outside of the cloud and you have the fast connection from many different clouds. You can take advantage not only, the different, GPU powered servers but also differentiated faster services that are available from this cloud providers. >> All right, so Chhandomay, how does the VMware Cloud Solution fit into this discussion? I know it's been a important piece of the DELL technologies cloud piece. So how does the multi cloud storage, VMware cloud and the multi cloud piece fit together? >> Sure, so the VMware cloud, on AWS is one of the key offering that we have, and it also fits into the multi cloud story very well. Actually, let me explain that with a customer example, okay. We have one of world's largest energy company down in Texas. They have four petabyte of Data Lake on Isilon, okay. And this is all seismic data, they are running analytics to workloads to figure out exactly which place in the ocean they should drill and, precision on here can be like millions or billions of dollars of difference. Now, they wanted to set up a secondary data center in the case of a disaster. What we were able to do is to spin up DR service for this customer leveraging Dell Technologies cloud storage. So they replicate the data to the cloud and then we spin up their DR environment with VMware Cloud on AWS, okay. And now the data is already in the cloud. So they got their, service with VMware Cloud on AWS but with the same data set. Now they are running those seismic analytics workloads from AWS, Google cloud and Azure, thereby speeding up the process of finding the next location to drill. So, you see the example where we leverage the EMC on AWS for DR as a service and since the data set is already there, now they are running their analytics to workloads for their regular operation. >> Great, well definitely a quite a bit of maturation in the Dell cloud solution, how that fits into multi cloud. Help put a point on it, Chhandomay, if you would, the conversations you're having with customers and Dell's role in the multi cloud discussion. >> Sure, so there are two important things. First, the ability to scale to many different clouds to leverage the different services, the compute infrastructure, so on and so forth. And the second part of it is depending on the applications, right, you might need to leverage for the same workload, working on the same data set, different services from different providers. Dell Technologies cloud storage for multi cloud is enabling that for our entire customer set. And I will, close out with one more important aspect. If you are a customer who is just starting your cloud journey or a one single cloud provider, go and see your cloud expert today. But still, you want to architect your solution so that, when the need comes, you can actually leverage multi cloud for compute or other services. So, if you decouple your services from like, where are your data is while doing the cloud access that actually makes your cloud architecture (mumbles). So, with Dell Technologies, cloud storage per multi cloud, we're helping customers not only today, but also for future. >> All right, well, Chhandomay Mandal, thanks so much for the updates, Congratulations to the theme on the process and look forward to talking to you again in the future. >> Thank you. >> All right, I'm Stu Miniman, thank you so much for watching theCUBE. (gentle music)

Published Date : May 20 2020

SUMMARY :

leaders all around the world, been one of the big things and how Dell is moving to meet them. and even from the three hyperscalers, So, I'd love to hear, you know, speed access that you need. normally I need to be, you so that you have the high It's kind of, you know, that needs to be run on this data set. that the customer needs to do So that's where like you do not get, the edge environments, you know, On the other part, you probably would like and the multi cloud piece fit together? of finding the next location to drill. and Dell's role in the First, the ability to scale to you again in the future. thank you so much for watching theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DellORGANIZATION

0.99+

ChhandomayPERSON

0.99+

May 2020DATE

0.99+

DELLORGANIZATION

0.99+

TexasLOCATION

0.99+

MicrosoftORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

AWSORGANIZATION

0.99+

BostonLOCATION

0.99+

Palo AltoLOCATION

0.99+

one dayQUANTITY

0.99+

millionsQUANTITY

0.99+

Chhandomay MandalPERSON

0.99+

Dell TechnologiesORGANIZATION

0.99+

FirstQUANTITY

0.99+

100 gigabytesQUANTITY

0.99+

threeQUANTITY

0.99+

GoogleORGANIZATION

0.99+

todayDATE

0.99+

last yearDATE

0.99+

One challengeQUANTITY

0.99+

hundreds of terabytesQUANTITY

0.99+

over 500 different servicesQUANTITY

0.99+

Dell TechnologiesORGANIZATION

0.98+

NVIDIAORGANIZATION

0.98+

oneQUANTITY

0.98+

three hyperscalersQUANTITY

0.98+

Dell Technologies WorldORGANIZATION

0.97+

EMCORGANIZATION

0.97+

one pointQUANTITY

0.97+

one partQUANTITY

0.97+

theCUBE StudiosORGANIZATION

0.96+

one areaQUANTITY

0.96+

hundreds of thousands of patientsQUANTITY

0.96+

CUBEORGANIZATION

0.96+

VMware CloudTITLE

0.95+

second partQUANTITY

0.95+

two important thingsQUANTITY

0.93+

AzureTITLE

0.93+

one more thingQUANTITY

0.93+

billions of dollarsQUANTITY

0.92+

one cloudQUANTITY

0.92+

IsilonLOCATION

0.9+

AzureORGANIZATION

0.9+

VM WorldORGANIZATION

0.88+

VMware cloudTITLE

0.88+

IsilonORGANIZATION

0.87+

TensorFlowTITLE

0.86+

single cycleQUANTITY

0.86+

zero feeQUANTITY

0.85+

Sreeram Visvanathan, IBM | IBM Think 2020


 

>>From the cube studios in Palo Alto in Boston covering IBM thing brought to you by IBM. >>Hi everybody. We're back and this is Dave Vellante and you're watching the cubes continuous coverage of the IBM think 2020 digital events experience Sriram these monotonous here he is the global managing director for government healthcare and life sciences three. Ron, thanks so much for coming on the cube. >>Great to be with you Dave. I wish we were Darren but it's, it's great to be here digitally indeed >>be good to be face to face and in San Francisco but this certainly will help our audience understand what's happening in these critical sectors. I mean you were at the heart of it. I mean these are three sectors and then there are sub sectors in there. Let's try to understand how you're communicating with your clients, what you've been doing in the near term and then I want to really try to understand, you know, what you see coming out of this, but please tell us what's been going on in your, in your world. >>You're right. I mean these sectors are keeping, keeping the engine running right now in terms of keeping society running, right? So if you look at the federal government, the state government, the local government, you look at providers of healthcare, you look at payers, we're making sure that their members are getting the, getting the advice and the service they need. You look at a life sciences companies or rapidly trying to find a cure for this, uh, for this virus. And then you look at education where, um, you know, the educational establishments are trying to work remotely and make sure that our children get the education they need. So kind of existential industries right front and center of this ninety-five, interestingly, they have 95% of IBM has, have continued to work from home and yet we are able to support the core operations of our clients. So if you look at some of the things that we've been doing over the last eight or nine weeks that we've been under this kind of lockdown, um, IBM, IBM is involved in the engine room. >>I would like to call it the engine room of many of these operations, right? Whether it just to keep a city running or a hospital running. Um, our systems, our software, our services teams are engaged in making sure that the core systems that allow those entities to function are actually operational, um, during these times. So we've had no blips. We've been able to support that. And that's a, that's a key part of it. Now, of course, there are extraordinary things we've done on top. For instance, you know, in the first two weeks after the crisis started, we used, um, a supercomputer with the department of energy that you must've heard about, uh, to narrow down over 8,000 compounds that could potentially be cures for the COBIT 19 virus and narrowed down to 80. That could be applicable, right? Um, so sharpening the time and allowing researchers not to focus on 80 compounds and stuff, 8,000 so that we can get a vaccine to market faster. >>And that's tremendous, right? I mean we've, we formed a, um, uh, you know, collaboration, uh, with, with 27 other, uh, partners, uh, that, who are all co innovating, uh, using modeling techniques, uh, to try and find a cure faster. The other end, um, you look at things like what we're doing with the state of New York, where we work for the government, uh, the duet to get 350,000 tablets with the right security software, with the right educational software so that students can continue to learn while, uh, you know, what they are, uh, when they're remote, but the right connectivity. So, um, extremes. And then of course as a backbone, you know, be using, we are starting to see real use of our AI tools, chat bots to stop it, that we have. Uh, we have allowed, uh, uh, customers to use for free. So they began answer that we can, we can consume the latest CDC advice, the latest advice from the governors and the state, and then, um, allow the technology to answer a lot of queries that are coming through, uh, with, with, uh, with citizens being worried about what, where they stand every single day. >>Yeah. So let's kind of break down some of the sectors that you follow. Um, let's start with, with government. I mean, certainly in the United States it's been all about the fiscal policy, the monetary policy, injecting cash into the system, liquidity, you know, supporting the credit markets. Certainly central banks around the world are facing, you know, similar, but somewhat different depending on their financial situations. Um, and so that's been the near term tactical focus and it actually seems to be working pretty well. Uh, you know, the stock market's any indicator, but going forward, I'm interested in your thoughts. You wrote a blog and you basically, it was a call to action to the government to really kind of reinvent its workforce, bringing in, uh, millennials. Um, and, and so my, my, my question is, how do you think the millennial workforce, you know, when we exit this thing, will embrace the government. What does the government have to do to attract millennials who want the latest and greatest technology? I mean, give us your thoughts on that. >>Well, it's an, it's a really interesting question. A couple of years ago I was talking about, uh, this is the time where governments have to have to really transform. They have to change. If you, if you go back in time compared governments to other industries, uh, governments have embraced technology, but it's been still kind of slow, incremental, right? Lots of systems of record, big massive systems that take 10 years, five years to implement. So we've implemented systems record. We've, we've started using data and analytics to kind of inform policymaking, but they tend to be sequential. And I think, uh, you know, coming back to the, the, the changing workforce, uh, what is it? By 2025, 75% of the workforce are going to be millennials, right? Um, and as they come into the workforce, I think they're going to demand that, uh, that we work in new ways in new, um, more integrated, more digitally savvy pace and uh, strange enough, I think this crisis is going to be a, is a proof point, right? >>Um, many governments are working remotely and yet they're functioning okay. Um, the, the, the world of, um, you know, providing policy seems to be working even if you are, if you are remote. So a lot of the naysayers who said we could not operate digit, operate digitally, um, now are starting to starting to get past that, uh, that bias if you like. And so I think as, as digital natives come into the for what we are going to see is this is a Stressless innovation of why do we do things the same way as we've done them for the last 20, 30 years. Um, granted we need to still have the, um, the, the division of policies, make sure that we are enforcing the policies of government. But at the same time, if you look at workflow, uh, this is the time where you can use automation, intelligent workflows, right? >>This is the time where we can use insights about what our citizens need so that services are tuned, a hyper-local are relevant to what the citizen is going through at that particular time. Uh, contextual and, um, are relevant to what, what that individual needs at that particular time. Uh, rather than us having to go to a portal and, uh, submit an application and submit relevant documents and then be told a few hours or a few minutes later then that you've got, you've got approval for something, right? So I think there's this period of restless innovation coming through that is from a citizen engagement perspective, but behind the scenes in terms of how budgeting works, how approvals work, how uh, uh, you know, the divisions between federal, state, local, how the handoffs between agencies work. All of that is going to be restlessly innovative. And, uh, this is the moment I think this is going to be a trigger point. We believe it's going to be a trigger point for that kind of a transformation? >>No, sure. I'm, I've talked to a number of, of CEOs in, in sort of hard hit industries, um, hospitality, you know, certainly, you know, the restaurant business, airlines and, and you know, they just basically have a dial down spending, um, and really just shift to only mission critical activities. Uh, and in your segments it's, it's mixed, right? I mean, obviously government, you use the engine room, uh, analogy before some of use the war room metaphor, but you think about healthcare, the frontline workers. So it's, it's, it's mixed what our CIO is telling you in, in the industries in which you're focused. >>Well, the CIO is right now. I mean, you're going to go through different phases, right? Phase one is just reactive. It's just coping with the, uh, with the situation today where you suddenly have 95%, a hundred percent of your workforce working remote, providing the ability to, it's providing the leadership, the ability to, to work remotely where possible. Um, and it take IBM for instance, you know, we've got 300,000 people around the world, but 95% of whom are working remotely. Um, but we've been, we've been preparing for moments like this where, uh, you know, we've got the tools, we've got the network bandwidth, we've got the security parameters. Uh, we have been modernizing our applications. Um, so you've been going to a hybrid cloud kind of architecture, but you're able to scale up and scale down, stand up additional capacity when you need it. So I think a lot of the CEOs that we talk to are, uh, you know, phase one was all about how do I keep everything running? >>Phase two is how do I prepare for the new norm where I think more collaborative tools are going to come into, into the work environment. Um, CEO's are going to be much more involved in how do I get design in the center of everything that we do no matter what kind of industry. Alright. So, um, it's, it's going gonna be an interesting change as to the role of the CIO going forward. Dave and I think, uh, again, it's a catalyst to saying why do we have to do things the same way we've been doing? Why do we need so many people in an office building doing things in traditional ways? And why can't we use these digital techniques as the new norm? >>Yeah, there are a lot of learnings going on and I think huge opportunities to, to, to, to save money going forward because we've had to do that in the near term. But, but more importantly, it's like how are we going to invest in the future? And that's, that's something that I think a lot of people are beginning now to think about. They haven't had much time to do anything other than think tactically. But now we're at the point where, okay, we're maybe starting to come out of this a little bit, trying to envision how we come back. And organizations I think are beginning to think about, okay, what is our mid to longer term strategy? It's, we're not just going to go back to 2019. So what do we do going forward? So we're starting to spend more cycles and more energy, you know, on that topic. What do you see? >>Yeah, I mean, take every segment of my, uh, my sector, right? Take the education industry, will you, uh, will you spend 60, $70,000 a year to send a child to university, um, when a lot of the learning is available digitally and when, when we've seen that they can learn as much and probably more, uh, you know, more agile manner and follow their interests. So I think the whole education industry is going to leverage digital in a big way. And I think you're going to see partnerships form, you can see more, uh, you're going to see more choice, uh, for the student and for the parents, uh, in the education industry. And so that industry, which has been kind of falling the same type of pattern, uh, you know, for a hundred years, it's suddenly going to reinvent itself. Take the healthcare industry. Um, you know, it's interesting, a lot of providers are following, uh, following staff because elective, uh, elective treatment as really, you know, uh, fallen tremendously. >>Right? On the one hand you have huge demand for covert 19 related, uh, treatment on the other hand, electives have come down. So cost is a big issue. So I, I believe we're going to see M and a activity, uh, in that sector. And as you see that what's going to happen is people are gonna, uh, restlessly reinvent. So w you know, I think telemedicine is going to, is not going to become a reality. I think, um, if you look at the payer space and if you look at the insurance providers, they're all going to be in the market saying, Harbor, how do I capture more members and retain them and how do I give them more choice? Um, and how do I keep them safe? It's interesting, I was speaking to a colleague in Japan, uh, yesterday and he was saying to me in the automotive industry that, um, I was arguing that, you know, you will see a huge downfall. >>Uh, but his argument back was people are actually so afraid of taking public transport that, uh, they're expecting to see a spike in personal transportation. Right? So I think from a government perspective, the kind of policy implications, um, you know, whether there would be economic stimulus related in the short term, governments are going to introduce inefficiencies to get the economy back to where it needs to be. But over a long term I think we're back to these efficiencies. We are going to look at supply chain, there's going to be a postmortem on how do we get where we got to now. And um, so I think in terms of citizen engagement, in terms of supply chain, in terms of back office operations, in terms of how agencies coordinate, um, do stockpiling command and control, all of that is going to change, right? And it's an exciting time in a way to be at the forefront of these industries shaping, shaping the future. >>I want to ask your thoughts on, on education and excuse me, drill into that a little bit. I've actually got pretty personal visibility in sort of let's, let's break it down. Um, you know, secondary universities, uh, nine through 12 and K through six and then you're seeing some definite differences. Uh, I think actually the universities are pretty well set up. They've been doing online courses for quite some time. They've, they've started, you know, revenue streams in that regard and, and so their technology is pretty good and their processes are pretty good at the other end of the spectrum, sort of the K through six, you know, there's a lot of homeschooling going on and, and parents are at home, they're adjusting pretty well. Whether it's young kids with manipulatives or basic math and vocabulary skills, they're able to support that and you know, adjust their work lives accordingly. >>I find in the, in the high school it's, it's really different. I mean it's new to these folks. I had an interesting conversation with my son last night and he was explaining to me, he spends literally hours a day just trying to figure out what he's got to do because every process is different from every teacher. And so that's that sort of fat middle, if you will, which is a critical time, especially for juniors in high school and so forth where that is so new. And I wonder what you're seeing and maybe those three sectors, is that sort of consistent with what you see and, and what do you see coming out of this? >>I think it's, it's broadly consistent and I have personally experienced, I have one university grade, uh, university senior and I have a high school senior and I see pretty much the same pattern no matter which part of the world they're in. Right? I, I do believe that, um, you know, this notion of choice for students and how they learn and making curriculum customized to get the best out of students is the new reality. How fast we will get there. How do you get there? It's not a linear line. I think what is going to happen is you're going to, you're going to see partnerships between, uh, content providers. You're going to see partnerships between platform providers and you're going to see these educational institutions, uh, less restless. The reinvent to say, okay, this particular student learns in this way and this is, this is how I shape a personalized curriculum, but still achieving a minimum outcome. Right? I think that's going to come, but it's going to take a few years to get there. >>I think it was a really interesting observations. I mean, many children that I observed today are sort of autodidactic and if you give them the tooling to actually set their own learning curriculum, they'll, they'll absorb that and obviously the technology has gotta be there to support it. So it's sort of hitting the escape key. Let's sort of end on that. I mean, in terms of just IBM, how you're positioning in the industries that you're focused on to help people take this new technology journey. As they said, we're not going back to the last decade. It's a whole new world that we're going to going to come out of this post. Coven, how do you see IBM has positioned their Sri round? >>Dave, I think I'd be positioned brilliantly. Um, as you know, we've, Arvind Christianized is our new CEO and, uh, he, he recently talked about this on CNBC. So if you look at the core platforms that we've been building, right? Um, so CA occupies an industry, whether it's, whether it be government, healthcare, life sciences or education are going to look for speed. They're going to look for agility, they're going to look to change processes quickly so they can, they can react to situations like this in the future in a much more agile way, right? In order to do that, their it systems, their applications, their infrastructure needs to scale up and down needs to be, uh, you need to be able to configure things in a way where you can change parameters. You can change policies without having to read a long time, right? And so if you think about things like HyperCloud our investment in, uh, in, in red hat, uh, our, uh, our, uh, position on data and open technologies and, um, you know, our policies around making sure that, that our client's data and insights are their insights and we don't, we don't want to taste that. >>On of those things. Our investments in blockchain are deep, deep, uh, incumbency in services. But there'd our technology services, our consulting services, our deep industry knowledge, allowing all of these technologies to be used at to solve these problems. Um, I think we are really well positioned and, uh, you know, a great example is the New York example, right? So, uh, getting 350,000 students to work in a completely new way in a matter of two weeks. It's not something that every single company can do. It's not just a matter of providing the tech, the tool itself, it's the content, it's the consumption, it's the design must experience. And that's where a company like IBM can bring everything together. And then you have the massive issues of government, like social reform, like mental health, like making sure the stimulus money is going to the people who need it the most, um, in, in the most useful way. And that's where I work between industries, between government and banks and other industries really comes to, comes to fruition. So I think we have the technology but the services depth. And I think we've got the relevance of the industry to make a difference. And I'm excited about the future. >>Well, it's interesting that you mentioned, you know, the basically one of my takeaways is that you've got to be agile. You've gotta be flexible. You, you've been in the consulting business for most of your career and in the early part of your career. And even up until, you know, maybe recently we were automating processes that we knew well, but today the processes are, we so much is unknown. And so you've got to move fast. You've got to be agile, you've got to experiment, uh, and apply that sort of, you know, test, experiment, methodology and iterate and have that continuous improvement. That's a different world than what we've known. Obviously. You know, as I say, you've seen this over the decades. Uh, your final thoughts on, uh, on the future. >>Well, my final thoughts are, um, yeah, you're exactly right. I mean, if I take a simple example, right, that, that, uh, controls how quickly the commerce works. Think about simple things like bill of lading. Uh, the government has to issue a federal government has to prove that a state government has to prove it and local government has to prove it. Why? That's the way we've been doing it for a long time. Right? There are control points, but to your point, imagine if you can shorten that from a seven day cycle to a seven second cycle. The impact on commerce, the impact on GDP, and this is one simple process. This is the time for us to re to, to, to break it all apart and say why not do something differently? And the technology is right. The CA, the AI is getting more and more and more mature and you've got interesting things like quantum to look forward to. So I think the timing is right for, for reinventing, uh, the core of this industry. >>Yeah, I think they really are. I mean, it's difficult as this crisis has been a lot of opportunities going to present coming out of a tree room. Thanks so much for coming on the cube and making this happen. Really appreciate your time. It's great to be here. Thank you for having me. Dave, you're very welcome and thank you everybody for watching. This is Dave Volante for the Cuban or continuous coverage of the IBM think 2020 digital event experience. Keep it right there and we right back right after this short break.

Published Date : May 5 2020

SUMMARY :

IBM thing brought to you by IBM. Ron, thanks so much for coming on the cube. Great to be with you Dave. you know, what you see coming out of this, but please tell us what's been going on in your, And then you look at education where, um, you know, the educational establishments are trying to work remotely Um, so sharpening the time and allowing researchers not to focus on 80 compounds and continue to learn while, uh, you know, what they are, uh, when they're remote, but the right connectivity. injecting cash into the system, liquidity, you know, supporting the credit markets. And I think, uh, you know, coming back to the, the, the changing workforce, uh, But at the same time, if you look at workflow, uh, this is the time where you can use automation, works, how approvals work, how uh, uh, you know, the divisions between um, hospitality, you know, certainly, you know, the restaurant business, Um, and it take IBM for instance, you know, we've got 300,000 people around the Um, CEO's are going to be much more involved in So we're starting to spend more cycles and more energy, you know, on that topic. of pattern, uh, you know, for a hundred years, it's suddenly going to reinvent itself. I think, um, if you look at the payer space and if you look at the insurance providers, um, you know, whether there would be economic stimulus related in the short term, they're able to support that and you know, adjust their work lives accordingly. and maybe those three sectors, is that sort of consistent with what you see and, and what do you see coming um, you know, this notion of choice for students and and if you give them the tooling to actually set their own learning curriculum, to be, uh, you need to be able to configure things in a way where you can change parameters. and, uh, you know, a great example is the New York example, And even up until, you know, maybe recently we were Uh, the government has to issue a federal government has to prove that a state government has to prove it and local I mean, it's difficult as this crisis has been a lot of opportunities going to present

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

RonPERSON

0.99+

JapanLOCATION

0.99+

Arvind ChristianizedPERSON

0.99+

10 yearsQUANTITY

0.99+

Dave VolantePERSON

0.99+

five yearsQUANTITY

0.99+

San FranciscoLOCATION

0.99+

95%QUANTITY

0.99+

2019DATE

0.99+

Palo AltoLOCATION

0.99+

New YorkLOCATION

0.99+

80 compoundsQUANTITY

0.99+

Sreeram VisvanathanPERSON

0.99+

two weeksQUANTITY

0.99+

300,000 peopleQUANTITY

0.99+

8,000QUANTITY

0.99+

350,000 tabletsQUANTITY

0.99+

yesterdayDATE

0.99+

75%QUANTITY

0.99+

SriramPERSON

0.99+

CovenPERSON

0.99+

United StatesLOCATION

0.99+

350,000 studentsQUANTITY

0.99+

seven dayQUANTITY

0.99+

2025DATE

0.99+

27QUANTITY

0.99+

CNBCORGANIZATION

0.99+

80QUANTITY

0.99+

BostonLOCATION

0.99+

sixQUANTITY

0.98+

todayDATE

0.98+

ninety-fiveQUANTITY

0.98+

DarrenPERSON

0.97+

last nightDATE

0.97+

oneQUANTITY

0.97+

over 8,000 compoundsQUANTITY

0.96+

12QUANTITY

0.96+

one universityQUANTITY

0.96+

one simple processQUANTITY

0.96+

seven second cycleQUANTITY

0.95+

nineQUANTITY

0.95+

three sectorsQUANTITY

0.94+

hours a dayQUANTITY

0.94+

three sectorsQUANTITY

0.93+

last decadeDATE

0.93+

IBM think 2020EVENT

0.93+

nine weeksQUANTITY

0.91+

COBIT 19 virusOTHER

0.89+

couple of years agoDATE

0.87+

first two weeksQUANTITY

0.87+

hundred percentQUANTITY

0.87+

Phase twoQUANTITY

0.83+

CALOCATION

0.83+

60, $70,000 a yearQUANTITY

0.82+

single dayQUANTITY

0.81+

eightQUANTITY

0.78+

HyperCloudORGANIZATION

0.77+

hundred yearsQUANTITY

0.75+

Think 2020EVENT

0.7+

phase oneQUANTITY

0.69+

governmentORGANIZATION

0.66+

CubanOTHER

0.65+

CDCORGANIZATION

0.65+

HarborORGANIZATION

0.64+

19QUANTITY

0.63+

PhaseQUANTITY

0.63+

single companyQUANTITY

0.63+

30 yearsQUANTITY

0.62+

20QUANTITY

0.61+

few minutes laterDATE

0.6+

oneOTHER

0.52+

2020DATE

0.51+

lastDATE

0.42+

UNLIST TILL 4/2 - Vertica @ Uber Scale


 

>> Sue: Hi, everybody. Thank you for joining us today, for the Virtual Vertica BDC 2020. This breakout session is entitled "Vertica @ Uber Scale" My name is Sue LeClaire, Director of Marketing at Vertica. And I'll be your host for this webinar. Joining me is Girish Baliga, Director I'm sorry, user, Uber Engineering Manager of Big Data at Uber. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click Submit. There will be a Q and A session, at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. Alternately, you can also Vertica forums to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. And as a reminder, you can maximize your screen by clicking the double arrow button, in the lower right corner of the slides. And yet, this virtual session is being recorded, and you'll be able to view on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Girish over to you. >> Girish: Thanks a lot Sue. Good afternoon, everyone. Thanks a lot for joining this session. My name is Girish Baliga. And as Sue mentioned, I manage interactive and real time analytics teams at Uber. Vertica is one of the main platforms that we support, and Vertica powers a lot of core business use cases. In today's talk, I wanted to cover two main things. First, how Vertica is powering critical business use cases, across a variety of orgs in the company. And second, how we are able to do this at scale and with reliability, using some of the additional functionalities and systems that we have built into the Vertica ecosystem at Uber. And towards the end, I also have a little extra bonus for all of you. I will be sharing an easy way for you to take advantage of, many of the ideas and solutions that I'm going to present today, that you can apply to your own Vertica deployments in your companies. So stick around and put on your seat belts, and let's go start on the ride. At Uber, our mission is to ignite opportunity by setting the world in motion. So we are focused on solving mobility problems, and enabling people all over the world to solve their local problems, their local needs, their local issues, in a manner that's efficient, fast and reliable. As our CEO Dara has said, we want to become the mobile operating system of local cities and communities throughout the world. As of today, Uber is operational in over 10,000 cities around the world. So, across our various business lines, we have over 110 million monthly users, who use our rides, services, or eat services, and a whole bunch of other services that we provide to Uber. And just to give you a scale of our daily operations, we in the ride business, have over 20 million trips per day. And that each business is also catching up, particularly during the recent times that we've been having. And so, I hope these numbers give you a scale of the amount of data, that we process each and every day. And support our users in their analytical and business reporting needs. So who are these users at Uber? Let's take a quick look. So, Uber to describe it very briefly, is a lot like Amazon. We are largely an operation and logistics company. And employee work based reflects that. So over 70% of our employees work in teams, which come under the umbrella of Community Operations and Centers of Excellence. So these are all folks working in various cities and towns that we operate around the world, and run the Uber businesses, as somewhat local businesses responding to local needs, local market conditions, local regulation and so forth. And Vertica is one of the most important tools, that these folks use in their day to day business activities. So they use Vertica to get insights into how their businesses are going, to deeply into any issues that they want to triage , to generate reports, to plan for the future, a whole lot of use cases. The second big class of users, are in our marketplace team. So marketplace is the engineering team, that backs our ride shared business. And as part of this, running this business, a key problem that they have to solve, is how to determine what prices to set, for particular rides, so that we have a good match between supply and demand. So obviously the real time pricing decisions they're made by serving systems, with very detailed and well crafted machine learning models. However, the training data that goes into this models, the historical trends, the insights that go into building these models, a lot of these things are powered by the data that we store, and serve out of Vertica. Similarly, in each business, we have use cases spanning all the way from engineering and back-end systems, to support operations, incentives, growth, and a whole bunch of other domains. So the big class of applications that we support across a lot of these business lines, is dashboards and reporting. So we have a lot of dashboards, which are built by core data analysts teams and shared with a whole bunch of our operations and other teams. So these are dashboards and reports that run, periodically say once a week or once a day even, depending on the frequency of data that they need. And many of these are powered by the data, and the analytics support that we provide on our Vertica platform. Another big category of use cases is for growth marketing. So this is to understand historical trends, figure out what are various business lines, various customer segments, various geographical areas, doing in terms of growth, where it is necessary for us to reinvest or provide some additional incentives, or marketing support, and so forth. So the analysis that backs a lot of these decisions, is powered by queries running on Vertica. And finally, the heart and soul of Uber is data science. So data science is, how we provide best in class algorithms, pricing, and matching. And a lot of the analysis that goes into, figuring out how to build these systems, how to build the models, how to build the various coefficients and parameters that go into making real time decisions, are based on analysis that data scientists run on Vertica systems. So as you can see, Vertica usage spans a whole bunch of organizations and users, all across the different Uber teams and ecosystems. Just to give you some quick numbers, we have over 5000 weekly active, people who run queries at least once a week, to do some critical business role or problem to solve, that they have in their day to day operations. So next, let's see how Vertica fits into the Uber data ecosystem. So when users open up their apps, and request for a ride or order food delivery on each platform, the apps are talking to our serving systems. And the serving systems use online storage systems, to store the data as the trips and eat orders are getting processed in real time. So for this, we primarily use an in house built, key value storage system called Schemaless, and an open source system called Cassandra. We also have other systems like MySQL and Redis, which we use for storing various bits of data to support serving systems. So all of this operations generates a lot of data, that we then want to process and analyze, and use for our operational improvements. So, we have ingestion systems that periodically pull in data from our serving systems and land them in our data lake. So at Uber a data lake is powered by Hadoop, with files stored on HDFS clusters. So once the raw data lines on the data lake, we then have ETL jobs that process these raw datasets, and generate, modeled and customize datasets which we then use for further analysis. So once these model datasets are available, we load them into our data warehouse, which is entirely powered by Vertica. So then we have a business intelligence layer. So with internal tools, like QueryBuilder, which is a UI interface to write queries, and look at results. And it read over the front-end sites, and Dashbuilder, which is a dash, board building tool, and report management tool. So these are all various tools that we have built within Uber. And these can talk to Vertica and run SQL queries to power, whatever, dashboards and reports that they are supporting. So this is what the data ecosystem looks like at Uber. So why Vertica and what does it really do for us? So it powers insights, that we show on dashboards as folks use, and it also powers reports that we run periodically. But more importantly, we have some core, properties and core feature sets that Vertica provides, which allows us to support many of these use cases, very well and at scale. So let me take a brief tour of what these are. So as I mentioned, Vertica powers Uber's data warehouse. So what this means is that we load our core fact and dimension tables onto Vertica. The core fact tables are all the trips, all the each orders and all these other line items for various businesses from Uber, stored as partitioned tables. So think of having one partition per day, as well as dimension tables like cities, users, riders, career partners and so forth. So we have both these two kinds of datasets, which will load into Vertica. And we have full historical data, all the way since we launched these businesses to today. So that folks can do deeper longitudinal analysis, so they can look at patterns, like how the business has grown from month to month, year to year, the same month, over a year, over multiple years, and so forth. And, the really powerful thing about Vertica, is that most of these queries, you run the deep longitudinal queries, run very, very fast. And that's really why we love Vertica. Because we see query latency P90s. That is 90 percentile of all queries that we run on our platform, typically finish in under a minute. So that's very important for us because Vertica is used, primarily for interactive analytics use cases. And providing SQL query execution times under a minute, is critical for our users and business owners to get the most out of analytics and Big Data platforms. Vertica also provides a few advanced features that we use very heavily. So as you might imagine, at Uber, one of the most important set of use cases we have is around geospatial analytics. In particular, we have some critical internal dashboards, that rely very heavily on being able to restrict datasets by geographic areas, cities, source destination pairs, heat maps, and so forth. And Vertica has a rich array of functions that we use very heavily. We also have, support for custom projections in Vertica. And this really helps us, have very good performance for critical datasets. So for instance, in some of our core fact tables, we have done a lot of query and analysis to figure out, how users run their queries, what kind of columns they use, what combination of columns they use, and what joints they do for typical queries. And then we have laid out our custom projections to maximize performance on these particular dimensions. And the ability to do that through Vertica, is very valuable for us. So we've also had some very successful collaborations, with the Vertica engineering team. About a year and a half back, we had open-sourced a Python Client, that we had built in house to talk to Vertica. We were using this Python Client in our business intelligence layer that I'd shown on the previous slide. And we had open-sourced it after working closely with Eng team. And now Vertica formally supports the Python Client as an open-source project, which you can download to and integrate into your systems. Another more recent example of collaboration is the Vertica Eon mode on GCP. So as most of or at least some of you know, Vertica Eon mode is formally supported on AWS. And at Uber, we were also looking to see if we could run our data infrastructure on GCP. So Vertica team hustled on this, and provided us early preview version, which we've been testing out to see how performance, is impacted by running on the Cloud, and on GCP. And so far, I think things are going pretty well, but we should have some numbers about this very soon. So here I have a visualization of an internal dashboard, that is powered solely by data and queries running on Vertica. So this GIF has sequence have different visualizations supported by this tool. So for instance, here you see a heat map, downgrading heat map of source of traffic demand for ride shares. And then you will see a bunch of arrows here about source destination pairs and the trip lines. And then you can see how demand moves around. So, as the cycles through the various animations, you can basically see all the different kinds of insights, and query shapes that we send to Vertica, which powers this critical business dashboard for our operations teams. All right, so now how do we do all of this at scale? So, we started off with a single Vertica cluster, a few years back. So we had our data lake, the data would land into Vertica. So these are the core fact and dimension tables that I just spoke about. And then Vertica powers queries at our business intelligence layer, right? So this is a very simple, and effective architecture for most use cases. But at Uber scale, we ran into a few problems. So the first issue that we have is that, Uber is a pretty big company at this point, with a lot of users sending almost millions of queries every week. And at that scale, what we began to see was that a single cluster was not able to handle all the query traffic. So for those of you who have done an introductory course, on queueing theory, you will realize that basically, even though you could have all the query is processed through a single serving system. You will tend to see larger and larger queue wait times, as the number of queries pile up. And what this means in practice for end users, is that they are basically just seeing longer and longer query latencies. But even though the actual query execution time on Vertica itself, is probably less than a minute, their query sitting in the queue for a bunch of minutes, and that's the end user perceived latency. So this was a huge problem for us. The second problem we had was that the cluster becomes a single point of failure. Now Vertica can handle single node failures very gracefully, and it can probably also handle like two or three node failures depending on your cluster size and your application. But very soon, you will see that, when you basically have beyond a certain number of failures or nodes in maintenance, then your cluster will probably need to be restarted or you will start seeing some down times due to other issues. So another example of why you would have to have a downtime, is when you're upgrading software in your clusters. So, essentially we're a global company, and we have users all around the world, we really cannot afford to have downtime, even for one hour slot. So that turned out to be a big problem for us. And as I mentioned, we could have hardware issues. So we we might need to upgrade our machines, or we might need to replace storage or memory due to issues with the hardware in there, due to normal wear and tear, or due to abnormal issues. And so because of all of these things, having a single point of failure, having a single cluster was not really practical for us. So the next thing we did, was we set up multiple clusters, right? So we had a bunch of identities clusters, all of which have the same datasets. So then we would basically load data using ingestion pipelines from our data lake, onto each of these clusters. And then the business intelligence layer would be able to query any of these clusters. So this actually solved most of the issues that I pointed out in the previous slide. So we no longer had a single point of failure. Anytime we had to do version upgrades, we would just take off one cluster offline, upgrade the software on it. If we had node failures, we would probably just take out one cluster, if we had to, or we would just have some spare nodes, which would rotate into our production clusters and so forth. However, having multiple clusters, led to a new set of issues. So the first problem was that since we have multiple clusters, you would end up with inconsistent schema. So one of the things to understand about our platform, is that we are an infrastructure team. So we don't actually own or manage any of the data that is served on Vertica clusters. So we have dataset owners and publishers, who manage their own datasets. Now exposing multiple clusters to these dataset owners. Turns out, it's not a great idea, right? Because they are not really aware of, the importance of having consistency of schemas and datasets across different clusters. So over time, what we saw was that the schema for the same tables would basically get out of order, because they were all the updates are not consistently applied on all clusters. Or maybe they were just experimenting some new columns or some new tables in one cluster, but they forgot to delete it, whatever the case might be. We basically ended up in a situation where, we saw a lot of inconsistent schemas, even across some of our core tables in our different clusters. A second issue was, since we had ingestion pipelines that were ingesting data independently into all these clusters, these pipelines could fail independently as well. So what this meant is that if, for instance, the ingestion pipeline into cluster B failed, then the data there would be older than clusters A and C. So, when a query comes in from the BI layer, and if it happens to hit B, you would probably see different results, than you would if you went to a or C. And this was obviously not an ideal situation for our end users, because they would end up seeing slightly inconsistent, slightly different counts. But then that would lead to a bad situation for them where they would not able to fully trust the data that was, and the results and insights that were being returned by the SQL queries and Vertica systems. And then the third problem was, we had a lot of extra replication. So the 20/80 Rule, or maybe even the 90/10 Rule, applies to datasets on our clusters as well. So less than 10% of our datasets, for instance, in 90% of the queries, right? And so it doesn't really make sense for us to replicate all of our data on all the clusters. And so having this set up where we had to do that, was obviously very suboptimal for us. So then what we did, was we basically built some additional systems to solve these problems. So this brings us to our Vertica ecosystem that we have in production today. So on the ingestion side, we built a system called Vertica Data Manager, which basically manages all the ingestion into various clusters. So at this point, people who are managing datasets or dataset owners and publishers, they no longer have to be aware of individual clusters. They just set up their ingestion pipelines with an endpoint in Vertica Data Manager. And the Vertica Data Manager ensures that, all the schemas and data is consistent across all our clusters. And on the query side, we built a proxy layer. So what this ensures is that, when queries come in from the BI layer, the query was forwarded, smartly and with knowledge and data about which cluster up, which clusters are down, which clusters are available, which clusters are loaded, and so forth. So with these two layers of abstraction between our ingestion and our query, we were able to have a very consistent, almost single system view of our entire Vertica deployment. And the third bit, we had put in place, was the data manifest, which were the communication mechanism between ingestion and proxy. So the data manifest basically is a listing of, which tables are available on which clusters, which clusters are up to date, and so forth. So with this ecosystem in place, we were also able to solve the extra replication problem. So now we basically have some big clusters, where all the core tables, and all the tables, in fact, are served. So any query that hits 90%, less so tables, goes to the big clusters. And most of the queries which hit 10% heavily queried important tables, can also be served by many other small clusters, so much more efficient use of resources. So this basically is the view that we have today, of Vertica within Uber, so external to our team, folks, just have an endpoint, where they basically set up their ingestion jobs, and another endpoint where they can forward their Vertica SQL queries. And they are so to a proxy layer. So let's get a little more into details, about each of these layers. So, on the data management side, as I mentioned, we have two kinds of tables. So we have dimension tables. So these tables are updated every cycle, so the list of cities list of drivers, the list of users and so forth. So these change not so frequently, maybe once a day or so. And so we are able to, and since these datasets are not very big, we basically swap them out on every single cycle. Whereas the fact tables, so these are tables which have information about our trips or each orders and so forth. So these are partition. So we have one partition roughly per day, for the last couple of years, and then we have more of a hierarchical partitions set up for older data. So what we do is we load the partitions for the last three days on every cycle. The reason we do that, is because not all our data comes in at the same time. So we have updates for trips, going over the past two or three days, for instance, where people add ratings to their trips, or provide feedback for drivers and so forth. So we want to capture them all in the row corresponding to that particular trip. And so we upload partitions for the last few days to make sure we capture all those updates. And we also update older partitions, if for instance, records were deleted for retention purposes, or GDPR purposes, for instance, or other regulatory reasons. So we do this less frequently, but these are also updated if necessary. So there are endpoints which allow dataset owners to specify what partitions they want to update. And as I mentioned, data is typically managed using a hierarchical partitioning scheme. So in this way, we are able to make sure that, we take advantage of the data being clustered by day, so that we don't have to update all the data at once. So when we are recovering from an cluster event, like a version upgrade or software upgrade, or hardware fix or failure handling, or even when we are adding a new cluster to the system, the data manager takes care of updating the tables, and copying all the new partitions, making sure the schemas are all right. And then we update the data and schema consistency and make sure everything is up to date before we, add this cluster to our serving pool, and the proxy starts sending traffic to it. The second thing that the data manager provides is consistency. So the main thing we do here, is we do atomic updates of our tables and partitions for fact tables using a two-phase commit scheme. So what we do is we load all the new data in temp tables, in all the clusters in phase one. And then when all the clusters give us access signals, then we basically promote them to primary and set them as the main serving tables for incoming queries. We also optimize the load, using Vertica Data Copy. So what this means is earlier, in a parallel pipelines scheme, we had to ingest data individually from HDFS clusters into each of the Vertica clusters. That took a lot of HDFS bandwidth. But using this nice feature that Vertica provides called Vertica Data Copy, we just load it data into one cluster and then much more efficiently copy it, to the other clusters. So this has significantly reduced our ingestion overheads, and speed it up our load process. And as I mentioned as the second phase of the commit, all data is promoted at the same time. Finally, we make sure that all the data is up to date, by doing some checks around the number of rows and various other key signals for freshness and correctness, which we compare with the data in the data lake. So in terms of schema changes, VDM automatically applies these consistently across all the clusters. So first, what we do is we stage these changes to make sure that these are correct. So this catches errors that are trying to do, an incompatible update, like changing a column type or something like that. So we make sure that schema changes are validated. And then we apply them to all clusters atomically again for consistency. And provide a overall consistent view of our data to all our users. So on the proxy side, we have transparent support for, replicated clusters to all our users. So the way we handle that is, as I mentioned, the cluster to table mapping is maintained in the manifest database. And when we have an incoming query, the proxy is able to see which cluster has all the tables in that query, and route the query to the appropriate cluster based on the manifest information. Also the proxy is aware of the health of individual clusters. So if for some reason a cluster is down for maintenance or upgrades, the proxy is aware of this information. And it does the monitoring based on query response and execution times as well. And it uses this information to route queries to healthy clusters, and do some load balancing to ensure that we award hotspots on various clusters. So the key takeaways that I have from the stock, are primarily these. So we started off with single cluster mode on Vertica, and we ran into a bunch of issues around scaling and availability due to cluster downtime. We had then set up a bunch of replicated clusters to handle the scaling and availability issues. Then we run into issues around schema consistency, data staleness, and data replication. So we built an entire ecosystem around Vertica, with abstraction layers around data management and ingestion, and proxy. And with this setup, we were able to enforce consistency and improve storage utilization. So, hopefully this gives you all a brief idea of how we have been able to scale Vertica usage at Uber, and power some of our most business critical and important use cases. So as I mentioned at the beginning, I have a interesting and simple extra update for you. So an easy way in which you all can take advantage of many of the features that we have built into our ecosystem, is to use the Vertica Eon mode. So the Vertica Eon mode, allows you to set up multiple clusters with consistent data updates, and set them up at various different sizes to handle different query loads. And it automatically handles many of these issues that I mentioned in our ecosystem. So do check it out. We've also been, trying it out on DCP, and initial results look very, very promising. So thank you all for joining me on this talk today. I hope you guys learned something new. And hopefully you took away something that you can also apply to your systems. We have a few more time for some questions. So I'll pause for now and take any questions.

Published Date : Mar 30 2020

SUMMARY :

Any questions that we don't address, So the first issue that we have is that,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Girish BaligaPERSON

0.99+

UberORGANIZATION

0.99+

GirishPERSON

0.99+

10%QUANTITY

0.99+

one hourQUANTITY

0.99+

Sue LeClairePERSON

0.99+

90%QUANTITY

0.99+

AmazonORGANIZATION

0.99+

AWSORGANIZATION

0.99+

SuePERSON

0.99+

twoQUANTITY

0.99+

VerticaORGANIZATION

0.99+

DaraPERSON

0.99+

first issueQUANTITY

0.99+

less than a minuteQUANTITY

0.99+

MySQLTITLE

0.99+

FirstQUANTITY

0.99+

first problemQUANTITY

0.99+

third problemQUANTITY

0.99+

third bitQUANTITY

0.99+

less than 10%QUANTITY

0.99+

each platformQUANTITY

0.99+

secondQUANTITY

0.99+

one clusterQUANTITY

0.99+

oneQUANTITY

0.99+

second issueQUANTITY

0.99+

PythonTITLE

0.99+

todayDATE

0.99+

second phaseQUANTITY

0.99+

two kindsQUANTITY

0.99+

over 10,000 citiesQUANTITY

0.99+

over 70%QUANTITY

0.99+

each businessQUANTITY

0.99+

second thingQUANTITY

0.98+

second problemQUANTITY

0.98+

VerticaTITLE

0.98+

bothQUANTITY

0.98+

Vertica Data ManagerTITLE

0.98+

two-phaseQUANTITY

0.98+

firstQUANTITY

0.98+

90 percentileQUANTITY

0.98+

once a weekQUANTITY

0.98+

eachQUANTITY

0.98+

single pointQUANTITY

0.97+

SQLTITLE

0.97+

once a dayQUANTITY

0.97+

RedisTITLE

0.97+

one partitionQUANTITY

0.97+

under a minuteQUANTITY

0.97+

@ Uber ScaleORGANIZATION

0.96+

Gou Rao, Portworx & Julio Tapia, Red Hat | KubeCon + CloudNativeCon 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back to theCUBE here in San Diego for KubeCon CloudNativeCon, with John Troyer, I'm Stu Miniman, and happy to welcome to the program two guests, first time guests, I believe. Julio Tapia, who's the director of Cloud BU partner and community with Red Hat and Gou Rao, who's the founder and CEO at Portworx. Gentlemen, thanks so much for joining us. >> Thank you, happy to be here. >> Thanks for having us. >> Alright, let's start with community, ecosystem, it's a big theme we have here at the show. Tell us your main focus, what the team's doing here. >> Sure, so I'm part of a product team, we're responsible for OpenShift, OpenStack and Red Hat virtualization. And my responsibility is to build a partner ecosystem and to do our community development. On the partner front, we work with a lot of different partners. We work with ISVs, we work with OEMs, SIs, COD providers, TelCo partners. And my role is to help evangelize, to help on integrations, a lot of joint solutions, and then do a little bit of go to market as well. And the community side, it's to evangelize with upstream projects or customers with developers, and so forth. >> Alright, so, Gou, actually, it's not luck, but I had a chance to catch up with the Red Hat storage team. Back when I was on the vendor side I partnered with them. Red Hat doesn't sell gear, they're a software company. Everything open-source, and when it comes to data and storage, obviously they're working with partners. So put Portworx into the mix and tell us about the relationship and what you both do together. >> Sure, yeah, we're a Red Hat OpenShift partner. We've been working with them for quite some time now, partner with IBM as well. But yeah, Portworx, we focus on enabling cloud native storage, right? So we complement the OpenShift ecosystem. Essentially we enable people to run stateful services in OpenShift with a lot of agility and we bring DR backup functionality to OpenShift. I'm sure you're familiar with this, but, people, when they deploy OpenShift, they're running fleets of OpenShift clusters. So, multi-cluster management and data accessibility across clusters is a big topic. >> Yeah, if you could, I hear the term cloud native storage, what does that really mean? You know, back a few years ago, containers were stateless, I didn't have my persistent storage, it was super challenging as to how we deal with this. And now we have some options, but what is the goal of what we're doing here? >> There really is no notion of a stateless application, right? Especially when it comes to enterprise applications. What cloud native storage means is, to us at least, it signifies a couple of things. First of all, the consumer of storage is not a machine anymore, right? Typical storage systems are designed to provide storage to either a virtual machine or a hardware server. The consumer of storage is now a container that's running inside of a machine. And in fact, an application is never just one container, it's many containers running on different systems so it's a distributed problem. So what cloud native storage means is the following things. Providing container granular data services, being application aware, meaning that you're providing services to many containers that are running on different systems, and facilitating the data life cycle management of those applications from a Kubernetes way, right? The user experience is now driven through Kubernetes as opposed to a storage admin driving that functionality so it's these three things that make a platform cloud native. >> I want to dig into the operator concept for a little bit here, as it applies to storage. So, first, Operators. I first heard of this a couple years back with the CoreOS folks, who are now part of Red Hat and it's a piece of technology that came into the Kubernetes ecosystem, seems to be very well adopted, they talked about it today on the keynote. And I'd love to hear a little bit more about the ecosystem. But first I want to figure out what it is and in my head, I didn't quite understand it and I'm like, well, okay, automation and life cycle, I get it. There's a bunch of things, Puppet and Chef and Ansible and all sorts of things there. There's also things that know about cloud like Terraform, or Cloudform, or Halloumi, all these sort of things here. But this seems like this is a framework around life cycle, it might be a little higher in the semantic level or knows a little bit more about what's going on inside Kubernetes. >> I'll just touch on this, so Operators, it's a way to codify business logic into the application, so how to manage, how to install, how to manage the life cycle of the application on top of the Kubernetes cluster. So it's a way of automating. >> Right, but-- >> And just to add to that, you mentioned Ansible, Salt, right? So, as engineers, we're always trying to make our lives easier. And so, infrastructure automation certainly is a concept here. What Operators does is it elevates those same needs to more of an application construct level, right? So it's a piece of intelligent software that is watching the entire run-time of an application as opposed to provisioning infrastructure and stepping out of the way. Think of it as a living being, it is constantly running and reacting to what the application is doing and what its needs are. So, on one hand you have automation that sets things up and then the job is done. Here the job is never done, you're sort of, right there as a side car along with the application. >> Nice, but for any sort of life cycle or for any sort of project like this, you have to have code sharing and contributing, right? And so, Julio, can you tell us a little about that? >> What we do is we're obviously all in on Operators. And so we've invested a great deal in terms of documentation and training and workshops. We have certification programs, we're really helping create the ecosystem and facilitate the whole process. You may be familiar, we announced Operator Framework a year ago, it includes Operator SDKs. So we have an Operator SDK for Helm, for Ansible, for Go. We also have announced Operator Life Cycle Manager which does the install, the maintenance and the whole life cycle management process. And then earlier this year we did introduce also, Operatorhub.io which is a community of our Operators, we have about 150 Operators as part of that. >> How does the Operator Framework relate to OpenShare versus upstream Kubernetes? Is it an OpenShift and Red Hat specific thing, or? >> Yes, so, Operatorhub.io is a listing of Operators that includes community Operators. And then we also have certified Operators. And the community Operators run on any Kubernetes instance. The certified Operators make sure that we run on OpenShift specifically. So that's kind of the distinction between those two. >> I remember a Red Hat summit where you talked about some bits. So, give us a little walk around the show, some of the highlights from Operators, the ecosystem, obviously, we've got Portworx here but there's a broad ecosystem. >> Yeah, so we have a huge huge ecosystem. The ISVs play a big part of this. So we've got Operators database partners, security partners, app monitoring partners, storage partners. Yesterday we had an OpenShift commons event, we showcased five of our big Operator partnerships with Couchbase, with MongoDB, with Portworx obviously, with StorageOS and with Dynatrace. But we have a lot of partners in a lot of different areas that are creating these Operators, are certifying them, and they're starting to get a lot of use with customers so it's pretty exciting stuff. >> Gou, I'd love your viewpoint on this because of course, Portworx, good Red Hat partner but you need to work with all the Kubernetes opt-ins out there so, what's the importance of Operators to your business? >> Yeah, you know. OpenShift, obviously, it's one of the leading platforms for Kubernetes out there and so, the reason that is, it's because it's the expectations that it sets to an enterprise customer. It's that Red Hat experience behind it and so the notion of having an Operator that's certified by Red Hat and Red Hat going through the vetting process and making sure that all of the components that it is recommending from its ecosystem that you're putting onto OpenShift, that whole process gives a whole new level of enterprise experience, so, for us, that's been really good, right? Working with Red Hat, going through the process with them and making sure that they are actually double clicking on everything we submit, and there's a real, we iterate with them. So the quality of the product that's put out there within OpenShift is very high. So, we've deployed these Operators now, the Operator that Portworx just announced, right? We have it running in customers' hands so these are real end users, you'll be talking to Ford later on today. Harvard, for example, and so the level of automation that it has provided to them in their platform, it's quite high. >> I was kind of curious to shift maybe to the conference here that you all have a long history. With organizations and both of you personally in the Kubernetes world and cloud native world. We're here at KubeCon CloudNativeCon, North America, 2019. It's pretty big. And I see a lot of folks here, a lot of vendors, a lot of engineers, huge conference, 12,000 people. I mean, any perspective? >> So I've been at Red Hat a little over six years and I was at the very first KubeCon many years ago in San Francisco, I think we had about 200 people there. So this show has really grown over the years. And we're obviously big supporters, we've participated in KubeCon in Shanghai and Barcelona, we're obviously here. We're just super excited about seeing the ecosystem and the whole community grow and expand, so, very exciting. >> Gou? >> Yeah, I mean, like Julio mentioned, right? So, all the way from DockerCon to where we are today and I think last year was 8000 people in Seattle and I think there're probably I've heard numbers like 12? So it's also equally interesting to see the maturity of the products around Kubernetes. And that level of consistency and lack of fracture, right? From mainstream Kubernetes to how it's being adopted in OpenShift, there's consistency across the different Kubernetes platforms. Also, it's very interesting to see how on-prem and public cloud Kubernetes are coexisting. Four years ago we were kind of worried on how that would turn out, but I think it's enabling those hybrid-cloud workloads and I think today in this KubeCon we see a lot of people talking about that and having interest around it. >> That's a really great point there. Julio, want to give you the final word, for people that aren't yet engaged in the ecosystem of Operators, how can they learn more and get involved? >> Yeah, so we're excited to work with everybody, our ecosystem includes customers, partners, contributors, so as long as you're all in on Operators, we're ready to help. We've got tools, we've documentation, we have workshops, we have training, we have certification programs. And we also can help you with go to market. We're very fortunate to have a huge customer footprint, and so for those partners that have solutions, databases, storage solutions, there's a lot of joint opportunities out there that we can participate in. So, really excited to do that. >> Julio, Gou, thank you so much, you have a final word, Gou? >> I was just going to say, so, to follow up on the Operator comment on the certification that Julio mentioned earlier, so the Operator that we have, we were able to achieve level five certification. The level five signifies just the amount of automation that's built into it, so the concept of having Operators help people deploy these complex applications, that's a very important concept in Kubernetes itself. So, glad to be a Red Hat partner. >> That's actually a really good point, we have an Operator maturity model, level one, two, three, four, five. Level one and two are more your installations and upgrades. But the really highly capable ones, the fours and fives, are really to be commended. And Portworx is one of those partners. So we're excited to be here with them. >> That is a powerful statement, we talk about the complexity and how many pieces are in there. Everybody's looking to really help cross that chasm, get the vast majority of people. We need to allow environments to have more automation, more simplicity, a story I heard loud and clear at AnsibleFest earlier this year and through the partner ecosystem. It's good to see progress, so congratulations and thank you both for joining us. >> Thank you, thank you. >> Thank you. >> All right, for John Troyer, I'm Stu Miniman, back with lots more here from KubeCon CloudNativeCon 2019, thanks for watching theCUBE. (electronic music)

Published Date : Nov 19 2019

SUMMARY :

brought to you by Red Hat, I'm Stu Miniman, and happy to welcome to the program it's a big theme we have here at the show. And the community side, it's to evangelize to catch up with the Red Hat storage team. and we bring DR backup functionality to OpenShift. it was super challenging as to how we deal with this. and facilitating the data life cycle management that came into the Kubernetes ecosystem, into the application, so how to manage, and stepping out of the way. and facilitate the whole process. So that's kind of the distinction between those two. the ecosystem, obviously, we've got Portworx here and they're starting to get a lot of use with customers and so the notion of having an Operator in the Kubernetes world and cloud native world. and the whole community grow and expand, So it's also equally interesting to see the maturity for people that aren't yet engaged in the ecosystem And we also can help you with go to market. so the Operator that we have, the fours and fives, are really to be commended. and thank you both for joining us. back with lots more here

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
John TroyerPERSON

0.99+

IBMORGANIZATION

0.99+

JulioPERSON

0.99+

Julio TapiaPERSON

0.99+

SeattleLOCATION

0.99+

Stu MinimanPERSON

0.99+

Red HatORGANIZATION

0.99+

Cloud Native Computing FoundationORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

two guestsQUANTITY

0.99+

San DiegoLOCATION

0.99+

fiveQUANTITY

0.99+

last yearDATE

0.99+

San Diego, CaliforniaLOCATION

0.99+

twoQUANTITY

0.99+

ShanghaiLOCATION

0.99+

Gou RaoPERSON

0.99+

BarcelonaLOCATION

0.99+

GouPERSON

0.99+

PortworxORGANIZATION

0.99+

FordORGANIZATION

0.99+

KubeConEVENT

0.99+

8000 peopleQUANTITY

0.99+

todayDATE

0.99+

bothQUANTITY

0.99+

12,000 peopleQUANTITY

0.99+

oneQUANTITY

0.99+

North AmericaLOCATION

0.99+

first timeQUANTITY

0.98+

YesterdayDATE

0.98+

DynatraceORGANIZATION

0.98+

TelCoORGANIZATION

0.98+

CouchbaseORGANIZATION

0.98+

firstQUANTITY

0.98+

a year agoDATE

0.98+

OpenShiftTITLE

0.98+

Four years agoDATE

0.98+

three thingsQUANTITY

0.97+

one containerQUANTITY

0.97+

over six yearsQUANTITY

0.97+

KubernetesTITLE

0.97+

DockerConEVENT

0.97+

Operatorhub.ioORGANIZATION

0.96+

CloudNativeConEVENT

0.96+

12QUANTITY

0.96+

about 200 peopleQUANTITY

0.96+

fivesQUANTITY

0.95+

about 150 OperatorsQUANTITY

0.95+

Operator FrameworkTITLE

0.95+

2019DATE

0.93+

CloudNativeCon 2019EVENT

0.93+

earlier this yearDATE

0.93+

Rick Nucci, Guru | Boomi World 2019


 

>> Narrator: Live from Washington, D.C., it's theCUBE covering Boomi World 19. Brought to you by Boomi. >> Welcome back to theCUBE, the leader in live tech coverage. I'm Lisa Martin, John Furrier is my co-host, and we are at Boomi World 2019 in Washington, D.C. Very pleased to be joined by the founder of Boomi and the co-founder and CEO of Guru, Rick Nucci. Hey, Rick. >> Hello. >> Lisa: Welcome to theCUBE. >> Thanks for having me, this is very cool setup. >> Lisa: Yeah, isn't it?! >> Rick: Yeah. >> So this is a founder of Boomi. It's pretty cool to have a celebrity on our stage. >> Rick: I'm not a celebrity. (laughs) >> (laughs) Talk to us about all that back in the day back in Philadelphia when you had this idea for what now has become a company that has 9,000+ customers in 80+ countries. >> Yeah, I'm beyond proud of this team and just how well they have done and made this business into what it is today. Yeah, way back in 2007, we were really looking at the integration market, and back then, cloud was really an unknown future. It was creeping up the Hype Cycle of the Gartner. Hype Cycle's my favorite thing they do. A lot of people were dismissing it as a fad, and we were early adopters of cloud internally at Boomi. We were early users of Salesforce and NetSuite and just thought and made a bet and a lot of this stuff is luck as any founder will tell you, any honest founder will tell you. And recognize that, hey, if the world were to move to cloud, how would you actually think about the integration problem? Because it would be very different than how you would think about it in the on-prem days when you have everything in your own data center behind your own four walls. In this world, everything's different. Security's a huge deal, the way data moves and has to mediate between firewalls is a big deal. And none of these products are built like this and so, really wanted as a team, and I remember these early conversations and had the willingness to take a big bet and swing for the fences and what I mean by that is really build a product from the ground up in this new paradigm, new cloud, and take a bet and say, hey, if cloud does take off, this will be awesome for Boomi. If not, well, we'll be in the line of all the other startups that have come and gone. And I think we ended up in a good spot. >> Yeah, that's a great point, Rick, about the founders being honest. And a lot of it is hard work, but having a vision and making multiple bets and big bets. I remember, when EC2 came out, it was a startup dream, too, by the way. You could just purchase a data center. But it wasn't fully complete, it was actually growing very fast. More services were coming on, they were web services, so that was API-based concepts back then. When was the crossover point for you guys going, "okay, we got this, the bets are coming in. "We're going to double down, we're going to double down on this." What were some of those moments where you started to get visibility that was a good bet? And what did you do? >> Yeah, what it really was was the rise of SaaS, very specifically, and the rise of business applications that were being re-architected in the cloud. And everybody knew about Salesforce, but there weren't a lot of other things back then. And there was NetSuite and a handful of others, but then, you started to see additional business units start to build cloud, and you had, in the HR space, with success factors in Taleo and marketing automation space with Eloqua and Marketo. CRM space, we all know that story, e-commerce space procurement, and you start to see these best-of-breed products rise up which is amazing, but as that was happening, it was proliferating the integration problem. And so what became really clear to us, I think, as we were going through this and finding product market fit for Boomi, again, back in 2007, 2008, that was the pattern that emerged, like hey, every time someone buys one of these products, they are going to have to integrate 'cause you're talking about employee data, customer data. You have to integrate this with your other systems and that was going to create an opportunity for us and that was where we were like, okay, I think we're onto something. >> You know, to date, we've been doing theCUBE for 10 years. We made a big bet that people, authentic conversation would be a good bet, turns out it worked. We love it, things going great, but now, we're living in a world now that's getting more complex and I want to get your thoughts that Dave Vellante, myself, Stu who have been talking about how clouds changed and we were goofing on the Web 2.0 metaphor by saying, Cloud 1.0, Cloud 2.0. But I want to get your thoughts on how you might see this because, if you say Cloud 1.0 was Amazon, compute storage, AtScale, cloud NATO, all started there. Pretty straightforward if you're going to be born in the cloud, then you could work with some things there, but to bring multicloud and for enterprises to adopt with this integration challenge, Cloud 2.0 unveils some new things like, for instance, network management now is observability. Configuration management is now automation (chuckles). So you start to see things emerge differently in this Cloud 2.0 operating model. How do you see Cloud 2.0? Do you believe that, one, there's a Cloud 2.0 the way I said it, and if so, what is your version of what Cloud 2.0 would look like? >> Yeah, I think, yes, definitely think things are changing and the way that I think about it is that we're continuing to unbundle, and what I mean by unbundle is we're continuing to proliferate... Buyers are willing to buy and, therefore, we're continuing to proliferate relatively narrower and narrower and deeper and deeper capabilities and functionalities. And one big driver of that is AI, specifically, machine learning, and not the hypey stuff, but the real stuff. It's funny, man, when you compare, right now, AI, and what I was just talking about, it's the same thing all over again. It's Hype Cycle crawling up the thing, okay. But now, I think the recipe for good AI products that really do solve problems is that they're very intentionally narrow and they're very deep because they're gathering good training data and they're built to solve a very specific problem. So I think-- >> Like domain expertise, domain-specific-- >> Exactly, industry expertise, domain expertise, use case. If you're gathering training data about a knowledge worker, the data you'll gather is very different if you're a salesperson or an HR professional or an engineer. And I think the AI companies that are getting it right, are really dialed in and focused on that, so as a result, you see this proliferation of things that might be layered on top of big platforms like CRM's and technologies like Slack, which is creating a place for all this to come together, but you're seeing this unbundling where you're getting more and more kind of almost microservices, not quite, but very fine-tuned, specific things coming together. >> So machine learning, I totally agree with you, it's definitely hype, but the hardcore machine learning has a math side to it and a cognition side, cognitive learning thing. But, also, data is a common thread here. I mentioned domain-specific. >> Rick: All about the data. >> So, if data's super important, you want domain expertise which I agree with, but also there's now a horizontal scalability with observation data. The more data you have, the better at machine learning. It may or may not, depending on what the context is, so you have contextual data, this is a (chuckles) hard thing. What's your view on this because this is where people maybe get caught around the axis of machine learning hype and not nearly narrowing on what their data thinking is. >> Rick: 100%. >> What's your--? >> 100%, I think people will tend to fall in the trap of focusing on the algorithms that they're building and not recognizing that, without the data, the algorithms are useless. Right? >> Lisa: Right. >> And that it's really about how, as a ML problem that you're trying to tackle. Are you gathering data that's good, high-quality, scalable, accurate, protected, and safe? Because now, for different reasons, but again, just like when we were moving to cloud, security and privacy are utmost important because, for any AI to do its job well, it has to gather a lot of data out of the enterprise and store it and train off of that. >> It's interesting a lot of the cloud play. I mean sales was just a unicorn right out of the gate and they were a pioneer, that's what it is. They were cloud before cloud was cloud as we know it today. But you see a lot of things like the marketing automation cloud platform. It's a marketing cloud, I got a sales cloud. Almost seem too monolithic and you see people trying to unbundle that. I think you're right. Or break it apart 'cause the data is stuck in this full-stack model because, if you agree with your sets, horizontal scalability and vertical integration is the architecture. Technically, that's half-stack. (chuckles) >> Yes, yes. >> John: So half-stack developers are evaluable now. >> Totally, and yes, I like that term. The other problem that I think you're getting at is tendency isolation of that data. A lot of things were built with that in mind, meaning that the best AI you're going to build is only going to be what you can derive from one customer's set of data. Whereas, now, people are designing things intentionally such that the more customers that are using the thing, the better and smarter it gets. And so, to your point about monolithic, I think the opportunity that the next wave of startups have is that they can design in that world and that just means that their technology will get better faster 'cause it'll be able to learn from more data and-- >> This hasn't been changing a lot in cloud. I want to get your thoughts because you guys at Boomi here are on a single-tenant instance model because the collective intelligence of the data benefits everybody as more people come in. That's a beautiful fly, we'll feel a lot like Amazon model to me. But the old days, multi-tenancy was the holy grail. Maybe that came from the telcos or whatever, hosting world. What's your view on single-tenant instance on a SaaS business versus, say, multiten... There's trade-offs and pros and cons. What's your opinion, where do you lean on this one? >> Yeah, I mean we, both Boomi and Guru, so two eras worth or whatever. You have to have some level of tenancy isolation for some level of what you do. And, at Boomi, what we did is we separated the sensitive, private data. Boomi has customers processing payroll through its product, so very, very sensitive stuff absolutely has to be protected and isolated per tenant, and Boomi and Guru is signing up for that, and the clauses that we sign to are security agreements. But what you can decouple from that is more of the metadata or the attributes about that data and that customer, so Boomi, you're referring to, launched way back when Boomi Suggest which basically learned. As all the people were building data maps, connecting different things together, Boomi could learn from all that and go, oh, you're trying to do this. Well, these however many other customers, let me suggest how these maps are drawn, and Guru, we're following a very similar pattern, so Guru, we store knowledge which also tends to be IP for a company and so, yes, we absolutely adhere to the fact that only a handful of our employees can ever see that stuff, and that's 'cause they're in devops, and they needed to keep things running, but all the tenants are protected from one another. No one could ever leak to another one. But there are things about organization and structure and tagging and learnings you can get that are not that sensitive stuff that does make the product better from an AI perspective the more people that use it. And so, I don't know that I'm giving you one or another, but I think it does come down to how you intentionally design your data to it. >> John: Decoupling is the critical piece. >> Absolutely. >> This is the cloud architecture. Decouple, use API's to connect highly cohesive elements, and the platform can be cohesive if shared. >> Absolutely, and you can still get all the benefits of scalability and elastic growth and, yeah, 100%. >> Along that uncoupling line, tell us a little bit briefly about what Guru is and then I want to talk about some of the use cases. I know I'm a big Slack user; you probably are too, John. Talk to us about what you're doing there, but just give our folks a sense of what Guru is and all that good stuff. >> Sure, I mean Guru's, in some ways, like Boomi, rethinking a very old problem, in this case, it's knowledge management. That's a concept we've talked about for a long time and I think, these days, it has really become something that does impact a company's ability to scale and grow reliably, so very specifically, what we do is we bring the knowledge that employees need to do their job to them when they need it. So imagine if you're a customer support agent and you're supporting Spotify, you're an employee of Spotify. And I write in and I want to know about the new Hulu partnership. As an agent, you use Guru to look up and give me that answer and you don't have to go to a portal, you don't have to go to some other place to do that. Guru's sitting there right next to your ticket or your chat as you're having it in real time, saying, hey, there's asking about Hulu. This is the important things you want to know and talk about. And then the other half of that is, we make sure that that doesn't go still. The classic problem with knowledge products is the information, when you're talking about something like product knowledge, changes all the time. And the world we live in is moving faster and faster and faster, so we used to ship product once a year, once every two years. Now we ship product every month, sometimes couple times a month. >> Can you get a Guru bot for our journalism and our Cube hosts? We can be real time. >> Hey! >> I would be happy to do that. >> That'd be great! >> (laughs) Guru journalist. >> Actually, you're able to set it right in there where your ears are-- >> Lisa: I'll take it. >> Just prompting you, exactly. So, and then you asked about Slack, that's a really great partner for us. They were an early investor in the company. They're a customer, but together, if you think about where a lot of knowledge exchange happens in Slack, it's, hey, I need to know something. I think I can go slack John 'cause I think he'll know the answer. He knows about this. And you're like the 87th person who's asked me that same thing over again. Well, with Guru being integrated into Slack, you can just say, "Guru, give them the answer." And you don't have to repeat yourself. And that expert fatigue problem is a real thing. >> John: That's a huge issue. >> Absolutely. >> And, as your company grows and more and more people are, oh, poor John's getting buried for being the expert, one of the reasons he got you there. Now he's getting burned out and buried from it. And so we seek to solve that problem and then, post-Guru, a company will scale faster, they'll onboard their employees faster, they'll launch products better, 'cause everyone will know what to talk about-- >> It's like a frequently asked questions operating system. >> Rick: Exactly. >> At a moment's notice. >> Technology, right? And making it living 'cause all those FAQ's change all the time. >> And that's the important part too is keeping it relevant, 24 by 7. >> Rick: Absolutely. >> Which is difficult. >> Contextual data analysis is really hard. What's the secret sauce? >> The secret sauce is that we live where you work. The secret sauce is that we focus very specifically on specific workflows like that customer support agent and so, by knowing what you're doing and what ticket you're working on and what chat you're having with a customer, Guru can be anticipatory over time and start to say, "hey, you probably "want to talk to him about this," and bring that answer to you. It's because we live where you work. And that was frankly accidental in a lot of ways. We were trying to solve the problem of knowledge living where you work, and then what we realized is, wow, there's a lot of interesting stuff that we can learn and give back to the customer about what problems they're solving and when they're using Guru and why, and that only makes the product better. So that's really, I think, the thing that, if you ask our typical customers, really gets them excited. They'll say, hey, because of Guru, I feel more confident when I'm on the phone, that I'm always going to give the right answer. >> That's awesome. >> I love hearing customers talk about or even have business leaders talk about some of the accidental discoveries or capabilities, but just how, over time, more and more and more value gets unlocked if you can actually, really extract value from that data. Last question, Rick, I need to know what's in a name? The name Boomi, the name Guru? >> Yes, well, I'll start with the less exciting answer which I always get asked about, which is Boomi, which is a Hindi word that means "earth" or "from the earth". And, sometimes, if you're ordering at the Indian restaurant, you'll see B-H-O-M-I and that might be the vegetables on the menu. That name came from an early employee of the company. I wish I could say that it had a connection to business (laughs). It really doesn't, it just was like, it looks cool, and people tend to remember the name. And honestly, there have been so many moments in the early, early days where we were like, should we change the name, it doesn't really. And we're like you know what? People tend to, it sticks with them, it's kind of exciting, and we kept it. Guru, on the flip side, one of our early employees came up with that name too, and I think she was listening to me talk about what we were doing and she's like, oh, that thing is like a guru to you. And so the brand promise is that you feel like a guru in your area of expertise within a company and that our product plays a relatively small role in you having that, feeling confident about that expertise. >> I love that, awesome. Rick, thank you so much for joining John and me on theCUBE today, we appreciate it. >> Thank you. >> John: Thanks. >> For John Furrier, I'm Lisa Martin. You're watching theCUBE from Boomi World 2019. Thanks for watching. (upbeat electronic music)

Published Date : Oct 2 2019

SUMMARY :

Brought to you by Boomi. and the co-founder and CEO of Guru, Rick Nucci. It's pretty cool to have a celebrity on our stage. Rick: I'm not a celebrity. back in Philadelphia when you had this idea and had the willingness to take a big bet And what did you do? and that was where we were like, and we were goofing on the Web 2.0 metaphor and not the hypey stuff, but the real stuff. so as a result, you see this proliferation of things it's definitely hype, but the hardcore machine learning and not nearly narrowing on what their data thinking is. of focusing on the algorithms that they're building as a ML problem that you're trying to tackle. and you see people trying to unbundle that. is only going to be what you can derive Maybe that came from the telcos or whatever, hosting world. and the clauses that we sign to are security agreements. and the platform can be cohesive if shared. Absolutely, and you can still get all the benefits and all that good stuff. This is the important things you want to know and talk about. and our Cube hosts? So, and then you asked about Slack, one of the reasons he got you there. change all the time. And that's the important part too is What's the secret sauce? and that only makes the product better. The name Boomi, the name Guru? and that might be the vegetables on the menu. John and me on theCUBE today, we appreciate it. Thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

RickPERSON

0.99+

JohnPERSON

0.99+

Lisa MartinPERSON

0.99+

John FurrierPERSON

0.99+

LisaPERSON

0.99+

BoomiORGANIZATION

0.99+

HuluORGANIZATION

0.99+

2007DATE

0.99+

PhiladelphiaLOCATION

0.99+

Rick NucciPERSON

0.99+

100%QUANTITY

0.99+

2008DATE

0.99+

AmazonORGANIZATION

0.99+

10 yearsQUANTITY

0.99+

Washington, D.C.LOCATION

0.99+

Cloud 2.0TITLE

0.99+

Cloud 1.0TITLE

0.99+

80+ countriesQUANTITY

0.99+

SpotifyORGANIZATION

0.99+

GartnerORGANIZATION

0.99+

bothQUANTITY

0.99+

9,000+ customersQUANTITY

0.99+

theCUBEORGANIZATION

0.98+

Boomi WorldTITLE

0.98+

once a yearQUANTITY

0.98+

oneQUANTITY

0.98+

NetSuiteTITLE

0.98+

GuruORGANIZATION

0.97+

StuPERSON

0.97+

todayDATE

0.97+

singleQUANTITY

0.96+

7QUANTITY

0.95+

TaleoORGANIZATION

0.93+

HindiOTHER

0.92+

24QUANTITY

0.91+

couple times a monthQUANTITY

0.91+

EloquaORGANIZATION

0.9+

Boomi World 2019EVENT

0.89+

87th personQUANTITY

0.88+

AtScaleORGANIZATION

0.87+

one customerQUANTITY

0.86+

MarketoORGANIZATION

0.86+

Boomi World 2019TITLE

0.86+

EC2TITLE

0.85+

Liran Zvibel, WekaIO | CUBEConversations, June 2019


 

>> from our studios in the heart of Silicon Valley. HOLLOWAY ALTO, California It is a cube conversation. >> Hi! And welcome to the Cube studios from the Cube conversation, where we go in depth with thought leaders driving innovation across the tech industry on hosted a Peter Burress. What are we talking about today? One of the key indicators of success and additional business is how fast you can translate your data into new value streams. That means sharing it better, accelerating the rate at which you're running those models, making it dramatically easier to administrate large volumes of data at scale with a lot of different uses. That's a significant challenge. Is going to require a rethinking of how we manage many of those data assets and how we utilize him. Notto have that conversation. We're here with Le'Ron v. Bell, who was the CEO of work a Iot leering. Welcome back to the Cube. >> Thank you very much for having >> me. So before we get to the kind of a big problem, give us an update. What's going on at work a Iot these days? >> So very recently we announced around CIA financing for the company. Another 31.7 a $1,000,000 we've actually had a very unorthodox way of raising thiss round. Instead of going to the traditional VC lead round, we actually went to our business partners and joined forces with them into building a stronger where Collier for customers we started with and video that has seen a lot of success going with us to their customers. Because when Abel and Video to deploy more G pews so they're customers can either solve bigger problems or solve their problems faster. The second pillar off the data center is networking. So we've had melon ox investing in the company because there are the leader ofthe fast NETWORKINGS. So between and Vidia, melon, ox and work are yo u have very strong pillars. Iran compute network and storage performance is crucial, but it's not the only thing customers care about, so customers need extremely fast access to their data. But they're also accumulating and keeping and storing tremendous amount of it. So we've actually had the whole hard drive industry investing in us, with Sigi and Western Digital both investing in the company and finally one off a very successful go to market partner, Hewlett Pocket enterprise invested in us throw their Pathfinder program. So we're showing tremendous back from the industry, supporting our vision off, enabling next generation performance, two applications and the ability to scale to any workload >> graduations. And it's good money. But it's also smart money that has a lot of operational elements and just repeat it. It's a melon ox, our video video, H P E C Gate and Western Digital eso. It's It's an interesting group, but it's a group that will absolutely sustain and further your drive to try to solve some of these key data Orient problems. But let's talk about what some of those key day or data oriented problems where I set up front that one of the challenges that any business that has that generates a lot of it's value out of digital assets is how fast and how easily and with what kind of fidelity can I reuse and process and move those data assets? How are how is the industry attending? How's that working in the industry today, and where do you think we're going? >> So that's part on So businesses today, through different kind of workloads, need toe access, tremendous amount of data extremely quickly, and the question of how they're going to compare to their cohort is actually based on how quickly and how well they can go through the data and process it. And that's what we're solving for our customers. And we're now looking into several applications where speed and performance. On the one hand, I have to go hand in hand with extreme scale. So we see great success in machine learning, where in videos in we're going after Life Sciences, where the genomic models, the cryo here microscopy the computational chemistry all are now accelerated. And for the pharmacy, because for the research interested to actually get to conclusion, they serve to sift through a lot of data. We are working extremely well at financial analytics, either for the banks, for the hedge funds for the quantitative trading Cos. Because we allow them to go through data much, much quicker. Actually, only last week I had the grades to rate the customer where we were able to change the amount of time they go through one analytic cycle from almost two hours, four minutes. >> This is in a financial analytics >> Exactly. And I think last time I was here was telling you about one of their turn was driving companies using us taking, uh, time to I poke another their single up from two weeks to four hours. So we see consistent 122 orders of monk to speed time in wall clock. So we're not just showing we're faster for a benchmark. We're showing our customer that by leveraging our technology, they get results significantly faster. We're also successful in engineering around chip designed soft rebuild fluid dynamics. We've announced Melon ox as an idiot customer. The chip designed customers, so they're not only a partner, they have brought our technology in house, and they're leveraging us for the next chips. And recently we've also discovered that we are great help for running Noah scale databases in the clouds running ah sparkles plank or Cassandra over work. A Iot is more than twice faster than running over the Standard MPs elected elastic clock services. >> All right, so let's talk about this because your solving problems that really only recently have been within range of some of the technology, but we still see some struggling. The way I described it is that storage for a long time was focused on persisting data transactions executed. Make sure you persisted Now is moved to these life life sciences, machine learning, genomics, those types of outpatients of five workloads we're talking about. How can I share data? How can I deploy and use data faster? But the historian of the storage industry still predicated on this designs were mainly focused on persistent. You think about block storage and filers and whatnot. How is Wecker Io advancing that knowledge that technology space of, you know, reorganizing are rethinking storage for the types, performance and scale that some of these use cases require. >> This is actually a great question. We actually started the company. We We had a long legacy at IBM. We now have no Andy from, uh, metta, uh, kind of prints from the emcee. We see what happens. Page be current storage portfolio for the large Players are very big and very convoluted, and we've decided when we're starting to come see that we're solving it. So our aim is to solve all the little issues storage has had for the last four decades. So if you look at what customers used today, if they need the out most performance they go to direct attached. This's what fusion I awards a violin memory today, these air Envy me devices. The downside is that data is cannot be sure, but it cannot even be backed up. If a server goes away, you're done. Then if customers had to have some way of managing the data they bought Block san, and then they deployed the volume to a server and run still a local file system over that it wasn't as performance as the Daz. But at least you could back it up. You can manage it some. What has happened over the last 15 years, customers realized more. Moore's law has ended, so upscaling stopped working and people have to go out scaling. And now it means that they have to share data to stop to solve their problems. >> More perils more >> probably them out ofthe Mohr servers. More computers have to share data to actually being able to solve the problem, and for a while customers were able to use the traditional filers like Aneta. For this, kill a pilot like an eyes alone or the traditional parlor file system like the GP affair spectrum scale or luster, but these were significantly slower than sand and block or direct attached. Also, they could never scale matter data. You were limited about how many files that can put in a single, uh, directory, and you were limited by hot spots into that meta data. And to solve that, some customers moved to an object storage. It was a lot harder to work with. Performance was unimpressive. You had to rewrite our application, but at least he could scale what were doing at work a Iot. We're reconfiguring the storage market. We're creating a storage solution that's actually not part of any of these for categories that the industry has, uh, become used to. So we are fasted and direct attached, they say is some people hear it that their mind blows off were faster, the direct attached, whereas resilient and durable as San, we provide the semantics off shirt file, so it's perfect your ability and where as Kayla Bill for capacity and matter data as an object storage >> so performance and scale, plus administrative control and simplicity exactly alright. So because that's kind of what you just went through is those four things now now is we think about this. So the solution needs to be borrow from the best of these, but in a way that allows to be applied to work clothes that feature very, very large amounts of data but typically organized as smaller files requiring an enormous amount of parallelism on a lot of change. Because that's a big part of their hot spot with metadata is that you're constantly re shuffling things. So going forward, how does this how does the work I owe solution generally hit that hot spot And specifically, how are you going to apply these partnerships that you just put together on the investment toe actually come to market even faster and more successfully? >> All right, so these are actually two questions. True, the technology that we have eyes the only one that paralyzed Io in a perfect way and also meditate on the perfect way >> to strangers >> and sustains it parla Liz, um, buy load balancing. So for a CZ, we talked about the hot sport some customers have, or we also run natively in the cloud. You may get a noisy neighbor, so if you aren't employing constant load balancing alongside the extreme parallelism, you're going to be bound to a bottleneck, and we're the only solution that actually couples the ability to break each operation to a lot of small ones and make sure it distributed work to the re sources that are available. Doing that allows us to provide the tremendous performance at tremendous scale, so that answers the technology question >> without breaking or without without introducing unbelievable complexity in the administration. >> It's actually makes everything simpler because looking, for example, in the ER our town was driving example. Um, the reason they were able to break down from two weeks to four hours is that before us they had to copy data from their objects, George to a filer. But the father wasn't fast enough, so they also had to copy the data from the filer to a local file system. And these copies are what has added so much complexity into the workflow and made it so slow because when you copy, you don't compute >> and loss of fidelity along the way right? OK, so how is this money and these partnerships going to translate into accelerated ionization? >> So we are leveraging some off the funds for Mohr Engineering coming up with more features supporting Mohr enterprise applications were gonna leverage some of the funds for doing marketing. And we're actually spending on marketing programs with thes five good partners within video with melon ox with sick it with Western Digital and with Hewlett Packard Enterprise. But we're also deploying joint sales motion. So we're now plugged into in video and plugged, anted to melon ox and plugging booked the Western Digital and to Hillary Pocket Enterprise so we can leverage their internal resource now that they have realized through their business units and the investment arm that we make sense that we can actually go and serve their customers more effectively and better. >> Well, well, Kaio is introduced A road through the unique on new technology into makes perfect sense. But it is unique and it's relatively new, and sometimes enterprises might go well. That's a little bit too immature for me, but if the problem than it solves is that valuable will bite the bullet. But even more importantly, a partnership line up like this has got to be ameliorating some of the concerns that your fearing from the marketplace >> definitely so when and video tells the customers Hey, we have tested it in our laps. Where in Hewlett Packard Enterprise? Till the customer, not only we have tested it in our lab, but the support is going to come out of point. Next. Thes customers now have the ability to keep buying from their trusted partners. But get the intellectual property off a nor company with better, uh, intellectual property abilities another great benefit that comes to us. We are 100% channel lead company. We are not doing direct sales and working with these partners, we actually have their channel plans open to us so we can go together and we can implement Go to Market Strategy is together with they're partners that already know howto work with them. And we're just enabling and answering the technical of technical questions, talking about the roadmap, talking about how to deploy. But the whole ecosystem keeps running in the fishing way it already runs, so we don't have to go and reinvent the whales on how how we interact with these partners. Obviously, we also interact with them directly. >> You could focus on solving the problem exactly great. Alright, so once again, thanks for joining us for another cube conversation. Le'Ron zero ofwork I Oh, it's been great talking to you again in the Cube. >> Thank you very much. I always enjoy coming over here >> on Peter Burress until next time.

Published Date : Jun 5 2019

SUMMARY :

from our studios in the heart of Silicon Valley. One of the key indicators of me. So before we get to the kind of a big problem, give us an update. is crucial, but it's not the only thing customers care about, How are how is the industry attending? And for the pharmacy, because for the research interested to actually get to conclusion, in the clouds running ah sparkles plank or Cassandra over But the historian of the storage industry still predicated on this And now it means that they have to share data to stop to solve We're reconfiguring the storage market. So the solution needs to be borrow and also meditate on the perfect way actually couples the ability to break each operation to a lot of small ones and Um, the reason they were able to break down from two weeks to four hours So we are leveraging some off the funds for Mohr Engineering coming up is that valuable will bite the bullet. Thes customers now have the ability to keep buying from their You could focus on solving the problem exactly great. Thank you very much.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Western DigitalORGANIZATION

0.99+

Liran ZvibelPERSON

0.99+

IBMORGANIZATION

0.99+

two weeksQUANTITY

0.99+

Mohr EngineeringORGANIZATION

0.99+

100%QUANTITY

0.99+

two weeksQUANTITY

0.99+

Western DigitalORGANIZATION

0.99+

two questionsQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

CIAORGANIZATION

0.99+

Peter BurressPERSON

0.99+

GeorgePERSON

0.99+

June 2019DATE

0.99+

122 ordersQUANTITY

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

four hoursQUANTITY

0.99+

last weekDATE

0.99+

four minutesQUANTITY

0.99+

MohrORGANIZATION

0.99+

SigiORGANIZATION

0.99+

Hewlett PocketORGANIZATION

0.99+

two applicationsQUANTITY

0.99+

five good partnersQUANTITY

0.98+

second pillarQUANTITY

0.98+

bothQUANTITY

0.98+

31.7QUANTITY

0.98+

AndyPERSON

0.98+

CollierORGANIZATION

0.98+

todayDATE

0.98+

H P E C GateORGANIZATION

0.97+

more than twiceQUANTITY

0.97+

Le'RonPERSON

0.96+

Hillary Pocket EnterpriseORGANIZATION

0.95+

BellPERSON

0.95+

four thingsQUANTITY

0.95+

Melon oxORGANIZATION

0.95+

oneQUANTITY

0.94+

five workloadsQUANTITY

0.92+

each operationQUANTITY

0.92+

CubeORGANIZATION

0.92+

AbelORGANIZATION

0.91+

Western Digital esoORGANIZATION

0.91+

$1,000,000QUANTITY

0.89+

almost two hoursQUANTITY

0.89+

singleQUANTITY

0.89+

IoPERSON

0.88+

last 15 yearsDATE

0.86+

HOLLOWAY ALTO, CaliforniaLOCATION

0.86+

OneQUANTITY

0.85+

one analytic cycleQUANTITY

0.81+

CassandraPERSON

0.78+

last four decadesDATE

0.77+

Kayla BillPERSON

0.74+

KaioORGANIZATION

0.69+

MoorePERSON

0.67+

VidiaORGANIZATION

0.59+

NoahCOMMERCIAL_ITEM

0.59+

VideoORGANIZATION

0.56+

IranLOCATION

0.54+

WeckerORGANIZATION

0.54+

AnetaORGANIZATION

0.53+

Block sanORGANIZATION

0.5+

IotTITLE

0.5+

DazORGANIZATION

0.44+

WekaIOPERSON

0.41+

PathfinderTITLE

0.32+

Dr. Shannon Vallor, Santa Clara University | Technology Vision 2018


 

>> Hey welcome back, everybody. Jeff Frick here with the CUBE. We're at the Accenture Technology Vision 2018, actually, the preview event, 'about 200 people. The actual report comes out in a couple of days. A lot of interesting conversations about what are the big trends in 2018 in Accenture. Surveyed Paul Daugherty and team and really excited. Just was a panel discussion to get into a little bit of the not exactly a technology, but really the trust and ethics conversations. We're joined by Dr. Shannon Vallor. She's a professor at Santa Clara University. Dr. Vallor, great to see you. >> Great to be here, thank you! >> So you were just on the panel, and of course there was a car guy on the panel. So everybody loves this talk about cars and autonomous vehicles. You didn't get enough time. (chuckles) So we've got a little more time, which is great. >> Great! >> But one of the things that you brought up that I think was pretty interesting is really, kind of a higher-level view of what role technology plays in our life before. And you said before it was ancillary, it was a toy, it was a gimmick. It was a cool new car, a status symbol, or whatever. But now technology is really defining who we are, what we do, how we interact, not only with the technology of other people. It's really taken such a much more fundamental role with a bunch more new challenges. >> Yeah, and fundamentally that means that these new technologies are helping to determine how our lives go, not just whether we have the latest gadget or status symbol. Previously, as I said, we tended to take on technologies as ornaments to our life, as luxuries to enrich our life. Increasingly, they are the medium through which we live our lives, right? They're the ways that we find the people we want to marry. They're the ways that we access resources, capital, healthcare, knowledge. They're the ways that we participate as citizens in a democracy. They are entering our bodies. They're entering our homes. And the level of trust that's required to really welcome technology in this way without ambivalence or fear, it's a kind of trust that many technology companies weren't prepared to earn. >> Jeff: Right, Right. >> Because it goes much deeper than simply having to behave in a lawful manner, or satisfy your shareholders, right? It means actually having to think about whether your technologies are helping people live better lives, and whether you're earning the trust that your marketing department, your engineers, your salespeople are out there trying to get from your customers. >> Right. And it's this really interesting. When you talked about a refrigerator, I just love that example 'cause most people would never let their next door neighbor look into their refrigerator. >> Shannon: Or their medicine cabinet, right? >> Or their medicine cabinet, right. And now you want to open that up to automatic replenishment. And it's interesting 'cause I don't think a lot of companies that came into the business with the idea that they were going to have this intimate relationship with their customers to a degree, and a personal responsibility to that data. They just want to sell them some good stuff and move on >> Sure. >> to the next customer. >> Yes. >> So it's a very different mindset. Are they adjusting? How are the legacy folks dealing with this? >> Well, the good news is, is that there are a lot more conversations happening about technology and ethics within industry circles. And you even see large organizations coming together to try to lead in an effort to develop more ethical approaches to technology design and development. So, for example, the big five leaders in AI have come together to form the partnership for AI and social good. And this is a really groundbreaking movement that could potentially lead other industry participants to say, "Hey we need to get on board with this, "and we have to start thinking >> Right. >> "about what ethical leadership looks like for us," as opposed to just a sort of PR kind of thing. Yeah, we throw the word "ethics" on a few websites or slides and then we're good, right? >> Right. >> It has to go much deeper than that. And that's going to be a challenge. But it has to be at a level where rank and file workers and project managers have procedures that they know how to go through that involve ethical analysis, prediction, and preparing ethical responses to failures or conflicts that might arise. >> Right, there's just so many layers to this that we could go on for a long time. >> Sure. >> But the autonomous band has kicked up. >> Yes, yes! >> But one of the things is when you're collecting the data for a specific purpose, and you put all the efficacy in as to why and how, and what you're going to treat, what you don't know is how that data might be used by someone else next week, >> Yes. >> next year, >> Yes. >> ten years from now. >> Absolutely. >> And you can't really know because there's maybe things that you aren't aware of. So a very difficult challenge. >> And I think we have to just start thinking in terms of different kinds of metaphors. So data up until now has been seen as something that had value and very little risk associated with it. Now our attitudes are starting to shift, and we're starting to understand that data carries not just value, not just the ability to monetized, but immense power. And that power can be both constructive or destructive. Data is like jet fuel, right? It can do great things. >> Right. >> But you've got to store it carefully. You have to make sure that the people handling it are properly trained. That they know what can go wrong. >> Right. >> Right? That they've got safety regimes in place. No one who handles jet fuel treats it the way that some companies treat data today. But today, data can cause disasters on a scale similar to a chemical explosion. People can die, lives can be ruined, and people can lose their life savings over a breach or a misuse of data that causes someone to be unjustly accused of fraud or a crime. So we have to start thinking about data as something much more powerful than we have in the past. >> Jeff: Right. >> And you have the responsibility to handle it appropriately. >> Right, but we're still so far away, right? We're still sending money to the Nigerian prince who needs help getting out of the airport at Newark Airport. I mean, even just the social, >> Yes. >> the social factors still haven't caught up. And then you've got this kind of whole API economy where so many apps are connected to so many apps. >> Right. >> So even, where is the data? >> Yeah. >> And that's before you even get into a plane flying over international borders while you send an email, I mean. >> Right, yes. >> The complexity is crazy! >> Yep, and we're never going to get a handle on all of it. So one of the things I like to tell people is, it's important not to let the perfect become the enemy of the good, right? >> Jeff: Right. >> So the idea is, yes, the problem is massive. Yes, it's incredibly complex. Can we address every possible risk? Can we forestall every possible disaster? No. Can we do much better than we're doing now? Absolutely. So, I think, the important thing is not to focus on how massive the problem or the complexities are, but think about how can we move forward from here to get ourselves in a better and more responsible position. And there's lots of ways to do that. Lots of companies are already leading the way in that direction. So I think that there's so much progress to be made that we don't have to worry too much about the progress that we might never get around to making. >> Right, right. But then there's this other interesting thing that's going on that we've seen with kind of the whole "fake news," right? Which is algorithms are determining what we see. >> Shannon: Yes. >> And if you look at the ad tech model as kind of where the market has taken over the way that that operates, >> Shannon: Yep. >> there's no people involved. So then you have things happen like what happened with YouTube, where advertisers' stuff is getting put into places where they don't want it. >> Yeah. >> But there's really no people, there's no monitoring. >> Yes. >> So how do you see that kind of evolving? 'Cause on one hand, you want more social responsibility and keeping track of things. On the other hand, so much is moving to software, automation, and giving people more of what they want, not necessarily what they need. >> Well, and that means that we have to do a much better job of investing in human intelligence. We have to, for every new form of artificial intelligence, we need an even more powerful provision of human intelligence to guide it, to provide oversight. So what I like to say is, AI is not ready for solo flight, right? And a lot of people would like that to be the case because, of course, you can save money if you can put an automated adjudication system in there and take the people out. But we've seen over and over again that that leads again and again to disaster and to huge reputational losses to companies, often huge legal liabilities, right? So we have to be able to get companies to understand that they are really protecting themselves and their long-term health if they invest in human expertise and human intelligence to support AI, to support data, to support all of the technologies that are giving these companies greater competitive advantage and profitability. >> But does the delta in the machine scale versus human scale just become unbearable? Or can we use the machine scale to filter out the relatively small number of things that need a person to get involved. I mean. >> Yeah, and the-- >> How do you see some kind of some best practices? >> Yeah, so the answer depends on the industry, depends upon the application. So there's no one size fits all solution. But what we can often do is recognize that typically human and AI function best together, right? So we can figure out the ways in which the AI can amplify the human expertise and wisdom, and the human expertise can fill in some of the gaps that still exist in artificial intelligence. Some of the things that AIs just don't see, just don't recognize, just aren't able to value or predict. And so when we figure out the ways that human and artificial intelligence can compliment each other in a particular stetting, then we can get the most reliable results, and often the fairest and safest results. They might not always be the most efficient from the narrow standpoint of speed and profit, right? >> Jeff: Right, right. >> So they have able to step back and say at the end of the day, quality matters, trust matters. And just as if we put together a shoddy project on the cheap and put it out there, it's going to come back to bite us. If we put shoddy AI in place of important human decisions that affect human lives, it's going to come back to bite us. So we need to invest in the human expertise and the human wisdom, which has that ethical insight to round out what AI still lacks. >> So do you think the execution of that trust building becomes the next great competitive advance? I mean, >> Yeah. >> nobody talks about that right? Data's the new oil, >> Sure! And blah, blah, blah, blah, blah. And software defined, AI driven automation, but that's not necessarily only to the goal in road, right? There's issues. >> Right. >> So is trust, you think? >> Absolutely. >> The next great competitive differentiator? >> Absolutely. I think in the long run it will be. If you look at, for example, the way that companies like Facebook and Equifax have really damaged, in pretty profound ways, the public perception of them as trustworthy actors in, not just the corporate space, right? But in the political space for Facebook, in the economic space for Equifax. And we have to be able to recognize that those associations of a major company with that level of failure are really lasting, right? Those things don't get forgotten in one news cycle. So I think we have to recognize that today people don't know who to trust, right? It used to be that you could trust the big names, the big Fortune 500 companies. >> The blue chips, right. >> The blue chips, right. >> Right. >> And then it was the little fly by night companies that you didn't really know whether you could trust, and maybe you'd be more cautious in dealing with them. Now the public has no way of understanding which companies will genuinely fulfill the trust in the relationship >> Right. >> that the customer gives them. And so there's a huge opportunity from a competitive standpoint for companies to step up and actually earn that trust and say, in a way that can be backed up by action and results, "Your data's safe with us," right? "Your property's safe with us. "Your bank account is safe with us. "Your personal privacy is safe with us. "Your votes are safe with us. "Your news is safe with us." >> Right. >> Right? And that's the next step. >> But everyone is so cynical that, unfortunately Walter Cronkite is dead, right? >> Sure. >> We don't trust politicians anymore. We don't trust news anymore. We don't trust, now more and more, the companies. So it's a really kind of rough world in the trust space. >> Yeah! >> So do you see any kind of (chuckles) silver lining? I mean, how do we execute in this kind of crazy world where you just don't know? >> Well, what I like to say is that you have to be cautiously optimistic about this because society simply doesn't keep going without some level of trust, right? Markets depend on trust. Democracy depends on trust. Neighborhoods depend on trust, right? >> Jeff: Right. >> So either trust comes back into our lives at some deep level or everything falls apart. Frankly, those are the only choices. So if nature abhors a vacuum, and right now we have a vacuum of trust, then there's a huge opportunity for people to start stepping into that space and filling that void. So I'd like to focus on the positive potential here rather than the worst case scenario, right? The worst case scenario is, we keep going as things have been going and trust in our most important institutions continues to crumble. Well, that just ends in societal collapse >> Right, right. >> one way or the other. If we don't want to do that, and I presume that if there's anything we can all agree on, it's that that's not where we want to go. >> Right. >> Then now is the time for companies, if need be, to come together and say, "We have to step into this space "and create new trusted institutions and practices "that will help stabilize society and drive progress "in ways that aren't just reflected in GDP "but are reflected in human wellbeing, "happiness, a sense of security, a sense of hope. "A sense that technology actually does gives us a future "that we want to to be happy about moving into." >> Right, right. >> Right? >> So I'll give you the last word. >> Sure. >> We'll end on a positive note. What are some examples of companies or practices that you see out there as kind of shining lights that other people should be either aware of, emulate. Let's talk about the positive before we >> Sure. cut you lose. >> Well, one thing that I mentioned already is the AI partnership that has come together with companies that are really leading the conversation along with a lot of other organizations like AI Now, which is an organization on the East Coast that's doing a lot of fantastic work. There are a lot of companies supporting research into ethical development, design, and implementation of new technologies. That's something we haven't seen before, right? This is something that's only happened in the last two or three years. It's an incredibly positive development. Now we just have to make sure that the recommendations that are developed by these groups are actually taken onboard and implemented. And it'll be up to many of the industry leaders to set an example of how that can be done because they have the resources >> Right. >> and the ability to lead in that way. I think one of the other things that we can look at is that people are starting to become less naive about technology. Perhaps the silver lining of the loss of trust is the ability of consumers to be a little wiser, a little more appropriately critical and skeptical, and to figure out ways that they can, in fact, protect their interests. That they can actually seek out and determine who earns their trust. >> Right. >> Where their data is safest. And so I'm optimistic that there will be a sort of meeting, if you will, of the public interest and the interests of technology developers who really need the public to be on board, right? >> Jeff: Right. >> You can't make a better world if society doesn't want to come along with you. >> Jeff: Right, right. >> So my hope is, and I'm cautiously optimistic about that, that these forces will come together and create a future for us that we actually want to move into. >> All right, good. I don't want to leave on a sad note! >> Great, yes. >> Dr. Shannon Vallor, she's positive about the future. It's all about trust. Thanks for taking a few minutes. >> Thank you. >> I'm Jeff Frick, she's Dr. Shannon. Thanks for watching. We'll catch you next time. (upbeat techno music)

Published Date : Feb 14 2018

SUMMARY :

but really the trust and ethics conversations. So you were just on the panel, But one of the things that you brought up They're the ways that we find the people we want to marry. It means actually having to think about whether I just love that example that came into the business with the idea How are the legacy folks dealing with this? to say, "Hey we need to get on board with this, as opposed to just a sort of PR kind of thing. that they know how to go through that we could go on for a long time. And you can't really know not just the ability to monetized, but immense power. You have to make sure that the people handling it that causes someone to be unjustly accused And you have the responsibility I mean, even just the social, the social factors still haven't caught up. And that's before you even get into a plane flying So one of the things I like to tell people is, that we don't have to worry too much about the progress But then there's this other interesting thing So then you have things happen On the other hand, so much is moving to software, Well, and that means that we have to do a much better job that need a person to get involved. and the human expertise can fill in some of the gaps So they have able to step back and say but that's not necessarily only to the goal in road, right? So I think we have to recognize that you didn't really know whether you could trust, that the customer gives them. And that's the next step. in the trust space. you have to be cautiously optimistic about this So I'd like to focus on the positive potential here and I presume that if there's anything we can all agree on, if need be, to come together and say, Let's talk about the positive before we in the last two or three years. and the ability to lead in that way. and the interests of technology developers if society doesn't want to come along with you. that these forces will come together and create a future I don't want to leave on a sad note! Dr. Shannon Vallor, she's positive about the future. We'll catch you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

EquifaxORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

Jeff FrickPERSON

0.99+

VallorPERSON

0.99+

ShannonPERSON

0.99+

Walter CronkitePERSON

0.99+

Paul DaughertyPERSON

0.99+

Shannon VallorPERSON

0.99+

2018DATE

0.99+

next yearDATE

0.99+

next weekDATE

0.99+

YouTubeORGANIZATION

0.99+

todayDATE

0.99+

Newark AirportLOCATION

0.99+

AccentureORGANIZATION

0.98+

oneQUANTITY

0.98+

Santa Clara UniversityORGANIZATION

0.98+

bothQUANTITY

0.97+

five leadersQUANTITY

0.96+

about 200 peopleQUANTITY

0.95+

Accenture Technology Vision 2018EVENT

0.91+

three yearsQUANTITY

0.91+

East CoastLOCATION

0.9+

one wayQUANTITY

0.89+

one news cycleQUANTITY

0.87+

Dr.PERSON

0.86+

CUBEORGANIZATION

0.82+

NigerianOTHER

0.72+

yearsQUANTITY

0.57+

thingsQUANTITY

0.57+

twoQUANTITY

0.53+

lastDATE

0.52+

tenDATE

0.49+

500QUANTITY

0.31+

VisionEVENT

0.25+

Parvesh Sethi, HPE Pointnext | HPE Discover 2017 Madrid


 

>> Announcer: Live from Madrid, Spain it's theCUBE. Covering HPE Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise. >> We're back in Madrid everybody. This is theCUBE, the leader in live tech coverage. My name is Dave Vallante and I'm here with my co-host Peter Burris. This is day one of HPE Discover Madrid. Parvesh Sethi is here, he's the Senior Vice President and General Manager of the global client services at HPE Point Next. Parvesh thanks for very much for coming back on theCUBE. >> Good to be here. >> Dave: Last time we saw you, you were 30 days into the job. >> That's right. >> Maybe 45 days. So how's the first six, seven months been? >> It's been busy. It's actually been very good. I administered the transformation change that's taking place within the company. It's actually been really good to also working with the clients on the hybrid IT journey side of the house. And since last we spoke, we also did the CTP acquisition, which has been very well received as well. >> Well I love it, when you guys go and talk about transformations to customers. We're experts. >> Parvesh: Yes. >> We live this. >> Parvesh: Live this everyday. >> Does that enter into the discussions with your customers? It must right? >> Yeah I think it gives us a lot of credibility. Especially when you take a look at the journey they're on. And we talk a lot about hybrid IT today, making it simple. And one of the things we always talk to them about is that hybrid IT is not just infrastructure cloud. You really have to take a look at the full spectrum of the services that had to be delivered. It could be as a service providers, could be subscribing to a platform, and hosting it on-prem, off-prem, private dedicated infrastructure, or public cloud. Just a mix of those and being able to decide as to what are the characteristics that you should look at, and what will decide as to what goes into public cloud, private cloud, or where should those services come from. >> What do you tell the skeptics? You know the, why should I do hybrid cloud? Why don't I just put everything in the cloud? Do you get those questions, or is it more customers saying hey, help me with my hybrid problem? What's the-- >> Almost every single client meeting that I've been in. Everyone acknowledges the world is hybrid IT. And I have not met a single client yet who says all of their workloads are going into public cloud. I think a lot of it depends on what they want to achieve. If they want a lot of elasticity and if they need SLAs, or if they want to bring the workload back in, security compliance or organizational cultural governance processes, performance characteristics. A lot of those factors come into play as to deciding what goes where. And I think almost everyone says that it's never going to be 100% this or that just based on the characteristics that would really dictate where the workload or the application says. >> And that's the characteristics of the data. Is that fair? 'Cause it used to be, oh security. And you know public cloud, gives you fine security. Maybe not exactly the way you want it done, but is it more the realization about, you just can't move all the data into the cloud? Or you can't force your business into the cloud? What are customers saying there? >> I think part of it also comes into, for example, governance as well. If there's HIPAA compliance workloads as an example, that may dictate your decision in a certain way. But you're right though, I mean security used to be one of the big concerns, but it's more about now a person has decided they want to move a certain workload over, it's really more about how do you get them comfortable, how do you de-risk that move? And this is where thinking through the journey roadmap really becomes critical. But just because of that one aspect, it's not necessarily stopping people from moving, but it's really baking that into the design criteria as to how you move it. >> Well while we're on security, I mean, in the last five years it's obviously become a board level topic. People have I think come to the recognition. Maybe the recognition, maybe the spending hasn't shifted, but the mindset's shifted that we can't just create a moat, you know, they're gonna get in. Once they get in we have to respond. We need analytics and response mechanisms and so. How are they coming to you for help? What are they asking you for, and how are you helping? >> So I think it certainly comes into more into place can not be an afterthought. It's really more about security in and the governance has to be kind of baked in from the front end of it. So everything that we do, whether it's any solution that we're doing from IoT perspective all the way to the hybrid IT, from an architecture blueprint perspective we have made sure the security's front and center of everyone of those designs as well as the discussion criteria with the client. And so when you start looking at it's not about security partial assessment. It's also kind of looking at designing security from a, you know, architecture blueprint perspective. And making sure that if somebody's talking about hybrid IT architecture or an IoT use case, that security's front and center of the design criteria. >> If you think about the challenges that your sales, well let's step back. If you think about the challenges that everybody has at conceiving of how best to associate data, workload, and cloud implementation. Hybrid, on-premise, off-premise, wherever it is. There are, you have to have a common framework, what used to be called a computing model. A way of thinking of how you address the problem. That your sales people has to have, your support people have to have, you have to have, your customer has to have. Are there like two or three things that you're telling your people to look forward, or look for and working with their customers to help provide those clues. So crucial to getting everybody on the same page early as to where workloads are gonna end up. Where data is gonna end up. >> That's a great question, and one of the things that we're making sure that our folks are not just talking about the hardware piece of it. It's really more about before the hardware discussion takes place, making sure that we completely align on the workload strategy. As part of the workload strategy, you know we will do workshops, and we'll make sure that we totally understand in terms of what is it they're trying to accomplish in terms of the workload migrations. And before we even get to the migration topic, we really go through this criteria in terms of assessing the workloads. Which workloads are more suited to go into cloud environment. And in areas which we may need to re-architect the application or re-write it. We also kind of put those into a specific category and take a look at making sure that is the performance criteria more, is it security is it more about the TCO, and more and more you're starting to see it's not really a brokerage discussion. It's really more of strategic sourcing discussion because you're more and more are starting to talk about what is the best source to get the service from. Because there's no shortage of choices that they have today where they can provide these services from. So it's really more of about understanding what they're trying to achieve. And then understanding the sourcing policy. Understanding the alignment between the IT and the governance piece of it, the whole business side of it, and the IT side of it. And then it's really more about the supply chain management. You heard about One'Sphere today. But it's really more about how do you take this complexity out of the hybrid IT environment, and making sure that you can provide the automation and that capability to provide it as easy of environment for them to have a single plane of glass. So those are the key pieces of the framework that we try to make sure everyone is on same page. >> You mentioned cloud technology partners. We heard about One'Sphere today, that's obviously the CTP is part of that announcement. Small company, but very high quality customer base. It's very specialized. Take us through the rationale for the acquisition, kind of what the value is to your customers, and where it's headed. >> I think last time when we spoke we talked about our overall strategy. One of the key pillars is really around making hybrid IT simple. And we know when we talk about hybrid IT it cannot be just the on-prem part of the storyboard. You have to talk about the public cloud side of it as well. And this is where the CTP acquisition comes into play to really plug a hole. I mean we had some capability in house, but not to the extent of what CTP brings to the table. I mean they are premiere partner to AWS, premiere partner to Google, silver partner to Microsoft Azure. And so having that kind of credibility and the recognition in the US and North America, certainly gives us more credibility with our customers talking about the hybrid IT story. And then taking that skillset, assets, and the IP, we want to take that and leverage of our channel community, as well as our install base, as well as of our capabilities in Europe as well Asia, and help scale that globally is really a way how we're gonna leverage this skillset and asset set. >> So we're in beautiful Madrid, Spain at the EMEA Discover. Cloud is a global phenomenon, but it's not uniform. From your perspective of providing services to customers that have global needs as well as local needs, take us through how Europe is different. Start from the observation that we've got North American cloud players, public cloud players, we've got Asian cloud players, we have not an obvious European cloud player. How is it different on a global basis, and what is HP doing to mass those differences, HPE doing, to mass those differences from your global and local customers. >> So I think one of the things you are finding here the need, and we talked about this earlier today, the need for as a consumption models. And you're seeing that the trend globally. And more and more people, more and more customers are talking about not wanting to necessarily own, but how do they pay for what they use. And so one of the things we do is from a framework perspective we've really deployed a very consistent framework, uniformed transformational framework, UTF. And we did apply for a patent for it as well. But the idea there is to leverage a common methodology, common framework to take a client through in terms of how to go about this cloud journey. Everyone is on a different place in terms of the cloud adoption, their digital transformation journey. But through the experiences that we have, I mean we do well over 10,000 engagements a year. Leveraging that IP, we have really built like full interconnected journey roadmaps. And so a client, you can take any client, whether a service provider or enterprise, they're somewhere on that journey roadmap. And they may be in a different place, but being able to talk to them, leveraging that common IP and say look, this is where you're at today, here's the roadmap that you can take to get to your desired end state. And that has really resonated with the clients. And if they truly don't want to own the infrastructure, and they just want to pay as you go, this is where the whole HPE, GreenLake announcements have really come into play. So I think those teams when you take a look at the performance characteristics, organization governance issues. Because one of the things that we find is 70% plus of the clients that we talk to, they have not been able to really maximize the full potential of what hybrid IT gives them. And one of the major hurdles we see, and doesn't matter whether you talk to a client in North America or EMEA or APJ. It's really the lack of focus on management of change. It's the organizational, the cultural barriers that get in the way. It's the competencies, the organizational processes that get in the way. So those are the pieces we want to make sure as part of the UTF framework, IT is just one of the principles. And of the other domains, management of change is one of the key elements that we see, which is common across all the client base that we talk to. >> When you go back to the early part of this decade, and you observe sort of the big, remember the big data meme it sort of exploded in 2010, 2011, 2012. It ended up being a very, complex of course, but also very services led, engagements because it was so complex. IoT is somewhat similar, it's very data oriented, it's very complex. So talk about services and the relationship with IoT, the opportunity for you and how you're helping add value to customers. >> Now that's a great question also Dave. I think when you take a look at the IoT. I think we're starting to get past that half cycle. And a lot players will talk about they got hundred plus proof of concepts going in their lab, but they just have not been able to bring it into the mainstream. And so one of the things we're talking to clients about is starting to move away from the terms like proof of concept. Focus on proof of value. Because at the end of the day, if you cannot help your line of business accelerate time to value, no matter how great of a concept you have, it's never going to see the day of light. So this is where the point next services really come into play with the whole advisory led motion because it's still very much a services led motion today. Working with the clients around how they can really help shorten the time to value. Accelerate time to value. And if we can take even one or two use cases they have in their labs today. And show them how they can get to 50, 60 million dollars of savings like one of the oil and gas customers we were just talking to. Same thing we see in the retail manufacturing. Is just taking some of the spoof points, and say this is how you can actually bring them into the mainstream, and make sure they also start to have the business alignment. That's one of the common things we hear from the CXOs here this week is the business alignment between the IT and the OT side if they're talking to the IoT use cases. Because without the business alignment, believe me you're not gonna be able to get the management of change that you're seeking to derive. >> So do you expect or are you seeing yet new business models. You were talking about the cost savings, but what about sort of the new business models emerging from those discussions and opportunities. >> Definitely I mean if you take a look at whether it's the hospitality suite, you know Kathy talked about main stage about even the retail experience that we're just starting to be very different. So when you look at the new value that's being created, you know a lot of us who travel to get here, when we check into the hotel, a number of places now, you can check in digitally, 24 hours in advanced. You never have to stand in line for a queue. Don't have to flash up your credit card because the hotel's have really now started to leverage the digital transformation where 24 hours in advanced you can check in online. They'll give you a digital key so on your phone when you walk into the hotel, as soon as you're within a threshold you get onto your wifi network and you see a personalized message. And it has also the directions to your room. And when you get to your room, you use the digital key to get in. Think about the possibilities it creates to launch new services for not just the hotel, but it's also affiliates, the partners for pushing specific targeted advertising offers while you're in Madrid here or some other place. So you're starting to see these new value creations even though behind the scenes you still have them integrate a lot of their digital critical business systems whether it's CRM, reservation systems, or smart buildings. You have to still make sure the security's in play. And so it is really you checking in, not someone else. As well as making sure the room is available. But it's really more focused on the business outcome. And this is one of the things that you're seeing even in a portfolio shift, it's no longer talking about some implementation services, integration services. When we sit down with a client it's really more focused around what outcome are we delivering. It's not talking about, look we can sell you x numbers of servers, or we can sell you devices. More about here's the business outcome that we'll deliver for you. And this is what you're gonna be able to do with that additional value creation. >> Do you mean I might be able to not have to wait in line a half hour when I check into a Las Vegas hotel in the future? >> Parvesh: Absolutely. >> Peter: No that will never happen. (laughing) >> No definitely, I mean you see improvements every single year. And hopefully, whether you walk into a retail shop, be able to experience differently walking from home into a branch store and what that experience will look like, it'll be very very different than what some of the people experience today. >> Lots of changes coming. All sort of based on the data, Parvesh thanks very much for coming on theCUBE, it was great to see you. >> Absolutely it's great to be here, thank you so much. >> You're welcome alright keep it right there everybody we'll be back with our next guest Dave Vallante for Peter Burris. This is theCUBE, we're live from HPE Discover Madrid 2017. (electronic music)

Published Date : Nov 28 2017

SUMMARY :

Brought to you by Hewlett Packard Enterprise. and General Manager of the global client services you were 30 days into the job. So how's the first six, seven months been? I administered the transformation change Well I love it, when you guys go and talk And one of the things we always talk to them about is that just based on the characteristics that would really Maybe not exactly the way you want it done, but it's really baking that into the design criteria but the mindset's shifted that we can't just It's really more about security in and the governance you have to have, your customer has to have. and making sure that you can provide the automation that's obviously the CTP is part of that announcement. and the recognition in the US and North America, Start from the observation that we've got North American And so one of the things we do is the opportunity for you and how you're helping Because at the end of the day, if you cannot help So do you expect or are you seeing yet And it has also the directions to your room. Peter: No that will never happen. And hopefully, whether you walk into a retail shop, All sort of based on the data, Parvesh thanks very much This is theCUBE, we're live from HPE Discover Madrid 2017.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Peter BurrisPERSON

0.99+

Dave VallantePERSON

0.99+

EuropeLOCATION

0.99+

DavePERSON

0.99+

KathyPERSON

0.99+

AWSORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

AsiaLOCATION

0.99+

30 daysQUANTITY

0.99+

2010DATE

0.99+

2011DATE

0.99+

45 daysQUANTITY

0.99+

PeterPERSON

0.99+

twoQUANTITY

0.99+

MadridLOCATION

0.99+

100%QUANTITY

0.99+

2012DATE

0.99+

oneQUANTITY

0.99+

USLOCATION

0.99+

24 hoursQUANTITY

0.99+

Las VegasLOCATION

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

North AmericaLOCATION

0.99+

Parvesh SethiPERSON

0.99+

ParveshPERSON

0.99+

HPORGANIZATION

0.99+

GreenLakeORGANIZATION

0.99+

Madrid, SpainLOCATION

0.98+

OneQUANTITY

0.98+

todayDATE

0.98+

this weekDATE

0.98+

seven monthsQUANTITY

0.98+

HIPAATITLE

0.97+

One'SphereORGANIZATION

0.97+

EMEALOCATION

0.97+

HPEORGANIZATION

0.96+

three thingsQUANTITY

0.96+

50, 60 million dollarsQUANTITY

0.96+

first sixQUANTITY

0.96+

one aspectQUANTITY

0.95+

ParveshORGANIZATION

0.95+

single clientQUANTITY

0.94+

AsianOTHER

0.94+

North AmericanOTHER

0.93+

two use casesQUANTITY

0.93+

EuropeanOTHER

0.93+

HPE PointnextORGANIZATION

0.91+

over 10,000 engagements a yearQUANTITY

0.9+

single planeQUANTITY

0.87+

earlier todayDATE

0.85+

theCUBEORGANIZATION

0.83+

70% plusQUANTITY

0.79+

HPE PointORGANIZATION

0.76+

APJLOCATION

0.74+

last five yearsDATE

0.73+

early partDATE

0.73+

pillarsQUANTITY

0.73+

hundred plus proofQUANTITY

0.72+

2017DATE

0.7+

Microsoft AzureORGANIZATION

0.67+

HPEEVENT

0.66+

day oneQUANTITY

0.66+

a half hourQUANTITY

0.66+

VMworld 2017 Preview


 

>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here are your hosts, Dave Vellante and Stu Miniman. >> 2010 was the first year we brought theCUBE to VMworld. At that time, VMware was a $2.5 billion company with former Microsoft exec Paul Maritz at the helm. Two years earlier, in a stunning development, VMware fired co-founder and CEO Diane Greene, which sent the company's stock tumbling almost 25%. Under pressure from investors, Joe Tucci, the chairman of EMC, made the move after a rocky four-year relationship with Ms. Greene. EMC purchased VMware in 2004 for $635 million. The Maritz years were marked by a strategy to move the company beyond the hypervisor into new areas of growth, including desktop virtualization and applications, which were met with mixed market responses. To Maritz's credit, however, the company continued to expand its presence in the data center, and under his leadership remained highly competitive with Microsoft, who was seen at the time as VMware's main rival. In 2012, the company named long-time Intel and then recently EMC exec, Pat Gelsinger as its CEO. Gelsinger inherited a roughly $4.5 billion company, staring into the teeth of the oncoming cloud megatrend. Gelsinger quickly embarked on a strategy to refocus on the core business, buoyed by a restructuring of many of the VMware assets that EMC and VMware folded into a new company called Pivotal. Gelsinger made several attempts to maintain and expand VMware's total available market with a public cloud play called vCloud Air, which ultimately failed. On the plus side of the ledger, however, Gelsinger led VMware's software-defined data center strategy grabbing pieces of its value chain that were historically left for the ecosystem. Of course, the most notable being NSX, the company's software-defined networking product, and vSan, a software storage play. Fast forward to 2017, and add to these developments the momentum of VMware's cloud management and orchestration offerings, its security and other multi-cloud services, and you now have a nearly $8 billion revenue company growing at 10% per anum, with a $40 billion market cap, and a new owner, namely, Michael Dell and company. Hello, everyone. My name is Dave Vellante and I'm here with Stu Miniman, and this is our VMworld 2017 preview. Stu, thanks for joining me. >> Dave, can't believe it's bene eight years we've been doing theCUBE at VMworld. >> Right, and we have been tracking this, Stu, and now, as we were saying, we see new owners, Michael Dell, Dell buying EMC, and of course VMware maintaining the vast majority of the ownership. Stu, what has changed since Michael Dell purchased VMware? What's changed in terms of Dell, its ownership, and also in the past year? >> Yeah, so it's been one of the top questions. Last year, John Furrier and I interviewed Michael Dell, and there were still everybody trying to say after the acquisition happened, "Aren't you going to just sell of VMware because VMware "needs to be independent, "they need to be able to partner with everyone?" And Michael was basically like, just lit a fire underneath him, and he's like, "People that think I'm going to sell it "don't understand the business plan "and they don't understand math." Everybody thought, "Oh, you got to sell them off "to be able to pay down the debt," and he's like, "No. "VMware has been called the jewel of this acquisition "of EMC, the largest acquisition in tech history." And that relationship of VMware is something that's still playing out. One piece of it, you mentioned vSAN, one of the success stories, there was the failure of EVO:RAIL, which was kind of the first generation solution put together sold through a whole lot of partners. They took that whole product and marketing team and put them together with EMC and created the VxRail team, which now reports up to Chad Sakac. On the Dell/EMC side, VxRail doing quite well, vSAN doing phenomenally well. They claim to have the most number of customers for any product in the hyper-converged infrastructure space. Lots of different solutions out there. So, some of that blending of how Dell/EMC and VMware, we see a little bit of that, but still, VMware partners with everyone. VMworld, still, Dave, is probably the largest infrastructure ecosystem out there, and even if we look at cloud, it's one of the more robust ecosystems out there. The only one probably rivals it these days is Amazon. >> Stu, isn't Dell's ownership of VMware somewhat more threatening to server vendors in particular than EMCs? Especially Cisco, IBM, HPE, large volume movers of VMware licenses, how has that affected the dynamic in the ecosystem? >> Yeah, Dave, we've talked in previous years. I was at EMC back at the beginning of the VMware relationship. EMC really didn't know what it was getting when it got VMware. It was less dollars were going to go into servers because we consolidate with virtualization, and less dollars to servers should mean more dollars to storage, good for EMC. Well, Dell, number one thing that Michael Dell wants to do is sell Dell servers. So, of course, if I'm someone else in that ecosystem, if I'm selling other servers, if I'm selling storage that doesn't run on Dell gear and not part of that Dell ecosystem, absolutely it could be a threat. Micheal has maintained the they're going to keep VMware, allow them to have their independence, and I haven't heard too many rumblings from the ecosystem that they've messed up the apple cart from VMware's standpoint. >> Okay, last year the talk was that Pat Gelsinger was on his way out. >> Stu Miniman: Yeah. >> You see Pat Gelsinger doesn't appear to be on his way out. There's earnings momentum, which we'll talk about, but thoughts on management? >> Yeah, so, right, Dave. Number one thing is we thought Pat would be out. Things are doing better from a stock market. You talked about the growth, 10% per anum right now is solid VMware. We've seen a number of moves and changes, people that, there have been a lot of people that have left. There's new people that have come in. There are areas that are doing quite well, and virtualization is still a mainstay of the data center. One of the things we'll talk about, I know, is that Amazon relationship, which we expect to hear a lot about at the show. Amazon's one of the Global Diamond partners, which, a year ago if you had said that Amazon was one of the top partners up there with the likes of Hewlett Packard Enterprise, OVH took over the vCloud Air business, which is, as you said, it failed from VMware's standpoint. They still have a number of partners. Companies like Rackspace, OVH that took over that vCloud Air business, and lots of service providers are doing quite well selling VMware lots of places. And virtualization still is the foundational layer for most infrastructure. >> So VMware pre-announced earnings to the upside and future growth ahead of expectations, so the stock got a nice pop out of that. What's driving that momentum? >> The two areas you talked about first. vSAN is doing quite well. It's driving a lot of adoption and trying to get VMware to be a little bit more sticky and really kind of slowly expand as opposed to big chunks. We talked about when Pat first went in as CEO, it was, VMware had to play a similar game to what Intel did, Dave, which is how do they expand what they're doing without really ostracizing their ecosystem. And, to their credit, they've done a pretty good job of that. They baked in some backup solutions, but lots of backup solutions, you and I were at the vMon conference earlier this year. VM's still doing a very solid business inside of VMware's ecosystem. Lots of other players that play well there. NSX is really starting to hit its stride, that networking piece, but where a few years ago we were talking about it was VMware versus Cisco, well, they seem to be kind of settling into their swim lanes. Cisco still has their core networking business. Cisco's trying to become more of a software company. Cisco actually recently bought Springpath, which was their hyper-converged product, but today that's far behind what vSAN's doing, revenue, users, and everything like that. AirWatch was another acquisition. Sanjay Poonen really helped drive that forward. So the mobility play, VMware's doing well. A lot of the emerging areas, we've been waiting to see where VMware goes with them. Things that I look at like containerization, server lists, open stack VMware had some plays there. They are really kind of nascent at this point and haven't really exploded. I always look at this show, are we seeing many developers there? Lots of the shows we go to have a big developer group. We'll have a little bit of developers, but it's really still a small piece of the overall picture. There's still lots of virtualization admins, people looking at where VMware fits into cloud, and that's kind of where it sits today. >> Let's talk about the competitive dynamic, which is totally different. I mean, back when we first started covering VMworld with theCUBE, 2010, it was really Citrix, Microsoft, Citrix with VDI. You mentioned AirWatch, which kind of flipped the dynamic a little bit. Quite a bit, actually. But Microsoft was the key virtualization competitor. Now it's like competitors, partners, you've got Google Cloud, now, of course, Diane Greene running Google Cloud, which is kind of ironic. We can talk about that. Microsoft with Azure, AWS, which is, we expect to hear a lot from VMware at VMworld 2017 about the AWS relationship. Certainly, IBM with its cloud. Nutanix, which launched at VMworld several years ago, is now more competitive. You mentioned Cisco. They're clearly more competitive with NSX. How do you describe the competitive landscape? What should we be watching at this year's show? >> Yeah, Dave, first of all, you talked about how VMware grew from kind of the $2.5 billion to more like an $8 billion, so of course they're bumping into, kind of going over some of their swim lanes a little bit, and the market has matured. Absolutely, hyper convergence for the last few years has been one of the hot spots, not only for VMware, first when they launched vSAN, it actually was the tide that rose for a lot of their competitors out there. Nutanix, SimpliVity, many of these companies said that they actually stopped a lot of their outbound marketing for about a year because all the people that called up looking at vSAN went to those solutions. Now vSAN's hitting its stride. It's doing really well. I highlighted how VxRail is doing great revenue on the Dell/EMC side, and there's still lots of partners that VMware has. So hyper converge, absolutely something that we'll see there. Cloud, big piece. I mentioned Rackspace, OVH, all the service providers. The vCloud Air network is still kind of there. So how VMware is getting into the service providers, how they're getting into the cloud, I know we'll talk a little bit more about the cloud piece. Last year it was the Cloud Foundation suite, which takes vSAN, and NSX, and vSphere, puts it all together with a management, and that's something that VMware wants to be able to put on prem in a service provider or in AWS. So really, wherever you go, VMware is going to be there and stretch that, but it's like a four-node star configuration. It doesn't natively go into Amazon. That's been a lot of the lift that's been happening over the last year to try to get that VMware on AWS working, and I hear it's not 100% baked yet by the time we get to the show, but working out a lot of those details. But cloud, hyper-converged, some of the new ones. VDI will still come up too, I'm sure. >> How about Docker? Where do they fit in the competitive landscape? >> Yeah, it's interest, remember, I remember the last year we had the show in San Francisco we had Ben Golub, a CEO at Docker, on the program there. Ben's no longer the CEO. They switched CEO's. We had theCUBE at DockerCon this year. Containers, absolutely very important. VMware has something called VMware Integrated Containers. I hear a little bit about it, but most people, if they're saying, "I'm doing virtualization," they're probably doing it on Linux. So Red Hat Summit this year, heard a lot about containers. We're going to have theCUBE at Kubecon, which is the Kubernetes show, later this year. So we know VMware plays a little bit with Docker. I'd love to see VMware saying how they fit into the Kubernetes piece a little bit more. We heard of the Cloud Foundry Summit earlier this year, how Pivotal kind of fits into that environment and they've got a way to be able to spread across multiple environments there. But VMware tends to play in a little bit more traditional applications. And, Dave, when you talk about a competitive standpoint, that's what I look at for VMware. The biggest threat to them is they don't own the application, so Microsoft, Oracle, IBM, and all those cloud-native apps that are getting put in the public cloud, like Google, and Amazon, and Microsoft, does that leave VMware behind? Does VMware, I heard it many times last year, become the new Legacy? >> Well, and, but they're clearly positioned as an infrastructure player, so let's talk about that. I mean, cloud has become the new, infrastructure and service, become the new big competitive threat to on-prem infrastructure. Wikibon has done some research on the true private cloud. Interestingly, I mean, true private cloud essentially is a moniker representation of public cloud-like attributes on prem, bringing cloud, cloud models, to the data, for example, and Wikibon has forecast that as the largest market. I think I've got some data here. It shows that true private cloud over time will be a $230 billion market, whereas infrastructures and service in the public cloud will be about 150 billion. So you expect that true private cloud is going to overtake that. It's growing faster. The CAG here is 33% versus public IAS at 15%, but the big thing is staff. >> Yeah. >> Staffing, getting taken out essentially, getting out of non-differentiated heavy lifting, but what is VMware's cloud strategy generally, but specifically with regard to bringing the cloud model to the data on prem? >> Yeah, so when we created the true private cloud definition, we said,"Vvirtualization alone is not cloud, "and therefore, what do we need? "We really need to have that automation, "that orchestration." And VMware had done a number of acquisitions, they're putting the suite of solutions together, and it's more than just saying, "Oh, I have six different software products; "here's a bundle." How do we fully integrate that? And that's what the Cloud Foundation suite's what VMware put together so that I can have it in a virtual private cloud in Amazon. And it's something basically VMware manages it, but it's Amazon's data center, and that's plugged into the public clouds. I can do the similar sort of thing in the service providers and that's why, with our forecast, Dave, we show in about five years, true private cloud should have more revenue than public cloud. Big reason is because there's a whole lot of Legacy out there and moving from all of my, most companies hundreds if not thousands of applications, getting all of them to the public cloud is tough. Having them in a virtualized environment and being able to slide them over to this kind of environment makes a lot of sense. I can do that. And the shift of my workloads and my applications going to microservices really starting to break apart some of the the pieces is something that a lot of times that's going to take five to 10 years. So, in the meantime, we're going to shift kind of Legacy to private cloud while we're picking off the things that we can with the public cloud. And VMware with their Cloud Foundation suite and their solutions that they're putting together, networking as, really, the inter fabric with NSX, vSAN making it easy to make those applications a little bit more portable between different types of infrastructure, but that's really, VMware is they put their cloud play, and they have a very large set of partners that they're working with in this space. >> So, Stu, how should we look at the VMware AWS deal? Is it AWS's attempt to get a piece of the true private cloud action on prem? Is it VMware's initiative to try to actually get a cloud strategy that has teeth, and works, and has longevity? How should we think about that? >> Yeah, it's, of course, a little bit of both. At its core, I think it's Amazon looks at 500,000 VMware customers that have data center deployments and they're going to stick a straw into that environment and say, "Come try out the first taste of our services," and once you get on the Amazon services which, by the way, they're launching, what, three new features every week, I think. I was at the Amazon Summit in New York City recently and it was like, "Oh, it's a regional summit," there were like three main announcements. No, I got the email. There were like 12 announcements and each one of them were kind of cool and things like that. So it absolutely is how do I get customers comfortable with moving to this new model. I think one of the things that Microsoft did really well is when they pushed everybody to Office 365, they said, "SaaS is the way you should always think "about buying your applications going forward, not, "I'm going to deploy a server for my Outlook, "I'm going to deploy infrastructure for my SharePoint." It's, "I'm going to buy Office 365 and that's just "the way it's done." So they made it the okay. Now VMware, it's really dangerous, in a way, saying, working with Amazon, now we're saying, "Hey, playing on Amazon's safe. "The water's nice." And once they get in that water and you have access to all of those cool things that Amazon keeps putting out, which, by the way, Dave, the week after they announced the partnership of VMware and AWS, what Amazon announced was, "There's a really easy "migration service that, if you have "a VMware Ware environment, "you just kind of click this button." And I'm pretty sure it's for free. "You can now be completely on AWS "and you don't have to pay for VMware licensing anymore. "Wouldn't that be nice?" >> So, okay, so the way you've phrased it or framed it, is it sounds like that VMware, with its half a million customers, has more to lose than AWS in this deal. Is that the right way to think about it or is this not a zero-sum game? >> I don't think it's a zero-sum game when, you brought up the true private cloud. The data center still, there's room for some growth with VMware, even if people are 90% virtualized now, there's some room for growth there. Public cloud, though, has a strong growth engine, so now VMware has a play there. Rather than saying, "It's the book seller, don't go there," they want to have a play. Michael Dell, Dave, I'm sure we're going to ask him, say, "Hey, what do you think the world's going to look like "in five years? "You've got your Azure Stack partnership "that you're lining up with your server division "and with EMC, you've got Amazon that VMware's playing with, "you've got your data center; "how does that go?" And, of course, Michael being the smart businessman that he is, is going to say, "Uh, yeah, you're going to buy Dell "no matter what solution you go with, "and I'm going to have a strong position "in all of them." but it definitely is, we're in a bit of a transitional phase as to how this is going to look. We've, for years, been arguing how big does public cloud get, what applications go where. I do think that this has the potential to accelerate a little bit from VMware's standpoint. VMware customers getting in this environment, trying out some of the new things. I know lots of people that were in the virtualization community that are now playing in the public cloud, getting certified, doing the same things that they did a decade ago to get on public cloud. So, as those armies of certified people kind of move over in the skillset, we have a generational shift going on and lots of people are going to be like, "Hey, I don't want to spend 12 to 18 months "building a temple for my data anymore. "I can just spin this up really fast and move." It's interesting, Dave, Cycle Computing, one of the earliest customers that we interviewed at Amazon, was just acquired by one of the other cloud guys, not Amazon. So companies that know, that was an HPC company that was, rather than spend 18 months and $10 million, we can do the same thing in, like, a few weeks and $10,000. >> They're super computing in the cloud. All right, let's wrap with what to expect at VMworld 2017. Obviously it's going to be a lot of people there. They're your peeps. A lot of partying going on. It's like, it used to be Labor Day kicked off the fall selling season, and for years it's been VMworld. What should we look for this year? >> Yeah, so, I'm excited, Dave. It's always, this community, they spend like the whole summer getting ready for it. I'm actually going to be sitting on a panel at Opening Acts, which is, the VMunderground group does on Sunday. So the event really, it doesn't start Monday, Dave, it actually, a lot of people are already flying in by the time this video goes up. They're doing things Saturday. On Sunday there's three panels. I'm sitting on one on buzz words in IT, so to things like cloud and server lists. Are those meaningful or are those a total waste of our time? So that kind of gets us started. You mentioned lot of good parties at the show always. There's the vExpert community. I was a vExpert for a number of years back when it was, you know, hundred, couple hundred people. I think there's now 1,500 vExperts worldwide. We've got a bunch of hosts coming in to help us, including John Troyer who created the vExpert program, Keith Townsend, Justin Warren, excited to have them. Lisa Martin's going to be co-hosting, along with you, me, John Furrier and Peter Burris. So we've got a big team. We've got two sets. We've got a great lineup at theCUBE. Two sets, three days in the VMvillage, which this year is on the first floor right outside of the Expo Hall. So it's one of those things I don't expect to sleep a lot. I expect to see a lot of people, bump into 'em on the show floor, stop by theCUBE, see the parties, and definitely see 'em in the after parties. >> Great. Well, as Stu says, we have two sets going on, so please stop by and see us. Stu, thanks very much for helping me with this VMworld preview. We'll see you in Vegas next week. Thanks for watching, everybody. See you in Las Vegas. This is theCUBE. (electronic music)

Published Date : Aug 22 2017

SUMMARY :

Announcer: From the SiliconANGLE Media office of many of the VMware assets that EMC and VMware Dave, can't believe it's bene eight years and also in the past year? and he's like, "People that think I'm going to sell it Micheal has maintained the they're going to keep VMware, was on his way out. You see Pat Gelsinger doesn't appear to be on his way out. One of the things we'll talk about, I know, so the stock got a nice pop out of that. Lots of the shows we go to have a big developer group. Let's talk about the competitive dynamic, how VMware grew from kind of the $2.5 billion We heard of the Cloud Foundry Summit earlier this year, I mean, cloud has become the new, the things that we can with the public cloud. and they're going to stick a straw into that environment Is that the right way to think about it and lots of people are going to be like, the fall selling season, and for years it's been VMworld. You mentioned lot of good parties at the show always. Well, as Stu says, we have two sets going on,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Lisa MartinPERSON

0.99+

MichaelPERSON

0.99+

EMCORGANIZATION

0.99+

Keith TownsendPERSON

0.99+

AWSORGANIZATION

0.99+

Diane GreenePERSON

0.99+

Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

John TroyerPERSON

0.99+

CiscoORGANIZATION

0.99+

Justin WarrenPERSON

0.99+

VMwareORGANIZATION

0.99+

OracleORGANIZATION

0.99+

Ben GolubPERSON

0.99+

Peter BurrisPERSON

0.99+

AmazonORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

MaritzPERSON

0.99+

Paul MaritzPERSON

0.99+

2012DATE

0.99+

2004DATE

0.99+

Stu MinimanPERSON

0.99+

GreenePERSON

0.99+

Joe TucciPERSON

0.99+

VegasLOCATION

0.99+

GelsingerPERSON

0.99+

Sanjay PoonenPERSON

0.99+

90%QUANTITY

0.99+

12 announcementsQUANTITY

0.99+

fiveQUANTITY

0.99+

$230 billionQUANTITY

0.99+

StuPERSON

0.99+

Last yearDATE

0.99+

18 monthsQUANTITY

0.99+

John FurrierPERSON

0.99+

San FranciscoLOCATION

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

12QUANTITY

0.99+

MondayDATE

0.99+

$2.5 billionQUANTITY

0.99+

$10,000QUANTITY

0.99+

$10 millionQUANTITY

0.99+

OVHORGANIZATION

0.99+

IntelORGANIZATION

0.99+

Janet George, Western Digital | Women in Data Science 2017


 

>> Male Voiceover: Live from Stanford University, it's The Cube covering the Women in Data Science Conference 2017. >> Hi, welcome back to The Cube, I'm Lisa Martin and we are live at Stanford University at the second annual Women in Data Science Technical Conference. It's a one day event here, incredibly inspiring morning we've had. We're joined by Janet George, who is the chief data scientist at Western Digital. Janet, welcome to the show. >> Thank you very much. >> You're a speaker at-- >> Very happy to be here. >> We're very happy to have you. You're a speaker at this event and we want to talk about what you're going to be talking about. Industrialized data science. What is that? >> Industrialized data science is mostly about how data science is applied in the industry. It's less about more research work, but it's more about practical application of industry use cases in which we actually apply machine learning and artificial intelligence. >> What are some of the use cases at Western Digital for that application? >> One of the use case that we use is, we are in the business of creating new technology nodes and for creating new technology nodes we actually create a lot of data. And with that data, we actually look at, can we understand pattern recognition at very large scale? We're talking millions of wafers. Can we understand memory holes? The shape, the type, the curvature, circularity, radius, can we detect these patterns at scale? And then how can we detect if the memory hole is warped or deformed and how can we have machine learning do that for us? We also look at things like correlations during the manufacturing process. Strong correlations, weak correlations, and we try to figure out interactions between different correlations. >> Fantastic. So if we look at big data, it's probably applicable across every industry. How has it helped to transform Western Digital, that's been an institution here in Silicon Valley for a while? >> We in Western Digital we move mountains of data. That's just part of our job, right? And so we are the leaders in storage technology, people store data in Western Digital products, and so data's inherently very familiar to us. We actually deal with data on a regular basis. And now we've started confronting our data with data science. And we started confronting our data with machine learning because we are very aware that artificial intelligence, machine learning can bring a different value to that data. We can look at the insides, we can develop intelligence about how we build our storage products. What we do with our storage. Failure analysis is a huge area for us. So we're really tapping into our data to figure out how can we make artificial intelligence and machine learning ingrained in the way we do work. >> So from a cultural perspective, you've really done a lot to evolve the culture of Western Digital to apply the learnings, to improve the values that you deliver to all of your customers. >> Yes, believe it or not, we've become a data-driven company. That's amazing, because we've invested in our own data, and we've said "Hey, if we are going to store the world's data, we need to lead, from a data perspective" and so we've sort of embraced machine learning and artificial intelligence. We've embraced new algorithms, technologies that's out there we can tap into to look at our data. >> So from a machine learning, human perspective, in storage manufacturing, is there still a dependence on human insight where storage manufacturing devices are concerned, or are you seeing the machine learning really, in this case, take more of a lead? >> No, I think humans play a huge role, right? Because these are domain experts. We're talking about Ph.D.'s in material science and device physics areas so what I see is the augmentation between machine learning and humans, and the domain experts. Domain experts will not be able to scale. When the scale of wafer production becomes very large. So let's talk about 3 million wafers. How is a machine going to physically look at all the failure patterns on those wafers? We're not going to be able to scale just having domain expertise. But taking our core domain expertise and using that as training data to build intelligence models that can inform the domain expert and be smart and come up with all the ideas, that's where we want to be. >> Excellent. So you talked a little bit about the manufacturing process. Who are some of the other constituents that you collaborate with as chief data scientist at Western Digital that are demanding access to data, marketing, etcetera, what are some of those key collaborators for your group? >> Many of our marketing department, as well as our customer service department, we also have collaborations going on with universities, but one of the things we found out was when a drive fails, and it goes to our customer, it's much better for us to figure out the failure. So we've started modeling out all the customer returns that we've received, and look at that and see "How can we predict the life cycle of our storage?" And get to those return possibilities or potential issues before it lands in the hands of customers. >> That's excellent. >> So that's one area we've been focusing quite a bit on, to look at the whole life cycle of failures. >> You also talked about collaborating with universities. Share a little bit about that in terms of, is there a program for internships for example? How are you helping to shape the next generation of computer scientists? >> We are very strongly embedded in universities. We usually have a very good internship program. Six to eight weeks, to 12 weeks in the summer, the interns come in. Ours is a little different where we treat our interns as real value add. They come in, and they're given a hypothesis, or problem domain that they need to go after. And within six to eight weeks, and they have access to tremendous amounts of data, so they get to play with all this industry data that they would never get to play with. They can quickly bring their academic background, or their academic learning to that data. We also take really hard research-ended problems or further out problems and we collaborate with universities on that, especially Stanford University, we've been doing great collaborations with them. I'm super encouraged with Feliz's work on computer vision, and we've been looking into things around deep neural networks. This is an area of great passion for me. I think the cognitive computing space is just started to open up and we have a lot to learn from neural networks and how they work and where the value can be added. >> Looking at, just want to explore the internship topic for a second. And we're at the second annual Women in Data Science Conference. There's a lot of young minds here, not just here in person, but in many cities across the globe. What are you seeing with some of the interns that come in? Are they confident enough to say "I'm getting access to real world data I wouldn't have access to in school", are they confident to play around with that, test out a hypothesis and fail? Or do they fear, "I need to get this right right away, this is my career at stake?" >> It's an interesting dichotomy because they have a really short time frame. That's an issue because of the time frame, and they have to quickly discover. Failing fast and learning fast is part of data science and I really think that we have to get to that point where we're really comfortable with failure, and the learning we get from the failure. Remember the light bulb was invented with 99% negative knowledge, so we have to get to that negative knowledge and treat that as learning. So we encourage a culture, we encourage a style of different learning cycles so we say, "What did we learn in the first learning cycle?" "What discoveries, what hypothesis did we figure out in the first learning cycle, which will then prepare our second learning cycle?" And we don't see it as a one-stop, rather more iterative form of work. Also with the internships, I think sometimes it's really essential to have critical thinking. And so the interns get that environment to learn critical thinking in the industry space. >> Tell us about, from a skills perspective, these are, you can share with us, presumably young people studying computer science, maybe engineering topics, what are some of the traditional data science skills that you think are still absolutely there? Maybe it's a hybrid of a hacker and someone who's got, great statistician background. What about the creative side and the ability to communicate? What's your ideal data scientist today? What are the embodiments of those? >> So this is a fantastic question, because I've been thinking about this a lot. I think the ideal data scientist is at the intersection of three circles. The first circle is really somebody who's very comfortable with data, mathematics, statistics, machine learning, that sort of thing. The second circle is in the intersection of implementation, engineering, computer science, electrical engineering, those backgrounds where they've had discipline. They understand that they can take complex math or complex algorithms and then actually implement them to get business value out of them. And the third circle is around business acumen, program management, critical thinking, really going deeper, asking the questions, explaining the results, very complex charts. The ability to visualize that data and understand the trends in that data. So it's the intersection of these very diverse disciplines, and somebody who has deep critical thinking and never gives up. (laughs) >> That's a great one, that never gives up. But looking at it, in that way, have you seen this, we're really here at a revolution, right? Have you seen that data science traditionalist role evolve into these three, the intersection of these three elements? >> Yeah, traditionally, if you did a lot of computer science, or you did a lot of math, you'd be considered a great data scientist. But if you don't have that business acumen, how do you look at the critical problems? How do you communicate what you found? How do you communicate that what you found actually matters in the scheme of things? Sometimes people talk about anomalies, and I always say "is the anomaly structured enough that I need to care about?" Is it systematic? Why should I care about this anomaly? Why is it different from an alert? If you have modeled all the behaviors, and you understand that this is a different anomaly than I've normally seen, and you must care about it. So you need to have business acumen to ask the right business questions and understand why that matters. >> So your background in computer science, your bachelor's Ph.D.? >> Bachelor's and master's in computer science, mathematics, and statistics, so I've got a combination of all of those and then my business experience comes from being in the field. >> Lisa: I was going to ask you that, how did you get that business acumen? Sounds like it was by in-field training, basically on-the-job? >> It was in the industry, it was on-the-job, I put myself in positions where I've had great opportunities and tackled great business problems that I had to go out and solve, very unique set of business problems that I had to dig deep into figuring out what the solutions were, and so then gained the experience from that. >> So going back to Western Digital, how you're leveraging data science to really evolve the company. You talked about the cultural evolution there, which we both were mentioning off-camera, is quite a feat because it's very challenging. Data from many angles, security, usage, is a board level, boardroom conversation. I'd love to understand, and you also talked about collaboration, so talk to us a little bit about how, and some of the ways, tangible ways, that data science and your team have helped evolve Western Digital. Improving products, improving services, improving revenue. >> I think of it as when an algorithm or a machine learning model is smart, it cannot be a threat. There's a difference between being smart and being a threat. It's smart when it actually provides value. It's a threat when it takes away or does something you would be wanting to do, and here I see that initially there's a lot of fear in the industry, and I think the fear is related to "oh, here's a new technology," and we've seen technologies come in and disrupt in a major way. And machine learning will make a lot of disruptions in the industry for sure. But I think that will cause a shift, or a change. Look at our phone industry, and how much the phone industry has gone through. We never complain that the smart phone is smarter than us. (laughs) We love the fact that the smartphone can show us maps and it can send us in the right, of course, it sends us in the wrong direction sometimes, most of the time it's pretty good. We've grown to rely on our cell phones. We've grown to rely on the smartness. I look at when technology becomes your partner, when technology becomes your ally, and when it actually becomes useful to you, there is a shift in culture. We start by saying "how do we earn the value of the humans?" How can machine learning, how can the algorithms we built, actually show you the difference? How can it come up with things you didn't see? How can it discover new things for you that will create a wow factor for you? And when it does create a wow factor for you, you will want more of it, so it's more, to me, it's most an intent-based progress, in terms of a culture change. You can't push any new technology on people. People will be reluctant to adapt. The only way you can, that people adopt to new technologies is when they the value of the technology instantly and then they become believers. It's a very grassroots-level change, if you will. >> For the foreseeable future, that from a fear perspective and maybe job security, that at least in the storage and manufacturing industry, people aren't going to be replaced by machines. You think it's going to maybe live together for a very long, long time? >> I totally agree. I think that it's going to augment the humans for a long, long time. I think that we will get over our fear, we worry that the humans, I think humans are incredibly powerful. We give way too little credit to ourselves. I think we have huge creative capacity. Machines do have processing capacity, they have very large scale processing capacity, and humans and machines can augment each other. I do believe that the time when we had computers and we relied on our computers for data processing. We're going to rely on computers for machine learning. We're going to get smarter, so we don't have to do all the automation and the daily grind of stuff. If you can predict, and that prediction can help you, and you can feed that prediction model some learning mechanism by reinforced learning or reading or ranking. Look at spam industry. We just taught the Spam-a-Guccis to become so good at catching spam, and we don't worry about the fact that they do the cleansing of that level of data for us and so we'll get to that stage first, and then we'll get better and better and better. I think humans have a natural tendency to step up, they always do. We've always, through many generations, we have always stepped up higher than where we were before, so this is going to make us step up further. We're going to demand more, we're going to invent more, we're going to create more. But it's not going to be, I don't see it as a real threat. The places where I see it as a threat is when the data has bias, or the data is manipulated, which exists even without machine learning. >> I love though, that the analogy that you're making is as technology is evolving, it's kind of a natural catalyst >> Janet: It is a natural catalyst. >> For us humans to evolve and learn and progress and that's a great cycle that you're-- >> Yeah, imagine how we did farming ten years ago, twenty years ago. Imagine how we drive our cars today than we did many years ago. Imagine the role of maps in our lives. Imagine the role of autonomous cars. This is a natural progression of the human race, that's how I see it, and you can see the younger, young people now are so natural for them, technology is so natural for them. They can tweet, and swipe, and that's the natural progression of the human race. I don't think we can stop that, I think we have to embrace that it's a gift. >> That's a great message, embracing it. It is a gift. Well, we wish you the best of luck this year at Western Digital, and thank you for inspiring us and probably many that are here and those that are watching the livestream. Janet George, thanks so much for being on The Cube. >> Thank you. >> Thank you for watching The Cube. We are again live from the second annual Women in Data Science conference at Stanford, I'm Lisa Martin, don't go away. We'll be right back. (upbeat electronic music)

Published Date : Feb 3 2017

SUMMARY :

it's The Cube covering the Women in I'm Lisa Martin and we are going to be talking about. data science is applied in the industry. One of the use case How has it helped to in the way we do work. apply the learnings, to to look at our data. that can inform the a little bit about the the things we found out quite a bit on, to look at the helping to shape the next started to open up and we but in many cities across the globe. That's an issue because of the time frame, the ability to communicate? So it's the intersection of the intersection of I always say "is the So your background in computer science, comes from being in the field. problems that I had to You talked about the how can the algorithms we built, that at least in the I do believe that the time of the human race, Well, we wish you the We are again live from the second annual

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JanetPERSON

0.99+

Lisa MartinPERSON

0.99+

Janet GeorgePERSON

0.99+

Western DigitalORGANIZATION

0.99+

SixQUANTITY

0.99+

LisaPERSON

0.99+

99%QUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

12 weeksQUANTITY

0.99+

Stanford UniversityORGANIZATION

0.99+

third circleQUANTITY

0.99+

first circleQUANTITY

0.99+

twenty years agoDATE

0.99+

second circleQUANTITY

0.99+

eight weeksQUANTITY

0.99+

sixQUANTITY

0.99+

The CubeTITLE

0.99+

ten years agoDATE

0.99+

OneQUANTITY

0.98+

threeQUANTITY

0.98+

eight weeksQUANTITY

0.98+

three circlesQUANTITY

0.98+

Women in Data Science Technical ConferenceEVENT

0.98+

this yearDATE

0.98+

FelizPERSON

0.97+

StanfordLOCATION

0.97+

three elementsQUANTITY

0.97+

oneQUANTITY

0.97+

Women in Data Science Conference 2017EVENT

0.97+

bothQUANTITY

0.96+

Women in Data Science ConferenceEVENT

0.96+

many years agoDATE

0.96+

second learning cycleQUANTITY

0.96+

Women in Data ScienceEVENT

0.96+

one day eventQUANTITY

0.96+

first learning cycleQUANTITY

0.94+

first learning cycleQUANTITY

0.93+

todayDATE

0.91+

one areaQUANTITY

0.91+

Women in Data Science conferenceEVENT

0.89+

secondQUANTITY

0.88+

millions of wafersQUANTITY

0.87+

firstQUANTITY

0.86+

one-stopQUANTITY

0.86+

about 3 million wafersQUANTITY

0.84+

-a-GuccisORGANIZATION

0.81+

The CubeORGANIZATION

0.77+

UniversityORGANIZATION

0.6+

second annualQUANTITY

0.56+

2017DATE

0.51+

CubePERSON

0.36+

James Hamilton - AWS Re:Invent 2014 - theCUBE - #awsreinvent


 

(gentle, upbeat music) >> Live from the Sands Convention Center in Las Vegas, Nevada, it's theCUBE, at AWs re:Invent 2014. Brought to you by headline sponsors Amazon and Trend Micro. >> Okay, welcome back everyone, we are here live at Amazon Web Services re:Invent 2014, this is theCUBE, our flagship program, where we go out to the events and extract synth from the noise. I'm John Furrier, the Founder of SiliconANGLE, I'm joined with my co-host Stu Miniman from wikibon.org, our next guest is James Hamilton, who is Vice President and Distinguished Engineer at Amazon Web Services, back again, second year in a row, he's a celebrity! Everyone wants his autograph, selfies, I just tweeted a picture with Stu, welcome back! >> Thank you very much! I can't believe this is a technology conference. (laughs) >> So Stu's falling over himself right now, because he's so happy you're here, and we are too, 'cause we really appreciate you taking the time to come on, I know you're super busy, you got sessions, but, always good to do a CUBE session on kind of what you're workin' on, certainly amazing progress you've done, we're really impressed with what you guys've done other this last year or two, but this year, the house was packed. Your talk was very well received. >> Cool. >> Every VC that I know in enterprise is here, and they're not tellin' everyone, there's a lot of stuff goin' on, the competitors are here, and you're up there in a whole new court, talk about the future. So, quickly summarize what you talked about in your session on the first day. What was the premise, what was the talks objective, and what was some of the key content? >> Gotcha, gotcha. My big objective was the cloud really is fundamentally different, this is not another little bit of nomenclature, this is something that's fundamentally different, it's going to change the way our industry operates. And what I wanted to do was to step through a bunch of examples of innovations, and show how this really is different from how IT has been done for years gone by. >> So the data center obviously, we're getting quotes after quotes, obviously we're here at the Amazon show so the quotes tend to be skewed towards this statement, but, I'm not in the data center business seems to be the theme, and, people generally aren't in the data center business, they're doing a lot of other things, and they need the data centers to run their business. With that in mind, what are the new innovations that you see coming up, that you're working on, that you have in place, that're going to be that enabler for this new data center in the cloud? So that customers can say hey, you know, I just want to get all this baggage off my back, I just run my business agile and effectively. Is it the equipment, is it the software, is it the chips? What're you doing there from an innovation standpoint? >> Yeah, what I focused on this year, and I think it's a couple important areas are networking, because there's big cost problems in networking, and we've done a lot of work in that area that we think is going to help customers a lot; the second one's database, because databases, they're complicated, they're the core of all applications, when applications run into trouble, typically it's the database at the core of it, so those are the two areas I covered, and I think that's two of the most important areas we're working right now. >> So James, we've looked back into people that've tried to do this services angle before, networking has been one of the bottlenecks, I think one of the reasons XSBs failed in the '90s, it was networking and security, grid computing, even to today. So what is Amazon fundamentally doing different today, and why now is it acceptable that you can deliver services around the world from your environment? What's different about networking today? >> It's a good question. I think it's a combination of private links between all of the regions, every major region is privately linked today. That's better cost structure, better availability, lower latency, scaling down to the data center level we run all custom Amazon designed gear, all custom Amazon designed protocol stacks. And why is that important? It's because cost of networking is actually climbing, relative to the rest of compute, and so, we need to do that in order to get costs under control and actually continue to be able to draw up costs. Second thing is customers need more networking-- more networking bandwidth per compute right now, it's, East/West is the big focus of the industry, because more bandwidth is required, we need to invest more, fast, that's why we're doing private gear. >> Yeah, I mean, it's some fascinating statistics, it's not just bandwidth, you said you do have up to 25 terabytes per second between nodes, it's latency and jitter that are hugely important, especially when you go into databases. Can you talk about just architecturally, what you do with availability zones versus if I'm going to a Google or a Microsoft, what does differentiate you? >> It is a little bit different. The parts that are the same are: every big enterprise that needs highly available applications is going to run those applications across multiple data centers, that's, so-- The way our system works is you choose the region to get close to your users, or to get close to your customers, or to be within a jurisdictional boundary. From down below the region, normally what's in a region is a data center, and customers usually are replicating between two regions. What's different in the Amazon solution, is we have availability zones within region; each availability zone is actually at least one data center. Because we have multiple data centers inside the same region it enables customers to do realtime, synchronous replication between those data centers. And so if they choose to, they can run multi-region replication just like most high end applications do today, or, they can run within an AZ, synchronous multiplication to multiple data centers. The advantage of that, is it takes less administrative complexity, if there's a failure, you never lose a transaction, where in multi-region replication, it has to be asynchronous because of the speed of light. >> Yeah, you-- >> Also, there's some jurisdictional benefits too, right? Say Germany, for instance, with a new data center. >> Yep. Yeah, many customers want to keep their data in region, and so that's another reason why you don't necessarily want to replicate it out in order to get that level of redundancy, you want to have multiple data centers in region, 100% correct >> So, how much is it that you drive your entire stack yourself that allows you to do this, I think about replication solutions, you used SRDF as an example. I worked for that, I worked for EMC for 10 years, and just doing a two site replication is challenging, >> It's hard. >> A multi site is differently, you guys, six data centers and availabilities on a bungee, you fundamentally have a different way of handling replication. >> We do, the strategy inside Amazon is to say multi-region replication is great, but because of the latency between regions, they're a long way apart, and the reality of speed of light, you can't run synchronous. If data centers are relatively close together in the same region, the replication can be done synchronously, and what that means is if there's a failure anywhere, you lose no transactions. >> Yeah. So, there was a great line you had in your session yesterday, that networking has been anti-Moore's law when it comes to pricing. Amazon is such a big player, everybody watches what you do, you buy from the ODMs, you're changing the supply chain. What's your vision as to where networking needs to go from a supply chain and equipment standpoint? >> Networking needs to be the same place where servers went 20 years ago, and that is: it needs to be on a Moore's law curve where, as we get more and more transistors on a chip, we should get lower and lower costs in a server, we should get lower and lower costs in a network. Today, an ASIC is always, which is the core of the router, is always around the same price. Each generation we add more ports to that, and so effectively we got a Moore's law price improvement happening where that ASIC stays the same price, you just keep adding ports. >> So, I got to jump in and ask ya about Open Compute, last year you said it's good I guess, I'm a fan, but we do our own thing, still the case? >> Yeah, absolutely. >> Still the case, okay doing your own thing, and just watching Open Compute which is a like a fair for geeks. >> Open Compute's very cool, the thing is, what's happening in our industry right now is hyper-specialization, instead of buying general purpose hardware that's good for a large number of customers, we're buying hardware that's targeted to a specific workload, a specific service, and so, we're not--I love what happens with Open Compute, 'cause you can learn from it, it's really good stuff, but it's not what we use; we want to target our workloads precisely. >> Yeah, that was actually the title of the article I wrote from everything I learned from you last year was: hyper-specialization is your secret sauce, so. You also said earlier this week that we should watch the mobile suppliers, and that's where service should be in the future, but I heard a, somebody sent me a quote from you that said: unfortunately ARM is not moving quite fast enough to keep up with where Intel's going, where do you see, I know you're a fan of some of the chip manufacturers, where's that moving? >> What I meant with watch ARM and understanding where servers are going, sorry, not ARM, watch mobile and understand where servers is going is: power became important in mobile, power becomes important in servers. Most functionalities being pulled up on chip, on mobile, same thing's happening in server land, and so-- >> What you're sayin' is mobile's a predictor >> Predicting. >> of the trends in the data center, >> Exactly, exactly right. >> Because of the challenges with the form factor. >> It's not so much the form factor, but the importance of power, and the importance of, of, well, density is important as well, so, it turns out the mobile tends to be a few years ahead, but all the same kinds of innovations that show up there we end up finding them in servers a few years later. >> Alright, so James, we've been, at Wikibon have a strong background in the storage world, and David Floyer our CTO said: one of the biggest challenges we had with databases is they were designed to respond to disk, and therefore there were certain kind of logging mechanisms in place. >> It's a good point. >> Can you talk a little bit about what you've done at Amazon with Aurora, and why you're fundamentally changing the underlying storage for that? >> Yeah, Aurora is applying modern database technology to the new world, and the new world is: SSDs at the base, and multiple availability zones available, and so if you look closely at Aurora you'll see that the storage engine is actually spread over multiple availability zones, and, what was mentioned in the keynote, it's a log-structured store. Log-structured stores work very very nicely on SSDs, they're not wonderful choices on spinning magnetic media. So this, what we're optimized for is SSDs, and we're not running it on spinning disk at all. >> So I got to ask you about the questions we're seeing in the crowd, so you guys are obviously doing great on the scale side, you've got the availability zones which makes a lot of sense certainly the Germany announcement, with the whole Ireland/EU data governance thing, and also expansion is great. But the government is moving fast into some enterprises, >> It's amazing. >> And so, we were talking about that last night, but people out there are sayin' that's great, it's a private cloud, the governments implementing a private cloud, so you agree, that's a private cloud or is that a public-- >> (laughing) It's not a private cloud; if you see Amazon involved, it's not a private cloud. Our view of what we're good at, and the advantages cloud brings to market are: we run a very large fleet of servers in every region, we provide a standard set of services in all those regions, it's completely different than packaged software. What the CIA has is another AWS region, it happens to be on their site, but it is just another AWS region, and that's the way they want it. >> Well people are going to start using that against you guys, so start parsing, well if it's private, it's only them then it's private, but there's some technicalities, you're clarifying that. >> It's definitely not a private cloud, the reason why we're not going to get involved with doing private clouds is: product software is different, it's innefficient, when you deliver to thousands of customers, you can't make some of the optimizations that we make. Because we run the same thing everywhere, we actually have a much more reliable product, we're innovating more quickly, we just think it's a different world. >> So James, you've talked a lot that scale fundamentally changes the way you architect and build things; Amazon's now got over a billion customers, and it's got so many services, just adding more and more, Wikibon, actually Dave Vellante, wrote a post yesterday said that: we're trying to fundamentally change the economic model for enterprise IT, so that services are now like software, when Microsoft would print an extra disk it didn't cost anything. When you're building your environment, is there more strain on your environment for adding that next thousand customers or that next big service or, did it just, do you have the substrate built that's going to help it grow for the future? >> It's a good question, it varies on the service. Usually what happens is we get better year over year over year, and what we find is, once you get a service to scale, like S3 is definitely at scale, then growth, I won't say it's easy, but it's easier to predict because you're already on a large base, and we already know how to do it fairly well. Other services require a lot more thought on how to grow it, and end up being a lot more difficult. >> So I got some more questions for ya, go on to some of the personal questions I want to ask you. Looking at this booth right here, it's Netflix guys right there, I love that service, awesome founder, just what they do, just a great company, and I know they're a big customer. But you mentioned networks, so at the Google conference we went to, Google's got some chops, they have a developer community rockin' and rollin', and then it's pretty obvious what they're doin', they're not tryin' to compete with Amazon because it's too much work, but they're goin' after the front end developer, Rails, whatnot, PHP, and really nailing the back end transport, you see it appearing, really going after to enable a Netflix, these next generation companies, to have the backbone, and not be reliant on third party networks. So I got to ask you, so as someone who's a tinkerer, a mechanic if you will of the large scale stuff, you got to get rid of that middleman on the network. What's your plans, you going to do peering? Google's obviously telegraphing they're comin' down that road. Do you guys meet their objective? Same product, better, what's your strategy? >> Yeah, it's a great question. The reason why we're running private links between our regions is the same reason that Google is, it's lower cost, that's good, it's much, much lower latency, that's really good, and it's a lot less jitter, and that's extremely important, and so it's private links, peering, customers direct connecting, that's all the reality of a modern cloud. >> And you see that, and do you have to build that in? Almost like you want to build your own chips, I'd imagine on the mobile side with the phone, you can see that, everyone's building their own chips. You got to have your own network stuff. Is that where you guys see the most improvement on the network side? Getting down to that precise hyper-specialized? >> We're not doing our own chips today, and we don't, in the networking world, and we don't see that as being a requirement. What we do see as a requirement is: we're buying our own ASICs, we're doing our own designs, we're building our own protocol stack; that's delivering great value, and that is what's deployed, private networking's deployed in all of our data centers now >> Yeah, I mean, James I wonder, you must look at Google, they do have an impressive network, they've got the undersea cables, is there anything you, that you look at them and saying: we need to move forward and catch up to them on certain, in certain pieces of the network? >> I don't think so, I think when you look at any of the big providers, they're all mature enough that they're doing, at that level, I think what we do has to be kind of similar. If private links are a better solution, then we're all going to do it, I mean. >> It makes a lot of sense, 'cause it, the impact on inspection, throttling traffic, that just creates uncertainty, so. I'm a big fan, obviously, of that direction. Alright, now a personal question. So, in talking to your wife last night, getting to know you over the years here, and Stu is obviously a big fan. There's a huge new generation of engineers coming into the market, Open Compute, I bring that up because it's such a great initiative, you guys obviously have your own business reasons to do your own stuff, I get that. But there's a whole new culture of engineering coming out, a new home brew computer club is out there forming right now my young son makes his own machines, assembling stuff. So, you're an inspiration to that whole group, so I would like you to share just some commentary to this new generation, what to do, how to approach things, what you've learned, how do you come over, on top of failure, how do you resolve that, how do you always grow? So, share some personal perspective. >> Yeah, it's an interesting question. >> I know you're humble, but, yeah. >> Interesting question. I think being curious is the most important thing possible, if anybody ever gets an opportunity to meet somebody that's the top of any business, a heart surgeon, a jet engine designer, an auto mechanic, anyone that's in the top of their business is always worth meeting 'cause you can always learn from them. One of the cool things that I find with my job is: because it spans so many different areas, it's amazing how often I'll pickup a tidbit one day talking to an expert sailor, and the next day be able to apply that tidbit, or that idea, solving problems in the cloud. >> So just don't look for your narrow focus, your advice is: talk to people who are pros, in whatever their field is, there's always a nugget. >> James a friend of mine >> Stay curious! >> Steve Todd, he actually called that Venn diagram innovation, where you need to find all of those different pieces, 'cause you're never going to know where you find the next idea. So, for the networking guys, there's a huge army of CCIEs out there, some have predicted that if you have the title administrator in your name, that you might be out of a job in five years. What do you recommend, what should they be training on, what should they be working toward to move forward to this new world? >> The history of computing is one of the-- a level of abstraction going up, never has it been the case those jobs go away, the only time jobs have ever gone away is when someone stated a level of abstraction that just wasn't really where the focus is. We need people taking care of systems, as the abstraction level goes up, there's still complexity, and so, my recommendation is: keep learning, just keep learning. >> Alright so I got to ask you, the big picture now, ecosystems out here, Oracle, IBM, these big incumbents, are looking at Amazon, scratching their head sayin': it's hard for us to change our business to compete. Obviously you guys are pretty clear in your positioning, what's next, outside of the current situation, what do you look at that needs to be built out, besides the network, that you see coming around the corner? And you don't have to reveal any secrets, just, philosophically, what's your vision there? >> I think our strategy is maybe a little bit, definitely a little bit different from some of the existing, old-school providers. One is: everyone's kind of used to, Amazon passes on value to customers. We tend to be always hunting and innovating and trying to lower costs, and passing on the value to customers, that's one thing. Second one is choice. I personally choose to run my XQL because I like the product I think it's very good value, some of our customers want to run Oracle, some of our customers want to run my XQL, and we're absolutely fine doing that, some people want to run SQL server. And so, the things that kind of differentiate us is: enterprise software hasn't dropped prices, ever, and that's just the way we were. Enterprise software is not about choice, we're all about choice. And so I think those are the two big differences, and I think those ones might last. >> Yeah, that's a good way to look at that. Now, back to the IT guy, let's talk about the CIO. Scratchin' his head sayin': okay, I got this facilities budget, and it's kind of the-- I talked to once CIO, hey says: I spend more time planning meetings around facilities, power, and cooling, than anything else on innovation, so. They have challenges here, so what's your advice, as someone who's been through a lot of engineering, a lot of large scale, to that team of people on power and cooling to really kind of go to the next level, and besides just saying okay throw some pots out there, or what not, what should they be doing, what's their roadmap? >> You mean the roadmap for doing a better job of running their facilities? >> Yeah, well there's always pressure for density, there's power's a sacred (laughs) sacred resource right now, I mean power is everything, power's the new oil, so, power's driving everything, so, they have to optimize for that, but you can't generate more power, and space, so, they want smaller spaces, and more efficiency. >> The biggest gains that are happening right now, and the biggest innovations that have been happening over the last five years in data centers is mostly around mechanical systems, and driving down the cost of cooling, and so, that's one odd area. Second one is: if you look closely at servers you'll see that as density goes up, the complexity and density of cooling them goes up. And so, getting designs that are optimized for running at higher temperatures, and certified for higher temperatures, is another good step, and we do both. >> So, James, there's such a diverse ecosystem here, I wonder if you've had a chance to look around? Anything cool outside of what Amazon is doing? Whether it's a partner, some startup, or some interesting idea that's caught your attention at the show. >> In fact I was meeting with western--pardon me, Hitachi Data Systems about three days ago, and they were describing some work that was done by Cycle Computing, and several hundred thousand doors-- >> We've had Cycle-- >> Jason came on. >> Oh, wow! >> Last year, we, he was a great guest. >> No, he was here too, just today! >> Oh, we got him on? Okay. >> So Hitachi's just, is showing me some of what they gained from this work, and then he showed me his bill, and it was five thousand six hundred and some dollars, for running this phenomenally big, multi-hundred thousand core project, blew me away, I think that's phenomenal, just phenomenal work. >> James, I really appreciate you coming in, Stu and I really glad you took the time to spend with our audience and come on theCUBE, again a great, pleasurable conversation, very knowledgeable. Stay curious, and get those nuggets of information, and keep us informed. Thanks for coming on theCUBE, James Hamilton, Distinguished Engineer at Amazon doing some great work, and again, the future's all about making it smaller, faster, cheaper, and passing those costs, you guys have a great strategy, a lot of your fans are here, customers, and other engineers. So thanks for spending time, this is theCUBE, I'm John Furrier with Stu Miniman, we'll be right back after this short break. (soft harmonic bells)

Published Date : Nov 13 2014

SUMMARY :

Brought to you by headline sponsors and extract synth from the noise. Thank you very much! 'cause we really appreciate you taking the time to come on, So, quickly summarize what you talked about in your session it's going to change the way our industry operates. I'm not in the data center business seems to be the theme, and I think that's two of the most and why now is it acceptable that you can deliver services private links between all of the regions, what you do with availability zones versus The parts that are the same are: Say Germany, for instance, with a new data center. and so that's another reason why So, how much is it that you you fundamentally have a different way We do, the strategy inside Amazon is to say everybody watches what you do, that ASIC stays the same price, you just keep adding ports. Still the case, okay doing your own thing, and so, we're not--I love what happens with Open Compute, where do you see, I know you're a fan of and understanding where servers are going, and the importance of, of, well, one of the biggest challenges we had with databases and so if you look closely at Aurora you'll see that So I got to ask you about the and the advantages cloud brings to market are: using that against you guys, so start parsing, when you deliver to thousands of customers, that scale fundamentally changes the way and we already know how to do it fairly well. and really nailing the back end transport, and it's a lot less jitter, and that's extremely important, Is that where you guys see the most improvement and that is what's deployed, I think when you look at any of the big providers, getting to know you over the years here, and the next day be able to apply that tidbit, or that idea, talk to people who are pros, in whatever their field is, some have predicted that if you have never has it been the case those jobs go away, besides the network, that you see coming around the corner? and that's just the way we were. I talked to once CIO, hey says: I mean power is everything, power's the new oil, so, and the biggest innovations that have been happening that's caught your attention at the show. he was a great guest. Oh, we got him on? and it was five thousand six hundred and some dollars, Stu and I really glad you took the time

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

JamesPERSON

0.99+

Dave VellantePERSON

0.99+

James HamiltonPERSON

0.99+

Steve ToddPERSON

0.99+

AmazonORGANIZATION

0.99+

IBMORGANIZATION

0.99+

John FurrierPERSON

0.99+

OracleORGANIZATION

0.99+

Trend MicroORGANIZATION

0.99+

JasonPERSON

0.99+

Stu MinimanPERSON

0.99+

MicrosoftORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

StuPERSON

0.99+

twoQUANTITY

0.99+

10 yearsQUANTITY

0.99+

AWSORGANIZATION

0.99+

Amazon Web ServicesORGANIZATION

0.99+

100%QUANTITY

0.99+

NetflixORGANIZATION

0.99+

last yearDATE

0.99+

Last yearDATE

0.99+

yesterdayDATE

0.99+

Hitachi Data SystemsORGANIZATION

0.99+

HitachiORGANIZATION

0.99+

second yearQUANTITY

0.99+

two regionsQUANTITY

0.99+

todayDATE

0.99+

five yearsQUANTITY

0.99+

two areasQUANTITY

0.99+

oneQUANTITY

0.99+

EMCORGANIZATION

0.99+

TodayDATE

0.99+

bothQUANTITY

0.99+

Each generationQUANTITY

0.98+

this yearDATE

0.98+

MooreORGANIZATION

0.98+

CIAORGANIZATION

0.98+

Las Vegas, NevadaLOCATION

0.98+

two siteQUANTITY

0.98+

Second oneQUANTITY

0.98+

Open ComputeTITLE

0.98+

AWs re:Invent 2014EVENT

0.98+

earlier this weekDATE

0.98+

20 years agoDATE

0.97+

last nightDATE

0.97+

WikibonORGANIZATION

0.97+

ARMORGANIZATION

0.97+

secondQUANTITY

0.97+

Sands Convention CenterLOCATION

0.97+

first dayQUANTITY

0.97+

over a billion customersQUANTITY

0.96+

OneQUANTITY

0.96+

two big differencesQUANTITY

0.96+

Second thingQUANTITY

0.95+

one thingQUANTITY

0.95+

sixQUANTITY

0.95+

James Hamilton, AWS | AWS Re:Invent 2013


 

(mellow electronic music) >> Welcome back, we're here live in Las Vegas. This is SiliconANGLE and Wikibon's theCUBE, our flagship program. We go out to the events, extract the signal from the noise. We are live in Las Vegas at Amazon Web Services re:Invent conference, about developers, large-scale cloud, big data, the future. I'm John Furrier, the founder of SiliconANGLE. I'm joined by co-host, Dave Vellante, co-founder of Wikibon.org, and our guest is James Hamilton, VP and Distinguished Engineer at Amazon Web Services. Welcome to theCUBE. >> Well thank you very much. >> You're a tech athlete, certainly in our book, is a term we coined, because we love to use sports analogies You're kind of the cutting edge. You've been the business and technology innovating for many years going back to the database days at IBM, Microsoft, and now Amazon. You gave a great presentation at the analyst briefing. Very impressive. So I got to ask you the first question, when did you first get addicted to the notion of what Amazon could be? When did you first taste the Cool-Aide? >> Super good question. Couple different instances. One is I was general manager of exchange hosts and services and we were doing a decent job, but what I noticed was customers were loving it, we're expanding like mad, and I saw opportunity to improve by at least a factor of two I'm sorry, 10, it's just amazing. So that was a first hint that this is really important for customers. The second one was S3 was announced, and the storage price pretty much froze the whole industry. I've worked in storage all my life, I think I know what's possible in storage, and S3 was not possible. It was just like, what is this? And so, I started writing apps against it, I was just blown away. Super reliable. Unbelievably priced. I wrote a fairly substantial app, I got a bill for $7. Wow. So that's really the beginnings of where I knew this was going to change the world, and I've been, as you said, addicted to it since. >> So you also mentioned some stats there. We'll break it down, 'cause we love to talk about the software defined data center, which is basically not even at the hype stage yet. It's just like, it's still undefined, but software virtualization, network virtualization really is pushing that movement of the software focus, and that's essentially you guys are doing. You're talking about notifications and basically it's a large-scale systems problem. That you guys are building a global operating system as Andy Jassy would say. Well, he didn't say that directly, he said internet operating system, but if you believe that APIs are critical services. So I got to ask you that question around this notion of a data center, I mean come on, nobody's really going to give up their data center. It might change significantly, but you pointed out the data center costs are in the top three order, servers, power circulation systems, or cooling circulation, and then actual power itself. Is that right, did I get that right? >> Pretty close, pretty close. Servers dominate, and then after servers if you look at data centers together, that's power, cooling, and the building and facility itself. That is the number two cost, and the actual power itself is number three. >> So that's a huge issue. When we talk like CIOs, it's like can you please take the facility's budget off my back? For many reasons, one, it's going to be written off soon maybe. All kinds of financial issues around-- >> A lot of them don't see it, though, which is a problem. >> That is a problem, that is a problem. Real estate season, and then, yes. >> And then they go, "Ah, it's not my problem" so money just flies out the window. >> So it's obviously a cost improvement for you. So what are you guys doing in that area and what's your big ah-ha for the customers that you walk in the door and say, look, we have this cloud, we have this system and all those headaches can be, not shifted, or relieved if you will, some big asprin for them. What's the communication like? What do you talk to them about? >> Really it depends an awful lot on who it is. I mean, different people care about different things. What gets me excited is I know that this is the dominate cost of offering a service is all of this muck. It's all of this complexity, it's all of this high, high capital cost up front. Facility will run 200 million before there's servers in it. This is big money, and so from my perspective, taking that way from most companies is one contribution. Second contribution is, if you build a lot of data centers you get good at it, and so as a consequence of that I think we're building very good facilities. They're very reliable, and the costs are plummeting fast. That's a second contribution. Third contribution is because... because we're making capacity available to customers it means they don't have to predict two years in advance what they're going to need, and that means there's less wastage, and that's just good for the industry as a whole. >> So we're getting some questions on our crowd chat application. If you want to ask a question, ask him anything. It's kind of like Reddit. Go to crowdchat.net/reinvent. The first question came in was, "James, when do you think ARM will be in the data center?" >> Ah ha, that's a great question. Well, many people know that I'm super excited about ARM. It's early days, the reason why I'm excited is partly because I love seeing lots of players. I love seeing lots of innovation. I think that's what's making our industry so exciting right now. So that's one contribution that ARM brings. Another is if you look at the history of server-side computing, most of the innovation comes from the volume-driven, usually on clients first. The reason why X86 ended up in such a strong position is so many desktops we running X86 processors and as a consequence it became a great server processor. High R&D flow into it. ARM is in just about every device that everyone's carrying around. It's almost every disk drive, it's just super broadly deployed. And whenever you see a broadly deployed processor it means there's an opportunity to do something special for customers. I think it's good for the industry. But in a precise answer to your question, I really don't have one right now. It's something that we're deeply interested in and investigating deeply, but at this point it hasn't happened yet, but I'm excited by it. >> Do you think that... Two lines of questioning here. One is things that are applicable to AWS, other's just your knowledge of the industry and what you think. We talked about that yesterday with OCP, right? >> Yep. >> Not a right fit for us, but you applaud the effort. We should talk about that, too, but does splitting workloads up into little itty, bitty processors change the utilization factor and change the need for things like virtualization, you know? What do you think? >> Yeah, it's a good question. I first got excited about the price performance of micro-servers back in 2007. And at that time it was pretty easy to produce a win by going to a lower-powered processor. At that point memory bandwidth wasn't as good as it could be. It was actually hard on some workloads to fully use a processor. Intel's a very smart company, they've done great work on improving the memory bandwidth, and so today it's actually harder to produce a win, and so you kind of have workloads in classes. At the very, very high end we've got database workloads. They really love single-threaded performance, and performance really is king, but there are lots of highly parallel workloads where there's an opportunity for a big gain. I still think virtualization is probably something where the industry's going to want to be there, just because it brings so many operational advantages. >> So I got to ask the question. Yesterday we had Jason Stowe on, CEO of Cycle Computing, and he had an amazing thing that he did, sorry, trumping it out kids say, but it's not new to you, but it's new to us. He basically created a supercomputer and spun up hundreds of thousands of cores in 30 minutes, which is like insane, but he did it for like 30 grand. Which would've cost, if you try to provision it to the TUCO calculator or whatever your model, it'd be months and years, maybe, and years. But the thing that he said I want to get your point on and I'm going to ask you questions specifically on is, Spot instances were critical for him to do that, and the creativity of his solutions, so I got to ask you, did you see Spot pricing instances being a big deal, and what impact has that done to AWS' vision of large scale? >> I'm super excited by Spot. In fact, it's one of the reasons I joined Amazon. I went through a day of interviews, I met a bunch of really smart people doing interesting work. Someone probably shouldn't have talked to me about Spot because it hadn't been announced yet, and I just went, "This is brilliant! "This is absolutely brilliant!" It's taking the ideas from financial markets, where you've got high-value assets, and saying why don't we actually sell it off, make a market on the basis of that and sell it off? So two things happen that make Spot interesting. The first is an observation up front that poor utilization is basically the elephant in the room. Most folks can't use more than 12% to 15% of their overall server capacity, and so all the rest ends up being wasted. >> You said yesterday 30% is outstanding. It's like have a party. >> 30% probably means you're not measuring it well. >> Yeah, you're lying. >> It's real good, yeah, basically. So that means 70% or more is wasted, it's a crime. And so the first thing that says is, that one of the most powerful advertisements for cloud computing is if you bring a large number of non-correlated workloads together, what happens is when you're supporting a workload you've got to have enough capacity to support the peak, but you only get to monetize the average. And so as the peak to average gets further apart, you're wasting more. So when you bring a large number of non-correlated workloads together what happens is it flattens out just by itself. Without doing anything it flattens out, but there's still some ups and downs. And the Spot market is a way of filling in those ups and downs so we get as close to 100%. >> Is there certain workloads that fit the spot, obviously certain workloads might fit it, but what workloads don't fit the Spot price, because, I mean, it makes total sense and it's an arbitrage opportunity for excess capacity laying around, and it's price based on usage. So is there a workload, 'cause it'll be torrent up, torrent down, I mean, what's the use cases there? >> Workloads that don't operate well in an interrupted environment, that are very time-critical, those workloads shouldn't be run in Spot. It's just not what the resource is designed for. But workloads like the one that we were talking to with Cycle Computing are awesome, where you need large numbers of resources. If the workload needs to restart, that's absolutely fine, and price is really the focus. >> Okay, and question from crowd chat. "Ask James what are his thoughts "on commodity networking and merchant silicon." >> I think an awful lot about that. >> This guy knows you. (both laughing) >> Who's that from? >> It's your family. >> Yeah, exactly! >> They're watching. >> No, network commoditization is a phenomenal thing that the whole industry's needed that for 15 years. We've got a vertical ecosystem that's kind of frozen in time. Vertically-integrated ecosystem kind of frozen in time. Costs everywhere are falling except in networking. We just got to do something, and so it's happening. I'm real excited by that. It's really changing the Amazon business and what we can do for customers. >> Let's talk a little bit about server design, because I was fascinated yesterday listening to you talk how you've come full circle. Over the last decade, right, you started with what's got to be stripped down, basic commodity and now you're of a different mindset. So describe that, and then I have some follow-up questions for you. >> Yeah, I know what you're alluding to. Is years ago I used to argue you don't want hardware specialization, it's crazy. It's the magic's in software. You want to specialize software running on general-purpose processors, and that's because there was a very small number of servers out there, and I felt like it was the most nimble way to run. However today, in AWS when we're running ten of thousands of copies of a single type of server, hardware optimizations are absolutely vital. You end up getting a power-performance advantage at 10X. You can get a price-performance advantage that's substantial and so I've kind of gone full circle where now we're pulling more and more down into the hardware, and starting to do hardware optimizations for our customers. >> So heat density is a huge problem in data centers and server design. You showed a picture of a Quanta package yesterday. You didn't show us your server, said "I can't you ours," but you said, "but we blow this away, "and this is really good." But you describe that you're able to get around a lot of those problems because of the way you design data centers. >> Yep. >> Could you talk about that a little bit? >> Sure, sure, sure. One of the problems when you're building a server it could end up anywhere. It could end up in a beautiful data center that's super well engineered. It could end up on the end of a row on a very badly run data center. >> Or in a closet. >> Or in a closet. The air is recirculating, and so the servers have to be designed with huge headroom on cooling requirements, and they have to be able to operate in any of those environments without driving warranty costs for the vendors. We take a different approach. We say we're not going to build terrible data centers. We're going to build really good data centers and we're going to build servers that exploit the fact those data centers are good, and what happens is more value. We don't have to waste as much because we know that we don't have to operate in the closet. >> We got some more questions coming here by the way. This is awesome. This ask me anything crowd chat thing is going great. We got someone, he's from Nutanix, so he's a geek. He's been following your career for many years. I got to ask you about kind of the future of large-scale. So Spot, in his comment, David's comment, Spot instances prove that solutions like WMare's distributed power management are not valuable. Don't power off the most expensive asset. So, okay, that brings up an interesting point. I don't want to slam on BMWare right now, but I just wanted to bring to the next logical question which is this is a paradigm shift. That's a buzz word, but really a lot's happening that's new and innovative. And you guys are doing it and leading. What's next in the large-scale paradigm of computing and computer science? On the science-side you mentioned merchant silicon. Obviously that's, the genie's out of the bottle there, but what's around the corner? Is it the notifications at the scheduling? Was it virtualization, is it compiler design? What are some of the things that you see out on the horizon that you got your eyes on? >> That's interesting, I mean. I've got, if you name your area, and I'll you some interesting things happening in the area, and it's one of the cool things of being in the industry right now. Is that 10 years ago we had a relatively static, kind of slow-pace. You really didn't have to look that far ahead, because of anything was coming you'd see it coming for five years. Now if you ask me about power distribution, we've got tons of work going on in power distribution. We're researching different power distribution topologies. We're researching higher voltage distribution, direct current distribution. Haven't taken any of those steps yet, but we're were working in that. We've got a ton going on in networking. You'll see an announcement tomorrow of a new instance type that is got some interesting characteristics from a networking perspective. There's a lot going on. >> Let's pre-announce, no. >> Gary's over there like-- >> How 'about database, how 'about database? I mean, 10 years ago, John always says database was kind of boring. You go to a party say, oh welcome to database business, oh yeah, see ya. 25 years ago it was really interesting. >> Now you go to a party is like, hey ah! Have a drink! >> It a whole new ballgame, you guys are participating. Google Spanner is this crazy thing, right? So what are your thoughts on the state of the database business today, in memory, I mean. >> No, it's beautiful. I did a keynote at SIGMOD a few years ago and what I said is that 10 years ago Bruce Linsey, I used to work with him in the database world, Bruce Linsey called it polishing the round ball. It's just we're making everything a little, tiny bit better, and now it's fundamentally different. I mean what's happening right now is the database world, every year, if you stepped out for a year, you wouldn't recognize it. It's just, yeah, it's amazing. >> And DynamoDB has had rapid success. You know, we're big users of that. We actually built this app, crowd chat app that people are using on Hadoop and Hbase, and we immediately moved that to DynamoDB and your stack was just so much faster and scalable. So I got to ask you the-- >> And less labor. >> Yeah, yeah. So it's just been very reliable and all the other goodness of the elastic B socket and SQS, all that other good stuff we're working with node, et cetera So I got to ask you, the area that I want your opinion around the corner is versioning control. So at large-scale one of the challenges that we have is as we're pushin' new code, making sure that the integrated stack is completely updated and synchronized with open-source projects. So where does that fit into the scaling up? 'Cause at large scale, versioning control used to be easy to manage, but downloading software and putting in patches, but now you guys handle all that at scale. So that, I'm assuming there's some automation involved, some real tech involved, but how are you guys handling the future of making sure the code is all updated in the stack? >> It's a great question. It's super important from a security perspective that the code be up to date and current. It's super important from a customer perspective and you need to make sure that these upgrades are just non-disruptive. One customer, best answer I heard was yesterday from a customer was on a panel, they were asked how did they deal with Amazon's upgrades, and what she said is, "I didn't even know when they were happening. "I can't tell when they're happening." Exactly the right answer. That's exactly our goal. We monitor the heck out of all of our systems, and our goal, and boy we take it seriously, is we need to know any issue before a customer knows it. And if you fail on that promise, you'll meet Andy really quick. >> So some other paradigm questions coming in. Floyd asks, "Ask James what his opinion of cloud brokerage "companies such as Jamcracker or Graviton. "Do they have a place, or is it wrong thinking?" (James laughs) >> From my perspective, the bigger and richer the ecosystem, the happier our customers all are. It's all goodness. >> It's Darwinism, that's the answer. You know, the fit shall survive. No, but I think that brings up this new marketplace that Spot pricing came out of the woodwork. It's a paradigm that exists in other industries, apply it to cloud. So brokering of cloud might be something, especially with regional and geographical focuses. You can imagine a world of brokering. I mean, I don't know, I'm not qualified to answer that. >> Our goal, honestly, is to provide enough diversity of services that we completely satisfy customer's requirements, and that's what we intend to do. >> How do you guys think about the make versus buy? Are you at a point now where you say, you know what, we can make this stuff for our specific requirements better than we can get it off the shelf, or is that not the case? >> It changes every few minutes. It really does. >> So what are the parameters? >> Years ago when I joined the company we were buying servers from OEM suppliers, and they were doing some tailoring for our uses. It's gotten to the point now where that's not the right model and we have our own custom designs that are being built. We've now gotten to the point where some of the components in servers are being customized for us, partly because we're driving sufficient volume that it's justified, and partly because the partners that the component suppliers are happy to work with us directly and they want input from us. And so it's every year it's a little bit more specialized and that line's moving, so it's shifting towards specialization pretty quickly. >> So now I'm going to be replaced by the crowd, gettin' great questions, I'm going to be obsolete! No earbud, I got it right here. So the question's more of a fun one probably for you to answer, or just kind of lean back and kind of pull your hair out, but how the heck does AWS add so much infrastructure per day? How do you do it? >> It's a really interesting question. I kind of know how much infrastructure, I know abstractly how much infrastructure we put out every day, but when you actually think about this number in context, it's mind boggling. So here's the number. Here's the number. Every day, we deploy enough servers to support Amazon when it was a seven billion dollar company. You think of how many servers a seven billion dollar e-commerce company would actually require? Every day we deploy that many servers, and it's just shocking to me to think that the servers are in the logistics chain, they're being built, they're delivered to the appropriate data centers, there's back positions there, there's networking there, there's power there. I'm actually, every day I'm amazed to be quite honest with you. >> It's mind-boggling. And then for a while I was there, okay, wait a minute. Would that be Moors' Law? Uh no, not even in particular. 'Cause you said every day. Not every year, every day. >> Yeah, it really is. It's a shocking number and one, my definition of scale changes almost every day, where if you look at the number of customers that are trusting with their workloads today, that's what's driving that growth, it's phenomenal! >> We got to get wrapped up, but I got to ask the Hadoob World SQL over Hadoob question solutions. Obviously Hadoob is great, great for storing stuff, but now you're seeing hybrids come out. Again this comes back down to the, you can recognize the database world anymore if you were asleep for a year. So what's your take on that ecosystem? You guys have a lasting map or a decent a bunch of other things. There's some big data stuff going on. How do you, from a database perspective, how do you look at Hadoob and SQL over Hadoob? >> I personally love 'em both, and I love the diversity that's happening in the database world. There's some people that kind of have a religion and think it's crazy to do anything else. I think it's a good thing. Map reduce is particularly, I think, is a good thing, because it takes... First time I saw map reduce being used was actually a Google advertising engineer. And what I loved about his, I was actually talking to him about it, and what I loved is he had no idea how many servers he was using. If you ask me or anyone in the technology how many servers they're using, they know. And the beautiful thing is he's running multi-thousand node applications and he doesn't know. He doesn't care, he's solving advertising problems. And so I think it's good. I think there's a place for everything. >> Well my final question is asking guests this show. Put the bumper sticker on the car leaving re:Invent this year. What's it say? What does the bumper sticker say on the car? Summarize for the folks, what is the tagline this year? The vibe, and the focus? >> Yeah, for me this was the year. I mean, the business has been growing but this is the year where suddenly I'm seeing huge companies 100% dependent upon AWS or on track to be 100% dependent upon AWS. This is no longer an experiment, something people want to learn about. This is real, and this is happening. This is running real businesses. So it's real, baby! >> It's real baby, I like, that's the best bumper... James, distinguished guest now CUBE alum for us, thanks for coming on, you're a tech athlete. Great to have you, great success. Sounds like you got a lot of exciting things you're working on and that's always fun. And obviously Amazon is killing it, as we say in Silicon Valley. You guys are doing great, we love the product. We've been using it for crowd chats. Great stuff, thanks for coming on theCUBE. >> Thank you. >> We'll be right back with our next guest after this short break. This is live, exclusive coverage with siliconANGLE theCUBE. We'll be right back.

Published Date : Nov 14 2013

SUMMARY :

I'm John Furrier, the founder of SiliconANGLE. So I got to ask you the first question, and the storage price pretty much froze the whole industry. So I got to ask you that question around and the actual power itself is number three. can you please take the facility's budget off my back? A lot of them don't see it, That is a problem, that is a problem. so money just flies out the window. So what are you guys doing in that area and that's just good for the industry as a whole. "James, when do you think ARM will be in the data center?" of server-side computing, most of the innovation and what you think. and change the need for things and so you kind of have workloads in classes. and the creativity of his solutions, so I got to ask you, and so all the rest ends up being wasted. It's like have a party. And so as the peak to average and it's an arbitrage opportunity that's absolutely fine, and price is really the focus. Okay, and question from crowd chat. This guy knows you. that the whole industry's needed that for 15 years. Over the last decade, right, you started with It's the magic's in software. because of the way you design data centers. One of the problems when you're The air is recirculating, and so the servers I got to ask you about kind of the future of large-scale. and it's one of the cool things You go to a party say, oh welcome of the database business today, in memory, I mean. is the database world, every year, So I got to ask you the-- So at large-scale one of the challenges that we have is that the code be up to date and current. So some other paradigm questions coming in. From my perspective, the bigger and richer the ecosystem, It's Darwinism, that's the answer. diversity of services that we completely It really does. the component suppliers are happy to work with us So the question's more of a fun one that the servers are in the logistics chain, 'Cause you said every day. where if you look at the number of customers the Hadoob World SQL over Hadoob question solutions. and think it's crazy to do anything else. Summarize for the folks, what is the tagline this year? I mean, the business has been growing It's real baby, I like, that's the best bumper... This is live, exclusive coverage

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

Erik KaulbergPERSON

0.99+

2017DATE

0.99+

Jason ChamiakPERSON

0.99+

Dave VolontePERSON

0.99+

Dave VellantePERSON

0.99+

RebeccaPERSON

0.99+

Marty MartinPERSON

0.99+

Rebecca KnightPERSON

0.99+

JasonPERSON

0.99+

JamesPERSON

0.99+

AmazonORGANIZATION

0.99+

DavePERSON

0.99+

Greg MuscurellaPERSON

0.99+

ErikPERSON

0.99+

MelissaPERSON

0.99+

MichealPERSON

0.99+

Lisa MartinPERSON

0.99+

Justin WarrenPERSON

0.99+

Michael NicosiaPERSON

0.99+

Jason StowePERSON

0.99+

Sonia TagarePERSON

0.99+

AysegulPERSON

0.99+

MichaelPERSON

0.99+

PrakashPERSON

0.99+

JohnPERSON

0.99+

Bruce LinseyPERSON

0.99+

Denice DentonPERSON

0.99+

Aysegul GunduzPERSON

0.99+

RoyPERSON

0.99+

April 2018DATE

0.99+

August of 2018DATE

0.99+

MicrosoftORGANIZATION

0.99+

Andy JassyPERSON

0.99+

IBMORGANIZATION

0.99+

AustraliaLOCATION

0.99+

EuropeLOCATION

0.99+

April of 2010DATE

0.99+

Amazon Web ServicesORGANIZATION

0.99+

JapanLOCATION

0.99+

Devin DillonPERSON

0.99+

National Science FoundationORGANIZATION

0.99+

ManhattanLOCATION

0.99+

ScottPERSON

0.99+

GregPERSON

0.99+

Alan ClarkPERSON

0.99+

Paul GalenPERSON

0.99+

GoogleORGANIZATION

0.99+

JamcrackerORGANIZATION

0.99+

Tarek MadkourPERSON

0.99+

AlanPERSON

0.99+

AnitaPERSON

0.99+

1974DATE

0.99+

John FerrierPERSON

0.99+

12QUANTITY

0.99+

ViaWestORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

2015DATE

0.99+

James HamiltonPERSON

0.99+

John FurrierPERSON

0.99+

2007DATE

0.99+

Stu MinimanPERSON

0.99+

$10 millionQUANTITY

0.99+

DecemberDATE

0.99+