Mike Thompson & Ali Zafar | AWS re:Invent 2022
(intro upbeat music) >> Hello everyone, and welcome to our continued coverage of AWS re:Invent here on theCUBE. My name is Savannah Peterson, and I am very excited about the conversation coming up. Not only are we joined by two brilliant minds in the cloud, one of them happens to be a CUBE alumni. Please welcome Mike from AMD and Ali from Dropbox. Ali, welcome back to the show, how you been? >> Thanks, Savannah. I'm doing great and really excited to be back on theCUBE. It was a great discussion last time, and I'm really excited for both re:Invent and also to see how this video turns out. >> Hey, that makes two of us, and probably three of us. How are you doing today, Mike? >> Doing great. It's really nice to be getting back to in-person events again and to be out solving problems with customers and partners like Dropbox. >> I know, isn't it? We've all missed each other. It was a lonely couple of years. Mike, I'm going to open it up with you. I'm sure a lot of people are curious. What's new at AMD? >> Well, there's a lot that's new at AMD, so I'll share a subset of what's new and what we've been working on. We've expanded our global coverage in Amazon EC2 with new regions and instance types, so users can deploy any application pretty much anywhere AWS has a presence. Our partner ecosystem for solutions and services has expanded quite a bit. We're currently focused on enabling partners and solutions that focus on cloud cost optimization, modernizing infrastructure, and pushing performance to the limit, especially for HPC. But the biggest buzz, of course, is around AMD's new fourth generation of our EPYC CPU, Genoa. It's the world's fastest data center CPU, with transformative energy efficiency, and that's a really interesting combination: highest performance and most efficient. So on launch day, AWS announced their plans to roll out AMD EPYC Genoa processor-based EC2 instances. So we're pretty excited about that, and that's what we'll be working on in the near term.
>> Wow, that's a big deal and certainly not a casual announcement. Obviously, power and efficiency are hot topics here at re:Invent, but looking at the greater impact on the planet is a big conversation we've been having here as well. So this is exciting and timely, and congratulations to you and the team on all that seems to be going on. Ali, what's going on at Dropbox? >> Yeah, thanks, Savannah. Q3 2022 was actually a very strong quarter for Dropbox, during a very difficult macroeconomic backdrop. Our focus has continued to be on innovation, and this is around both new products and also driving multi-product adoption, which is paying a lot of dividends for us: essentially, bringing products like Dropbox Sign, DocSend, Capture, and other exciting products to our customers. On the infra side, it's all about how do we scale our infrastructure to meet the business needs, right? How do we keep up with the accelerated growth during the pandemic, and also leverage both AMD and AWS for investments in our public cloud? >> Let's talk about the cloud a bit. You are both cloud experts, and I'm glad that you brought that up. We'll keep it there with Ali. When, why, and how should users leverage public cloud? >> Yeah, so Dropbox is hybrid cloud, which means we are running applications both in private and public cloud, and we're in a unique position to leverage the best of both worlds. And Savannah, this is a decision we continue to reevaluate on a regular basis. There are really three key factors that come into play here. First is scale: are we operating at a scale where customization is cost-efficient for us? Next is uniqueness: is our workload unique compared to what the public cloud supports? And lastly, innovation: do we have the expertise to innovate faster than public cloud or not? So based on these three key factors, we try and balance all of them and then come up with the best option for us at Dropbox.
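Ali's three-factor test (scale, uniqueness, innovation) can be sketched as a toy decision helper. This is purely illustrative: the function, its inputs, and the majority-vote rule are hypothetical, not Dropbox's actual tooling or thresholds.

```python
def choose_placement(scale_ok: bool, unique_workload: bool, can_outpace_cloud: bool) -> str:
    """Toy version of the three-factor hybrid-cloud test:
    scale (is customization cost-efficient at our volume?),
    uniqueness (does the public cloud already support this workload well?),
    innovation (can we move faster than the public cloud here?)."""
    votes_for_private = sum([scale_ok, unique_workload, can_outpace_cloud])
    # If most factors favor customization, build it in the private cloud;
    # otherwise lean on the public cloud's managed offerings.
    return "private cloud" if votes_for_private >= 2 else "public cloud"

# Magic Pocket-style workload: huge scale, unique storage stack, deep in-house expertise
print(choose_placement(True, True, True))    # private cloud
# Early AI/ML experimentation: modest scale, commodity workload
print(choose_placement(False, False, False)) # public cloud
```

The point of the sketch is only that the answer is re-evaluated per workload, not chosen once for the whole company.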
And kind of elaborating over here: for things like international storage, we're leveraging public cloud; for things like AI and ML, we're leveraging public cloud; but when we talk about Magic Pocket, which is our multi-exabyte storage system, that has the scale, which is why we are doing that on our own private cloud. >> Wow, I think you just gave everybody a fantastic framework for thinking about their decision matrix there, if nothing else. Mike, is there anything that you'd like to add to that? Anything that AMD considers when contemplating public cloud versus private? >> Yeah, so there's really three main drivers that I see when users consider when, why, and how they should leverage public cloud: establishing a global footprint, accelerating product release cycles, and efficiently rightsizing infrastructure. So customers looking to establish a global footprint often turn to public cloud deployments to quickly reach their clients and workforces around the world, most importantly with minimal capital expense. I understand Dropbox uses public cloud to establish their global presence, scaling out from their core data centers in North America. And then a lot of industries have tremendous pressure to accelerate product release cycles. With public cloud, organizations can immediately deploy new applications without a long site and hardware acquisition cycle, and without the associated ongoing maintenance and operational overhead. And the third thing is that customers that need to rightsize and dynamically scale their infrastructure and application deployments are drawn to public cloud. For example, customers that have cyclical compute or application load peaks can efficiently deploy in the cloud without overdeploying their on-prem infrastructure for the rest of the year; during those off-peak times, that infrastructure idle time is a waste of resources and OPEX. So scalable rightsizing draws a lot of users to cloud deployment. >> Yeah, wow.
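Mike's rightsizing point can be made concrete with a quick cost comparison: on-prem capacity must be provisioned for the annual peak, while cloud capacity can track the load month by month. The numbers below are made up purely for illustration.

```python
def onprem_cost(peak_units: int, unit_month_cost: float, months: int = 12) -> float:
    # On-prem must be provisioned for peak capacity all year, even off-peak.
    return peak_units * unit_month_cost * months

def cloud_cost(monthly_load_units, unit_month_cost: float) -> float:
    # Cloud capacity can be scaled to match the actual load each month.
    return sum(load * unit_month_cost for load in monthly_load_units)

# Hypothetical seasonal load: ten months at 20 units, two peak months at 100.
load = [20] * 10 + [100] * 2
print(onprem_cost(100, 1.0))  # 1200.0 -- paying for peak year-round
print(cloud_cost(load, 1.0))  # 400.0  -- paying only for what runs
```

The gap between the two figures is exactly the "infrastructure idle time" Mike describes as wasted OPEX.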
I think there's a lot of factors to consider, but also it seems like a pretty streamlined process for navigating that, or at least you two both made it sound that way. Another hot topic in the space right now is security. Mike, let's start with you a little bit. What are the most important security issues for AMD right now that you can talk about? >> Yeah, sure. So, well, first of all, AWS provides a wide variety of really good security services to protect customers that are working in the cloud. From a processor technology perspective, there's three main security aspects to consider, two of which are common practice today and one of which AMD brings significant differentiation and value to. The first two are protecting data at rest and data in transit, and these two are part of the prevalent security models of today. Where AMD provides distinct value and differentiation is in protecting data in use. So EPYC Milan and Genoa processors support a function called SEV-SNP, and this enables users and their applications to reside within their own cryptographic context and environment, with data integrity protection, to accomplish what's called comprehensive confidential computing. AMD's confidential computing solution is hardware-based, so it's easy to leverage. There's no code rewrite required, unlike comparable solutions that are software-based, which require recoding to a proprietary SDK and come with a significant performance trade-off. So with EPYC processors, you can protect your data at rest, in transit, and most importantly, in use. >> Everybody needs to protect their data everywhere it is, so I love that. That's fantastic to hear, and I'm sure it gives your customers a lot of confidence. What about over at Dropbox? What security issues are you facing, Ali? >> Yeah, so the first company value at Dropbox is actually being worthy of trust, and what this really means from a security perspective is: how do we keep all of our users' content safe?
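As a side note on Mike's data-in-use point: on Linux, AMD's memory-encryption capabilities (SME, SEV, SEV-ES, SEV-SNP) surface as CPU feature flags in `/proc/cpuinfo` when the processor and kernel support them. A minimal sketch of checking for them follows; the flag names are standard Linux conventions, but whether a given instance or host exposes them depends entirely on the environment, so this is illustrative rather than a supported AMD or AWS tool.

```python
def sev_features(cpuinfo_text: str) -> set:
    """Return which AMD memory-encryption flags appear in /proc/cpuinfo text.
    Linux reports 'sme', 'sev', 'sev_es', and (on newer kernels) 'sev_snp'
    as CPU flags when the hardware and kernel support them."""
    known = {"sme", "sev", "sev_es", "sev_snp"}
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found |= known & set(line.split(":", 1)[1].split())
    return found

# Example against a trimmed flags line (real output lists hundreds of flags):
sample = "flags\t\t: fpu vme sse2 sme sev sev_es"
print(sorted(sev_features(sample)))  # ['sev', 'sev_es', 'sme']
```

On a real machine you would pass `open("/proc/cpuinfo").read()`; an empty result simply means the flags are not exposed in that environment.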
And this means keeping everything, down to all of the infrastructure hardware, secure. So partnering with AMD, which is one of our strongest partners out there, the new security features that AMD has in the hardware are critical for us, and we are able to take advantage of some of these best security practices within our compute infrastructure by leveraging AMD's secure chip architecture. >> How important, you just touched on it a little bit, and I want to ask, how important are partnerships like the one you have with each other as you innovate at scale? Ali, you're nodding, I'm going to go to you first. >> Yeah, so like I mentioned, the partnership with AMD is one of the strongest that we have, and it goes beyond a regular partnership where it's just buy and sell. We talk about technology together, we talk about innovation together, we talk about partnership together, and as I look at our hybrid cloud strategy, we would not be able to get the benefits in terms of efficiency, scale, reliability, or performance without having a strong partner like AMD. >> That's awesome. Mike, anything you want to add there? >> I'd reiterate some of what Ali had to say. One of my favorite parts about my job is getting together with partners and customers to figure out how to optimize their applications and deployments around the world to get the most efficient use of the cloud infrastructure for servers that are based on AMD technology. In many cases, we can find 10% or better performance or cost optimization by working closely with partners like Dropbox. And then in addition, if we keep in lockstep together to look at what's coming on the roadmap, by the time the latest and greatest technology is finally deployed, our customers and our partners are ready to take advantage of it.
So that's the fun part of the job, and I really appreciate Dropbox's cooperation, optimizing their infrastructure and using AMD products. >> Well, what a synergistic relationship of mutual admiration and support. We love to hear it here in the tech world. Mike, last question for you. What's next for AMD? >> Well, heading into 2023, considering the current challenging macroeconomic environment and geopolitical instability, doing more with less will be top of mind for many CFOs and CEOs in 2023, and AMD can help accomplish that. AMD EPYC processors' leadership performance and lower EC2 retail costs can help users reduce costs without impacting performance, or, on the flip side, they can scale capacity without increasing costs. And because of EPYC's higher core counts, really high core density, applications can be deployed with fewer servers or smaller instances. That has both economic and environmental benefits, reducing usage costs as well as environmental impact, and it allows customers to optimize their application and infrastructure spend. And then the second thing that I've seen over the last couple of years, and I see this trajectory continuing, is that increased geographic distribution of our colleagues and workforces is here to stay; people work from everywhere. Modern cross-platform collaboration platforms that bring teams, tools, and content together have a really important role to play in enabling that new, more flexible style of working, and those tools need to be really agile and easy to use. I think Dropbox is really well positioned to enable this new style of working, and AMD is really happy to work closely with Dropbox to enable these modern work styles, on-premises, hybrid, and fully in the public cloud. >> Well, it sounds like a very exciting and, optimistically, bright future for you all at AMD. We love to hear that here at theCUBE. Ali, what about you? What is 2023 going to hold for Dropbox?
>> Yeah, so I think we're going to continue on this journey of transformation, where our focus is on new products and also multi-product adoption. And from a cloud perspective, how do we continue to evolve our hybrid cloud so that it remains a competitive advantage for our business and also for our customers? I think right now, Savannah, we're in a very unique position to utilize some of the best AMD technology that's out there, both on-premises and in the cloud. The AMD EPYC processors deliver the performance that we need for our hybrid cloud, and we want to continue to leverage these in the public cloud as well, through the EC2 instances that are powered by AMD. So overall, Dropbox is looking forward to continuing to evaluate some of AMD's Genoa CPUs that are coming out, but also to growing our EC2 footprint powered by AMD in the long run. >> Fantastic. Well, it sounds like this second showing here on theCUBE is just the tee-up for your third, and we'll definitely have to have Mike back on for a second time around to hear how things are going. Thank you both so much for taking the time today to join me here. Mike and Ali, it was fantastic getting to chat with you, and thank you to our audience for tuning into theCUBE's special coverage of AWS re:Invent. My name's Savannah Peterson, and I hope we can learn together soon. (outro upbeat music)
Wurden & Bharadwaj | Accelerating Transformation with VMC On AWS
(intro music) >> Welcome to this CUBE showcase, Accelerating Business Transformation with VMware Cloud on AWS. It's a solution innovation conversation with two great guests: Fred Wurden, VP of Commercial Services at AWS, and Narayan Bharadwaj, who's the VP and general manager of Cloud Solutions at VMware. Gentlemen, thanks for joining me on the showcase. >> Great to be here. >> Hey, thanks for having us on. It's a great topic. >> You know, we've been covering this VMware Cloud on AWS since the launch, going back, and it's been amazing to watch the evolution: from people saying, oh, it's the worst thing I've ever seen, what's this mean, we're kind of not really on board with the vision, but as it played out, as you guys had announced together, it did work out great for VMware, it did work out great for AWS, and it continues two years later. And I want to just get an update from you guys on where you see this going. Obviously multiple years in, where is the evolution of the solution as we are right now, coming off VMware Explore just recently and going into re:Invent, which is only a couple weeks away? It feels like tomorrow, but, you know, as we prepare, a lot going on. Where are we with the evolution of the solution? >> I mean, the first thing I want to say is, you know, October 2016 was a seminal moment in the history of IT, right? When Pat Gelsinger and Andy Jassy came together to announce this, and I think, John, you were there at the time... >> I was there. >> ...it was a great, great moment. We launched the solution in 2017, the year after that, at VMworld, back when we called it VMworld. I think we've gone from strength to strength. One of the things that has really mattered to us is we've learned from AWS also, in the process, this notion of working backwards. So we're really, really focused on customer feedback as we build the service offering, now five years old. Pretty remarkable journey. You know, in the first years we tried to get across all the regions; that was a big focus because there was so much demand for it. In the second year we started going really on enterprise-grade features. We invented this pretty awesome feature called stretched clusters, where you could stretch a vSphere cluster, using vSAN and NSX, across two AZs in the same region. Pretty phenomenal four nines of availability that applications started to get with that particular feature. And we kept moving forward: all kinds of integration with AWS Direct Connect, Transit Gateways, with our own advanced networking capabilities. You know, along the way, disaster recovery: we punched out two new services just focused on that. And then more recently we launched our Outposts partnership. We were up on the stage at re:Invent again with Pat and Andy announcing AWS Outposts and the VMware flavor of that, VMware Cloud on AWS Outposts. I think it's been significant growth in our federal sector as well, the FedRAMP certification more recently. So all in all, we're super excited. We're five years old, the customer momentum is really, really strong, and we are scaling the service massively across all geos and industries. >> That's a great update. And I think one of the things that you mentioned was the advantages you guys got from that relationship, and this has kind of been the theme for AWS since I can remember, from day one, Fred: you guys do the heavy lifting, as I always say, for the customers. Here VMware comes on board, takes advantage of AWS, and kind of just doesn't miss a beat, continues to move their workloads that everyone's using, you know, vSphere, and these are big workloads on AWS. What's the AWS perspective on this? How do you see it? >> Yeah, it's pretty fascinating to watch how fast customers can actually transform and move when you take the skill set that they're familiar with and the advanced capabilities that they've been using on-prem, and then overlay it on top of the AWS infrastructure that's evolving quickly and building out new hardware and new instances, which we'll talk about. But that combined experience between both of us on a jointly engineered solution, to bring the best security and the best features that really matter for those workloads, drives a lot of efficiency and speed for the customer. So it's been well received, and the partnership is stronger than ever from an engineering standpoint, from a business standpoint. And obviously it's been very interesting to look at just how we stay day one in terms of looking at new features, and responding to what customers want. So pretty excited about just seeing the transformation and the speed at which customers can move to VMC. >> Yeah, that's a great value proposition. We've been talking about that in context to anyone building on top of the cloud: they can have their own supercloud, as we call it, if you take advantage of all the CapEx and investment Amazon's made and AWS has made, and continues to make, in performance, IaaS and PaaS, all great stuff. I have to ask you guys both, as you see this going to the next level, what are some of the differentiations you see around the service compared to other options on the market? What makes it different? What's the combination? You mentioned jointly engineered; what are some of the key differentiators of the service compared to others? >> Yeah, I think one of the key things Fred talked about is this jointly engineered notion. Right from day one, we were the early adopters of the AWS Nitro platform, right, the reinvention of EC2, back five years ago. And so we've been, you know, having a very, very strong engineering partnership at that level. I think from a customer standpoint, you get the full software-defined data center: compute, storage, networking, on EC2 bare metal, across all regions. You can scale that elastically up and down. It's pretty phenomenal just having that consistency globally, right, on AWS EC2 global regions. Now, the other thing that's a real differentiator for us
customers tell us about is this whole notion of a managed service. And this was somewhat new to VMware: this undifferentiated heavy lifting, where customers have to provision, rack, and stack hardware, configure the software on top, and then upgrade the software and the security patches on top. So we took away all of that pain as customers transition to VMware Cloud on AWS. In fact, my favorite story from last year: when we were all going through the Log4j debacle, the industry was just going through that, right? The favorite proof point from customers was, before they could even raise this issue to us, we sent them a notification saying we had already patched all of your systems, no action from you. The customers were super thrilled. I mean, these are large banks, many other customers around the world, super thrilled they had to take no action for a pretty incredible industry challenge that we were all facing. >> That's a great point. You know, the whole managed service piece brings up the security, and you're kind of teasing at it, but, you know, there's always vulnerabilities that emerge when you're doing complex logic. And as you grow your solutions, there's more bits. You know, Fred, we were commenting before we came on camera: more bits than ever before, and at the physics layer too, as well as the software. So you never know when there's going to be a zero-day vulnerability out there. It just happens. We saw one with Fortinet this week. This came out of the woodwork, but moving fast on those patches is huge. This brings up the whole support angle. I wanted to ask you about how you guys are doing that as well, because to me, when we talk to customers on theCUBE about this, there was a real easy understanding of what the cloud means to them with VMware, now with AWS. But the question that comes up that we want to get more clarity on is: how do you guys handle the support together? >> Well, what's interesting about this is that it's done mutually. We have dedicated support teams on both sides that work together pretty seamlessly to make sure that whether there's an issue at any layer, including all the way up into the app layer, as you think about some of the other workloads like SAP, we'll go end to end and make sure that we support the customer regardless of where the particular issue might be for them. And on top of that, we look at where we're improving reliability as a first order of principle between both companies. So from an availability and reliability standpoint, it's top of mind, and no matter where the particular item might land, we're going to go help the customer resolve it. It works really well. >> On the VMware side, what's some of the feedback there? What are some of the updates? >> Yeah, I think, look, VMware owns and operates the service. We have a phenomenal back-end relationship with AWS. Customers call VMware for the service, for any issues, and then we have an awesome relationship with AWS on the back end for support issues, for any hardware issues, capacity management that we jointly do, right? All the hard problems that customers don't have to worry about. I think on the front end, we also have a really good group of solution architects across the companies that help to really explain the solution, do complex things like cloud migration, which is much, much easier with VMware Cloud on AWS. We're presenting that easy button to the public cloud in many ways. And so we have a whole technical audience across the two companies that are working with customers every single day. >> You know, you had mentioned a list here, some of the innovations. You mentioned the stretched clustering, you know, getting the geos working, advanced networking, disaster recovery, FedRAMP, public sector certifications, Outposts. All good; you guys are checking the boxes every year. You've got a good accomplishments list there on the VMware-AWS side here in this relationship. The question that I'm interested in is: what's next? What recent innovations are you doing, or investments are you making? What's on the list this year? What items will be next year? How do you see the new things, the list of innovations? People want to know what's next. They don't want to see stagnant growth here; they want to see more action, you know, as cloud kind of continues to scale and modern applications, cloud native, you're seeing more and more containers, more and more, you know, CI/CD pipelining with modern apps, putting more pressure on the system. What's new? What are the new innovations? >> Absolutely, and I think as a five-year-old service offering, innovation is top of mind for us every single day. So just to call out a few recent innovations that we announced in San Francisco at VMware Explore: first of all, our new platform, i4i.metal. It's Ice Lake based, it's pretty awesome, it's the latest and greatest, all the speeds and feeds that you would expect from VMware and AWS at this point in our relationship. We announced two different storage options, this notion of working from customer feedback, allowing customers even more price reductions: really take off that storage and park it externally, right, and, you know, separate that from compute. So two different storage offerings there. One is with AWS FSx for NetApp ONTAP, which brings our NetApp partnership as well into the equation and really gets that NetApp base really excited about this offering as well. And the second storage offering is called VMware Cloud Flex Storage, VMware's own managed storage offering. Beyond that, we've done a lot of other innovations as well. I really wanted to talk about VMware Cloud Flex Compute, where previously customers could only scale by hosts, you know, a host is 36 to 48 cores, give or take, but with VMware Cloud Flex Compute, we are now allowing this notion of a resource-defined compute model, where customers can just get exactly the vCPU, memory, and storage that maps to the applications, however small they might be.
this notion of granularity is really a big innovation that that we are launching in the market this year and then last but not least topper ransomware of course it's a Hot Topic in the industry we are seeing many many customers ask for this we are happy to announce a new ransomware recovery with our VMware Cloud VR solution a lot of innovation there and the way we are able to do machine learning and make sure the workloads that are covered from snapshots backups are actually safe to use so there's a lot of differentiation on that front as well a lot of networking Innovations with project North Star the ability to have layer 4 through layer seven uh you know new SAS services in that area as well keep in mind that the service already supports managed kubernetes for containers it's built in to the same clusters that have virtual machines and so this notion of a single service with a great TCO for VMS and containers is sort of at the heart of our option the networking side certainly is a hot area to keep innovating on every year it's the same same conversation get better faster networking more more options there the flex computes interesting if you don't mind me getting a quick clarification could you explain the address between resource defined versus Hardware defined because this is kind of what we had saw at explore coming out that notion of resource defined versus Hardware defined what's that what does that mean yeah I mean I think we've been super successful in this Hardware defined notion where we're scaling by the hardware unit uh that we present as software-defined data centers right so that's been super successful but we you know customers wanted more especially customers in different parts of the world wanted to start even smaller and grow even more incrementally right lower the cost even more and so this is the part where resource defined starts to be very very interesting as a way to think about you know here's my bag of resources exactly based on what the 
customer's requested it would be for fiber machines five containers its size exactly for that and then as utilization grows we elastically behind the scenes were able to grow it through policies so that's a whole different dimension it's a whole different service offering that adds value when customers are comfortable they can go from one to the other they can go back to that post-based model if they so choose to and there's a jump off point across these two different economic models it's kind of cloud flexibility right there I like the name Fred let's get into some of the uh examples of customers if you don't mind let's get into some of these we have some time I want to unpack a little bit of what's going on with the customer deployments one of the things we've heard again on the cube is from customers is they like the clarity of the relationship they love the cloud positioning of it and then what happens is they lift and shift the workloads and it's like feels great it's just like we're running VMware on AWS and then they start consuming higher level Services kind of that adoption Next Level happens um and because it's in the cloud so so can you guys take us through some recent examples of customer wins or deployments where they're using VMware Cloud on AWS on getting started and then how do they progress once they're there how does it evolve can you just walk us through a couple use cases sure um there's a well there's a couple one it's pretty interesting that you know like you said as there's more and more bids you need better and better hardware and networking and we're super excited about the I-4 uh and the capabilities there in terms of doubling and or tripling what we're doing around a lower variability on latency and just improving all the speeds but what customers are doing with it like the college in New Jersey they're accelerating their deployment on a on onboarding over like 7 400 students over a six to eight month period and they've really realized a 
ton of savings but what's interesting is where and how they can actually grow onto additional native Services too so connectivity to any other services is available as they start to move and migrate into this um the the options there obviously are tied to all the Innovation that we have across any Services whether it's containerized and with what they're doing with tanzu or with any other container and or services within AWS so so there's there's some pretty interesting scenarios where that data and or the processing which is moved quickly with full compliance whether it's in like health care or regulatory business is is allowed to then consume and use things for example with text extract or any other really cool service that has you know monthly and quarterly Innovations so there's things that you just can't could not do before that are coming out uh and saving customers money and building Innovative applications on top of their uh their current uh app base in in a rapid fashion so pretty excited about it there's a lot of examples I think I probably don't have time to go into too many here yeah but that's actually the best part is listening to customers and seeing how many net new services and new applications are they actually building on top of this platform now Ryan what's your perspective from the VMware psychics you know you guys have now a lot of head room to offer customers with Amazon's you know higher level services and or whatever's homegrown what is being rolled out because you now have a lot of hybrid too so so what's your what's your take on what what's happening and with customers I mean it's been phenomenal the customer adoption of this and you know Banks and many other highly sensitive verticals are running production grade applications tier one applications on the service over the last five years and so you know I have a couple of really good examples SNP Global is one of my favorite examples large Bank the merch with IHS Market big sort of 
conglomeration now. Both customers were using VMware Cloud on AWS in different ways, and one of their use cases was: how do I respond to these global opportunities without having to invest in physical data centers? And then, how do I migrate and consolidate all my data centers across the globe, of which there were many? So one specific example for this company was how they migrated a thousand workloads to VMware Cloud on AWS in just six weeks. Pretty phenomenal, if you think about everything that goes into a cloud migration process: people, process, technology. And the beauty of the technology, going from VMware point A to VMware point B, is that it's the lowest-cost, lowest-risk approach to adopting VMware Cloud on AWS. So that's one of my favorite examples. There are many other examples across other verticals that we continue to see. The good thing is we're seeing rapid expansion across the globe; we're constantly entering new markets with a limited number of regions, and progressing our roadmap.
>> It's great to see. I mean, the data center migrations go from many, many months to weeks. It's interesting to see some of those success stories. So congratulations.
>> Another one of the interesting and fascinating benefits is the sustainability improvement, in terms of being green. So the efficiency gains that we have, both in current-generation and new-generation processors, and everything that we're doing to make sure that when a customer can be elastic, they're also saving power, which is really critical in a lot of regions worldwide at this point in time; they're seeing those benefits. If you're running really inefficiently in your own data center, that is just not a great use of power. So the actual calculators and the benefits to these workloads are pretty phenomenal, just in being more green, which I like. We just all need to do our part there, and this is a big part of it here.
>> It's a huge point about sustainability for everyone; glad you called that out. The other one I would say is supply chain issues, another constraint you see: I can't buy hardware. And the third one is really obvious, but no one really talks about it: security. I remember interviewing Steven Schmidt at AWS many years ago, this was like 2013, and at that time people were saying the cloud's not secure. And he's like, listen, it's more secure in the cloud than on premise. And if you look at the security breaches, it's all about the on-premise data center vulnerabilities, not so much hardware. And staying current on the isolation there is hard. So I think the security and supply chain threat is another one. Do you agree?
>> I absolutely agree. It's hard to manage supply chain nowadays. We put a lot of effort into that, and I think we have a great ability to forecast and make sure that we can lean in, have the resources that are available, and run them more efficiently. And then, like you said, on the security point: security is job one. It is the only P1. If you think of how we build our infrastructure, from Nitro all the way up, and how we respond and work with our partners and our customers, there's nothing more important.
>> And Narayan, your point earlier about the managed service, patching and being on top of things, is really going to get better. All right, final question. I really want to thank you for your time on this showcase; it's really been a great conversation. Fred, you had made a comment earlier, and I want to end with kind of a curveball and put you guys on the spot. We're talking about a modern, new shift; we're seeing another inflection point, and we've been documenting it. It's almost like cloud hitting another inflection point, with application and open-source growth significantly at the app layer continuing to put a lot of pressure and innovation on the
infrastructure side. So the question for each of you to answer is: what's the same and what's different in today's market? It's kind of like we want more of the same here, but also things have changed radically, and for the better. What's changed for the better, and what's still the same kind of thing hanging around that people are focused on? Can you share your perspective?
>> I'll tackle it. Businesses are complex, and they're often unique; that's the same. What's changed is how fast you can innovate. The ability to combine managed services and new innovative services and build new applications is so much faster today, leveraging world-class hardware that you don't have to worry about, that's elastic. You could not do that even five, ten years ago to the degree you can today, especially with the innovation. So innovation is accelerating at a rate that most people can't even comprehend, given the set of services that are available to them. It's really fascinating to see what a one-pizza team of engineers can actually go develop in a week. It is phenomenal. So I'm super excited about this space, and it's only going to continue to accelerate. That's my take.
>> Yeah, you've got a lot of platform to compete on; with Amazon, you've got a lot to build on there. Narayan, what's your answer to that question?
>> I think we're seeing a lot of innovation with new applications that customers are building. I think what we see is this whole notion of how you go from desktop to production to the secure supply chain, and how we can truly build on the agility that developers desire and build in all the security and the pipelines to energize that move to production quickly and efficiently. I think we are at the very start of that sort of journey. Of course we have invested in Kubernetes, a means to an end, but there's so much more beyond that happening in the industry, and I think we're at the very, very beginning of this transformation, the enterprise transformation that many of our customers are going through, and we're inherently part of it.
>> Yeah, well, gentlemen, I really appreciate it. We're seeing the same things, more of the same here: solving these complexities with abstractions, whether it's higher-level services, with large-scale infrastructure at your fingertips; infrastructure as code; infrastructure to be provisioned; serverless; all the good stuff happening, Fred, with AWS on your side. And we're seeing customers resonate with this idea of being an operator again, being a cloud operator and developer. So developer ops, kind of DevOps, is kind of changing too, all for the better. Thank you for spending the time. We're seeing, again, that traction with the VMware customer base, and it's getting along great together. So thanks for sharing your perspectives. We appreciate it.
>> Thank you so much.
>> Okay, thank you, John.
>> Okay, this is theCUBE, in the AWS and VMware showcase: accelerating business transformation with VMware Cloud on AWS, a jointly engineered solution bringing innovation to the VMware customer base, going to the cloud and beyond. I'm John Furrier, your host. Thanks for watching. (upbeat music)
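As an illustration of the higher-level services Fred mentions above (Amazon Textract in particular), here is a minimal sketch of what consuming Textract from the Python SDK looks like once a workload's documents live in S3. The bucket, key, and parsing flow are illustrative assumptions, not something described in the interview; only the Textract `detect_document_text` call itself is the real API.

```python
def lines_from_blocks(response):
    """Collect the text of every LINE block in a Textract
    DetectDocumentText response, in the order returned."""
    return [
        block["Text"]
        for block in response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    ]

def extract_document_text(bucket, key):
    """OCR a scanned document stored in S3 and return its text lines.
    Requires AWS credentials; bucket and key are hypothetical."""
    import boto3  # AWS SDK; imported lazily so lines_from_blocks stays usable offline
    textract = boto3.client("textract")
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return lines_from_blocks(response)

# Hypothetical usage:
# print(extract_document_text("records-archive", "2022/form-7731.png"))
```

The split into a pure parsing helper plus a thin API wrapper is a design choice: the helper can be exercised against canned responses without touching AWS.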
Fernando Castillo & Steven Jones, AWS | AWS re:Invent 2020 Partner Network Day
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, special coverage sponsored by the AWS Global Partner Network.
>> Hello, everyone. This is Dave Vellante, and welcome to theCUBE's virtual coverage of AWS re:Invent 2020, with a special focus on the APN partner experience. I'm excited to have two great guests on the program. Fernando Castillo is the head of the SAP on AWS Partner Network and SAP Alliances at AWS, and Steven Jones is the general manager, SAP and EC2 Enterprise, at AWS. Gentlemen, welcome to theCUBE. Great to see you.
>> Glad to be here.
>> So, guys, SAP on AWS: it's a core workload for customers. I call it the poster child for mission-critical workloads and applications. Now, a lot has happened since we last talked to you guys. So tell us, maybe starting with Steven: what's going on with SAP on AWS? Give us the update.
>> I appreciate the question, Dave. Look, a lot of customers continue to migrate these mission-critical workloads to AWS. A good example is the U.S. Navy, who moved their entire SAP landscape, their ERP workload, to AWS. This is a very large system that supports over 72,000 users across 66 different Navy commands. They estimate that 70 billion worth of parts and goods actually transact through the system every year. Just massive, right? And this type of adoption continues to accelerate at a very rapid clip. Today, over 5,000 customers are running SAP workloads on AWS, and they're really trusting us to manage and run these workloads. Another interesting stat here is that more than half of these customers are actually running SAP HANA, which is SAP's flagship in-memory database.
>> Right. Fernando, can you add to that?
>> Sure. So definitely, SAP themselves continue to use AWS to run their own offerings. Think about Concur, the SAP Cloud Platform, SAP Analytics Cloud, and new offers like HANA Cloud.
In addition to that, we continue to see the SAP on AWS partner network grow at an accelerated pace. Today we have over 60 SAP competency partners all over the world helping SAP customers. So the customers that are migrating SAP to AWS are not only looking for reduced costs and improved performance, but also for access to new capabilities, so they can innovate around their core business systems and transform their businesses.
>> So, Fernando, I wonder if I could stay with you for a minute. I mean, the numbers that Steve was putting out there, it's just massive scale, so you obviously have a lot of data. I'm wondering, when you talk to these customers, are you discerning any common patterns that are emerging? What are some of the things that you're hearing or seeing when you analyze the data?
>> Sure. So just to give a couple of examples: our biggest customers are doing complete SAP transformations onto S/4HANA; they're changing to this new SAP ERP code line. Other customers have immediate needs, and they're taking their existing assets to AWS, looking to reduce costs and improve performance, but also to set a platform for innovation. This innovation is not something they can wait for; they need it right now. The time to innovate is now. And some of these customers are saying that while S/4HANA is a nice path, it is a multi-year process for most organizations, and they have a hard time waiting for all of this before they start innovating. So instead of that, they focus on bringing what they have and start innovating right away. And Steve has some great stories around here, so maybe Steve can share some of that.
>> Yeah, that'd be great, Steve.
>> Yeah, look, I think a good example here, and Fernando touched on it well:
So customers are coming from all kinds of different places in their journey to AWS as it relates to this critical workload, and some are looking to really reap the benefits of the investments they've made over the last couple of decades, sometimes. And Invista is a really good example here. They're a subsidiary of Koch Industries, and they migrated and moved their existing SAP ERP solution, called ECC, to AWS. They estimate that this migration alone, from an infrastructure cost-savings perspective, has netted them about two million per year. Additionally, they started to bring together some of the other issues they were trying to solve from a business perspective, now that they were on the AWS platform. One thing they recognized is that they had different data silos that they had been operating in an on-premises world, with massive factory solutions, and bringing all of that data together on a single platform on AWS, and enriching it with the SAP data, has allowed them to actually improve their forecasting and supply chain processes across multiple data sources. They estimate that that is saving them additional millions per year. So again, customers are not necessarily waiting to innovate, but actually moving forward now.
>> All right, so I've got to ask, and don't hate me for asking this question, but everybody talks about how great they are at supporting SAP. It's one of the top workloads, of course, because SAP is a huge player in the application space. So I want you guys to address how AWS specifically compares to some of your competitors, the hyperscalers specifically, as it relates to supporting SAP workloads. What's the real differential value that you guys bring? Maybe Steve, you could start.
>> Sure. You're probably getting to know us a little bit: we don't focus a lot on competition. As mentioned, we continue to see customers adopt AWS for SAP at a really rapid clip.
And that alone actually brings a lot of feedback back into how we consider our own service offerings as they relate to this particular workload, and that's an important signal for what we're building. But customers do tell us that security, performance, and availability matter, especially for this workload, which, to be honest, is the backbone of many, many organizations, and we understand why. There was a study done recently by IDC where they found that even a single hour of unplanned downtime, as it relates to this particular workload, could cost millions. And so it's super important. And if you look at publicly available data from an availability perspective, AWS has considerably less downtime than the other hyperscalers out there. We take the performance and availability of our entire global footprint seriously, and for this workload in particular, it's super important.
>> Well, you know, that's a great point, Steve. I mean, if you've got mission-critical applications like SAP supporting the business, that's driving revenue, it's driving productivity. The higher the value of the application, the greater the impact when it's down. I wonder, Fernando: Steve said you guys don't focus on the competition; well, as an analyst, I always focus on the competition. So I wonder if you're going to add anything to that.
>> Sure. So again, as you can imagine, multiple analysts cover the space, and everybody shares information, and the analysts have agreed that AWS leads in infrastructure services, including those required for SAP, across the globe. So we feel very humbled and honored by this recognition, and it encourages us to continue to improve ourselves. To give you a couple of examples: for the tenth year in a row, AWS was evaluated as a leader in the Gartner Magic Quadrant for cloud infrastructure and platform services.
And, as you know, they measure two axes: ability to execute and completeness of vision, and we were the highest on both of them. Another third party, just to keep with one more, is ISG, the technology research firm; you might know them as well. Every year they publish their SAP infrastructure-as-a-service provider lens report (a long name), which basically analyzes which providers are best suited to host SAP S/4HANA workloads and, more broadly, SAP HANA landscapes, including very large-scale SAP HANA landscapes. They recognized AWS, for the third year in a row, as best in class: enterprise-grade infrastructure, security, and performance, as Steve mentioned, but also calling out the partner community as a key differentiation. And they positioned AWS as the leader in the quadrant for the U.S., U.K., France, Germany, the Nordics, and Brazil. So again, we're really honored and humbled, and it encourages us to continue to improve.
>> You know, Steve, I just wrote a piece on Cloud 2030, trying to project what the next 10 years are going to look like, and in it I listed a lot of things, but one of the things I talked about was some of the technical factors, like alternative processors and specialized networks, and you guys have really always done a good job of looking at purpose-built stuff that can run workloads faster. How relevant is that in the SAP community?
>> Oh, that's a great question, Dave. It's absolutely relevant. Take a look at what we've done over the years with Nitro, and how we've actually brought the ability for customers to run on bare-metal infrastructure but still have that integrated, native cloud experience.
That is absolutely applicable to an SAP workload, and with that technology we're actually able to bring customers the capability to run these mission-critical workloads on instances with up to 24 terabytes of RAM: albeit bare metal, but fully integrated into the AWS network fabric.
>> Right. I mean, a lot of people need that bare-metal raw performance, and it makes sense that you've prioritized such an important class of workload. I'm not surprised; I mean, the numbers that you threw out are pretty impressive. So it's clear you're leading the charge here. Maybe you could share a little glimpse of what's coming in the future. Show us a little leg, Steve.
>> Yeah, well, look, we know that infrastructure is super important to our customers, and in particular the customers running these mission-critical workloads. But there's a lot of heavy lifting that we also want to simplify. And so you've seen some indications of what we've done here over the years; ISG, which Fernando mentioned, actually called out AWS as differentiating here. For many years we've actually been leading in releasing tools for customers to orchestrate and automate the deployment of these types of workloads, SAP in particular. I mean, if you think about it, a customer who is coming to a hyperscale platform like AWS has to learn what that means, plus understand all the best practices from SAP and AWS to make that thing really shine from a performance and availability perspective. That's a heavy ask, right? So we put a lot of work from a tooling perspective into automating this and making it super simple, not just for customers but also partners.
>> Anything you want to chime in on there, particularly on the partner side, Fernando?
>> Sure. So this is super important for the partner community. As you can imagine, the tooling that we're bringing to
the market is helping these partners move quicker, so they don't have to reinvent the wheel all the time; they can just take this and move forward. To give an example: one of our partners in New York used the deployment tooling Steve just referenced to create workloads in an automated way, speeding up the delivery time for creating these environments by some 75%. So just imagine the impact of this. The thing here that is important is that our goal is to help customers and partners move quicker, removing any undifferentiated heavy lifting. And that's kind of the mantra of this group.
>> You know, when you think about what Doug Yeum was saying in the keynote about the importance of partners: I've been on this kick about how we've moved in this industry from products to platforms, and the next 10 years are going to be about leveraging ecosystems, the power of many versus the resources of a few, or even one as large as AWS. So partners are critical, and I wonder if you could talk to the role that the network partners are playing in affecting SAP customer outcomes and strategies. Maybe Steve, you could take that first.
>> Yeah, look, we recognize that the migration and the management of these systems is complex, and for years we've invested in a global community of partners, many of whom have been fundamental to SAP customer success over a couple of decades. And so there are some nuances that need to be realized when it comes to running SAP on a hyperscale platform like AWS, and we put a lot of work into making sure these partners are equipped to ensure customers have a really good experience. And in a recent conversation I had with the CEO of a large CPG company, he reflected that the partners really are the glue that kind of brings it all together for them.
And, you know, just to share something with you today: our partner community network for SAP is actually helping over 90% of net-new customers who are coming to migrate SAP workloads to AWS. So they're just absolutely critical.
>> So, Fernando, there's the M-word, the migration. You don't want to unless you have to, but people have to move to the cloud. So what can you add to this conversation?
>> Sure, Dave. So again, just to echo what Steve mentioned: migration is super important. We have a group of partners that are right now specializing in migration projects, and they have built migration factories. You may have seen some of them; they have been doing press releases through the whole year saying that they're part of this and describing the special skills they're bringing to help customers adopt AWS. They go through a very detailed process; we call them MAP for SAP partners. They have this incremental value on top of being SAP competency partners, which I referred to earlier, and this group has, as mentioned, shown additional capability to safeguard these migrations. Of course, we appreciate and respect that, and we have put investment programs in place to help them support their own customers in these migrations, and the broader SAP ecosystem. But it's not only about migrations. One important topic, as Steve mentioned: we have this great set of customers who have trusted us, over 5,000 through the year, and these customers are asking for innovation. They're asking us: how can the ecosystem help us innovate faster? So these partners are using AWS as a platform for innovation, creating new solutions that are relevant for SAP.
So they're basically helping customers modernize their business processes. You can take an example like Accenture's Data Accelerator, taking SAP information and data lakes to really harness the power of data; or Deloitte's Kinetic Finance, helping deploy Central Finance, which is a key component of SAP; or partners like Syntax, which has created an industrial IoT offering that connects with the SAP core. So more and more, you will see these ecosystem partners innovating on AWS to support SAP customers.
>> You know, I think that's such an important point, because for decades (I've been around for a while) the migrations have gone like this: oftentimes there's a forced march, because maybe a vendor is not going to support something anymore, or you're just trying to squeeze more cost out of the lemon. What you guys are talking about is leveraging an ecosystem for innovation, and again that ties into the themes that we're talking about with Cloud 2030 and the next decade of innovation. Let's close, guys. What can customers, SAP customers, AWS customers, expect from re:Invent this year? And, maybe more broadly, what can they expect from AWS in the coming 12 months? Maybe, Steve, you could give us a sense, and then Fernando could bring us home.
>> You bet. Look, this year we've really tried to focus on customer stories, so we've optimized for that. There are a number of sessions here at re:Invent this year where we want customers and partners to learn from other customer experiences. So customers will be able to listen to Bristol Myers Squibb talk about their experiences, and Zalando, Newmont, and Volkswagen will be talking about the different places where they are on this journey to cloud and this innovation lifecycle, because it really is about choice and what's right for their business. So we're pretty excited about that.
>> Yeah. Nice mix of representative industries there.
Fernando, bring us home, please.
>> Sure. So, as we think about 2021 and the future, rest assured we'll continue to invest heavily to make sure AWS remains the platform of innovation and choice for SAP customers. Whether a customer wants to move their existing investments and continue to add value to what they have already done for years, or go for a full transformation, we're here to support their choice, and we're committed to that as part of our customer-obsession culture. So we're super excited about the future, and we're thankful for you spending time with us today.
>> Great, guys. Look, these are the most demanding workloads, and we're seeing that that rapid movement to the cloud is just going to accelerate over the coming years. Thanks so much for coming on theCUBE. Really appreciate it.
>> Our pleasure. Thank you.
>> All right. Thank you for watching, everyone. Keep it right there for more great content. You're watching theCUBE's coverage of AWS re:Invent 2020.
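Steve's point above about running SAP HANA on bare-metal instances with up to 24 terabytes of RAM can be explored programmatically. Below is a hedged sketch using the AWS SDK for Python: the `describe_instance_types` call and paginator are the real API, but the 6 TiB threshold, the function names, and the example instance types in the usage note are illustrative assumptions, not something stated in the interview.

```python
GIB_IN_MIB = 1024  # MiB per GiB

def high_memory_bare_metal(instance_types, min_mem_gib=6144):
    """Filter DescribeInstanceTypes entries down to bare-metal types with
    at least min_mem_gib GiB of RAM, sorted largest-first."""
    picks = [
        t for t in instance_types
        if t.get("BareMetal")
        and t["MemoryInfo"]["SizeInMiB"] >= min_mem_gib * GIB_IN_MIB
    ]
    return sorted(picks, key=lambda t: t["MemoryInfo"]["SizeInMiB"], reverse=True)

def find_hana_candidates(min_mem_gib=6144):
    """Page through every instance type in the current region and return the
    names of high-memory bare-metal candidates. Requires AWS credentials."""
    import boto3  # AWS SDK; imported lazily so the helper above runs offline
    ec2 = boto3.client("ec2")
    all_types = []
    for page in ec2.get_paginator("describe_instance_types").paginate():
        all_types.extend(page["InstanceTypes"])
    return [t["InstanceType"] for t in high_memory_bare_metal(all_types, min_mem_gib)]
```

In a region with the EC2 High Memory family, the candidate list would include names such as `u-24tb1.metal` (assumed here as an example of the 24 TiB class mentioned in the conversation).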
Christian Kleinerman Keynote | Snowflake Data Cloud Summit
(upbeat music) >> Hi everyone, thank you for joining us at the Data Cloud Summit. The last couple of months have been an exciting time at Snowflake, and yet what's even more compelling to all of us at Snowflake is what's ahead. Today I have the opportunity to share new product developments that will extend the reach and impact of our Data Cloud and improve the experience of Snowflake users. Our product strategy is focused on four major areas. First, Data Cloud content. In the Data Cloud, silos are eliminated, and our vision is to bring the world's data within reach of every organization. You'll hear about new data sets and data services available in our data marketplace and see how previous barriers to sourcing and unifying data are eliminated. Second, extensible data pipelines. As you gain frictionless access to a broader set of data through the Data Cloud, Snowflake's platform brings additional capabilities and extensibility to your data pipelines, simplifying data ingestion and transformation. Third, data governance. The Data Cloud eliminates silos and breaks down barriers, and in a world where data collaboration is the norm, the importance of data governance is ratified and elevated. We'll share new advancements to support how the world's most demanding organizations mobilize their data while maintaining high standards of compliance and governance. Finally, our fourth area focuses on platform performance and capabilities. We remain laser-focused on continuing to lead with the most performant and capable data platform, and we have some exciting news to share about the core engine of Snowflake. As always, we love showing you Snowflake in action, and we prepared some demos for you.
Also, we'll keep coming back to the fact that one of the characteristics of Snowflake that we're proud of is that we offer a single platform from which you can operate all of your data workloads, across clouds and across regions. Which workloads, you may ask? Specifically: data warehousing, data lake, data science, data engineering, data applications, and data sharing. Snowflake makes it possible to mobilize all your data in service of your business without the cost, complexity and overhead of managing multiple systems, tools and vendors. Let's dive in. As you heard from Frank, the Data Cloud offers a unique capability to connect organizations and create collaboration and innovation across industries, fueled by data. The Snowflake data marketplace is the gateway to the Data Cloud, providing visibility for organizations to browse and discover data that can help them make better decisions. For data providers on the marketplace, there is a new opportunity to reach new customers, create new revenue streams, and radically decrease the effort and time to data delivery. Our marketplace dramatically reduces the friction of sharing and collaborating with data, opening up new possibilities to all participants in the Data Cloud. We introduced the Snowflake data marketplace in 2019, and it is now home to over 100 data providers, with half of them having joined the marketplace in the last four months. Since our most recent product announcements in June, we have continued broadening the availability of the data marketplace, across regions and across clouds. Our data marketplace provides the opportunity for data providers to reach consumers across cloud and regional boundaries. A critical aspect of the Data Cloud is that we envisioned organizations collaborating not just in terms of data, but also data-powered applications and services.
Think of instances where a provider doesn't want to open access to the entirety of a data set, but wants to provide access to business logic that has access to and leverages such a data set. That is what we call data services. And we want Snowflake to be the platform of choice for developing, discovering and consuming such rich building blocks. To see how the data marketplace comes to life, and in particular one of these data services, let's jump into a demo. For all of our demos today, we're going to put ourselves in the shoes of a fictional global insurance company. We've called it Insureco. Insurance is a data-intensive and highly regulated industry. Having the right access control and insight from data is core to every insurance company's success. I'm going to turn it over to Prasanna to show how the Snowflake data marketplace can solve a data discoverability and access problem. >> Let's look at how Insureco can leverage data and data services from the Snowflake data marketplace and use it in conjunction with its own data in the Data Cloud to do three things: better detect fraudulent claims, arm its agents with the right information, and benchmark business health against competition. Let's start with detecting fraudulent claims. I'm an analyst in the Claims Department. I have auto claims data in my account. I can see there are 2000 auto claims, many of these submitted by auto body shops. I need to determine if they are valid and legitimate. In particular, could some of these be insurance fraud? By going to the Snowflake data marketplace, where numerous data providers and data service providers can list their offerings, I find the Quantifind data service. It uses a combination of external data sources and predictive risk typology models to inform the risk level of an organization. Quantifind's external sources include sanctions and blacklists, negative news, social media, and real-time search engine results.
That's a wealth of data and models built on that data which we don't have internally. So I'd like to use Quantifind to determine a fraud risk score for each auto body shop that has submitted a claim. First, the Snowflake data marketplace made it really easy for me to discover a data service like this. Without the data marketplace, finding such a service would be a lengthy ad hoc process of doing web searches and asking around. Second, once I find Quantifind, I can use Quantifind's service against my own data in three simple steps using data sharing. I create a table with the names and addresses of auto body shops that have submitted claims. I then share the table with Quantifind to start the risk assessment. Quantifind does the risk scoring and shares the data back with me. Quantifind uses external functions, which we introduced in June, to get results from their risk prediction models. Without Snowflake data sharing, we would have had to contact Quantifind to understand what format they wanted the data in, then extract this data into a file, FTP the file to Quantifind, wait for the results, then ingest the results back into our systems for them to be usable. Or I would have had to write code to call Quantifind's API. All of that would have taken days. In contrast, with data sharing, I can set this up in minutes. What's more, now that I have set this up, as new claims are added in the future, they will automatically leverage Quantifind's data service. I view the scores returned by Quantifind and see that two entities in my claims data have a high score for insurance fraud risk. I open up the link returned by Quantifind to read more, and find that this organization has been involved in an insurance crime ring. Looks like that is a claim that we won't be approving. Using the Quantifind data service through the Snowflake data marketplace gives me access to a risk scoring capability that we don't have in house, without having to call custom APIs.
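The three-step share/score/share-back flow described above can be sketched in miniature. This is a hypothetical illustration, not Quantifind's actual model or API: the "risk model" here is just a toy blacklist lookup standing in for their external data sources.

```python
# Toy stand-in for Quantifind's external sources (sanctions lists, negative news, etc.)
BLACKLIST = {"Acme Auto Body"}

def build_shop_table(claims):
    """Step 1: collect names/addresses of shops that submitted claims (the shared table)."""
    return [{"name": c["shop"], "address": c["address"]} for c in claims]

def score_shops(shop_table):
    """Step 2: the provider scores each shop (stubbed risk model)."""
    return [
        {"name": s["name"], "fraud_risk": 0.95 if s["name"] in BLACKLIST else 0.10}
        for s in shop_table
    ]

def high_risk(scored, threshold=0.5):
    """Step 3: the consumer reads the shared scores back and flags risky shops."""
    return [s["name"] for s in scored if s["fraud_risk"] >= threshold]

claims = [
    {"shop": "Acme Auto Body", "address": "1 Main St"},
    {"shop": "Honest Repairs", "address": "2 Oak Ave"},
]
flagged = high_risk(score_shops(build_shop_table(claims)))
```

The point of the demo is that with data sharing these three steps happen over live shared tables rather than FTP'd files, so new claims flow through automatically.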
For a provider like Quantifind, this drives new leads and monetization opportunities. Now that I have identified potentially fraudulent claims, let's move on to the second part. I would like to share this fraud risk information with the agents who sold the corresponding policies. To do this, I need two things. First, I need to find the agents who sold these policies. Then I need to share with these agents the fraud risk information that we got from Quantifind. But I want to share it such that each agent only sees the fraud risk information corresponding to claims for policies that they wrote. To find agents who sold these policies, I need to look up our Salesforce data. I can find this easily within Insureco's internal data exchange. I see there's a listing with Salesforce data. Our Sales Ops team has published this listing, so I know it's our officially blessed data set, and I can immediately access it from my Snowflake account without copying any data or having to set up ETL. I can now join Salesforce data with my claims to identify the agents for the policies that were flagged to have fraudulent claims. I also have the Snowflake account information for each agent. Next, I create a secure view that joins on an entitlements table, such that each agent can only see the rows corresponding to policies that they have sold. I then share this directly with the agents. This share contains the secure view that I created, with the names of the auto body shops and the fraud risk identified by Quantifind. Finally, let's move on to the third and last part. Now that I have detected potentially fraudulent claims, I'm going to move on to building a dashboard that our executives have been asking for. They want to see how Insureco compares against other auto insurance companies on key metrics, like total claims paid out for the auto insurance line of business nationwide. I go to the Snowflake data marketplace and find SNL U.S. Insurance Statutory Data from S&P.
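The secure view described above boils down to a join against an entitlements table that filters per viewer. A minimal sketch, with illustrative (not Insureco's actual) table and column names:

```python
# Claims with fraud scores, and an entitlements table mapping agents to the
# policies they sold (derived from the Salesforce data in the demo).
claims = [
    {"policy": "P1", "shop": "Acme Auto Body", "fraud_risk": 0.95},
    {"policy": "P2", "shop": "Honest Repairs", "fraud_risk": 0.10},
]
entitlements = [
    {"agent": "agent_a", "policy": "P1"},
    {"agent": "agent_b", "policy": "P2"},
]

def secure_view(agent):
    """Return only the claim rows the given agent is entitled to see."""
    allowed = {e["policy"] for e in entitlements if e["agent"] == agent}
    return [row for row in claims if row["policy"] in allowed]
```

In Snowflake the equivalent logic lives in the view definition, so every agent querying the same shared view sees only their own rows.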
This data is included with Insureco's existing subscription with S&P, so when I request access to it, S&P can immediately share this data with me through Snowflake data sharing. I create a virtual database from the share, and I'm ready to query this data, no ETL needed. And since this is a virtual database pointing to the original data in S&P's Snowflake account, I have access to the latest data as it arrives in S&P's account. I see that the SNL U.S. Insurance Statutory Data from S&P has data on assets, premiums earned and claims paid out by each US insurance company in 2019. This data is broken up by line of business and geography, and in many cases goes beyond the data that would be available from public financial filings. This is exactly the data I need. I identify a subset of comparable insurance companies whose net total assets are within 20% of Insureco's, and whose lines of business are similar to ours. I can now create a Snowsight dashboard that compares Insureco against similar insurance companies on key metrics, like net earned premiums and net claims paid out in 2019 for auto insurance. I can see that while we are below median on net earned premiums, we are doing better than our competition on total claims paid out in 2019, which could be a reflection of our improved claims handling and fraud detection. That's a good insight that I can share with our executives. In summary, the Data Cloud enabled me to do three key things. First, seamlessly find data and data services that I need to do my job, be it an external data service like Quantifind, an external data set from S&P, or internal data from Insureco's data exchange. Second, get immediate live access to this data. And third, control and manage collaboration around this data. With Snowflake, I can mobilize data and data services across my business ecosystem in just minutes. >> Thank you Prasanna. Now I want to turn our focus to extensible data pipelines.
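The peer-benchmarking step in the demo above, selecting insurers whose net total assets are within 20% of Insureco's and comparing a metric against the peer median, can be sketched like this. The figures are invented for the example:

```python
def peers(companies, our_assets, tolerance=0.20):
    """Keep companies whose assets fall within +/- tolerance of ours."""
    lo, hi = our_assets * (1 - tolerance), our_assets * (1 + tolerance)
    return [c for c in companies if lo <= c["assets"] <= hi]

def median(values):
    s = sorted(values)
    n, mid = len(s), len(s) // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

companies = [
    {"name": "A", "assets": 95, "claims_paid": 40},
    {"name": "B", "assets": 110, "claims_paid": 55},
    {"name": "C", "assets": 300, "claims_paid": 200},  # too large: excluded from peer set
]
peer_set = peers(companies, our_assets=100)
benchmark = median([c["claims_paid"] for c in peer_set])
```

In the demo this filtering and aggregation runs as SQL over the live shared S&P data, with the dashboard on top.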
We believe there are two different and important ways of making Snowflake's platform highly extensible. First, by enabling teams to leverage services or business logic that live outside of Snowflake, interacting with data within Snowflake. We do this through a feature called external functions, a mechanism to conveniently bring data to where the computation is. We announced this feature for calling regional endpoints via AWS API Gateway in June, and it's currently available in public preview. We are also now in public preview supporting Azure API Management, and will soon support Google API Gateway and AWS private endpoints. The second extensibility mechanism does the converse. It brings the computation to Snowflake to run closer to the data. We will do this by enabling the creation of functions and procedures in SQL, Java, Scala or Python, ultimately providing choice based on the programming language preference for you or your organization. You will see Java, Scala and Python available through private and public previews in the future. The possibilities enabled by these extensibility features are broad and powerful. However, our commitment to being a great platform for data engineers, data scientists and developers goes far beyond programming language. Today, I am delighted to announce Snowpark, a family of libraries that will bring a new experience to programming data in Snowflake. Snowpark enables you to write code directly against Snowflake in a way that is deeply integrated into the languages I mentioned earlier, using familiar concepts like DataFrames. But the most important aspect of Snowpark is that it has been designed and optimized to leverage the Snowflake engine, with its main characteristics and benefits: performance, reliability, and scalability with near zero maintenance. Think of the power of declarative SQL statements available through a well-known API in Scala, Java or Python, all of this against data governed in your core data platform.
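The core idea behind a DataFrame API that "leverages the engine" is that operations are recorded rather than executed, then compiled into a single statement that runs where the data lives. Here is a deliberately tiny toy illustrating that pushdown pattern; it is not the real Snowpark library, and all class and method names are invented for the sketch:

```python
class ToyDataFrame:
    """Records select/filter operations and compiles them to one SQL string."""

    def __init__(self, table, columns=None, predicate=None):
        self.table, self.columns, self.predicate = table, columns, predicate

    def select(self, *cols):
        # No data moves; we just remember which columns were requested.
        return ToyDataFrame(self.table, list(cols), self.predicate)

    def filter(self, predicate_sql):
        return ToyDataFrame(self.table, self.columns, predicate_sql)

    def to_sql(self):
        # The whole chained expression collapses into a single pushed-down query.
        cols = ", ".join(self.columns) if self.columns else "*"
        sql = f"SELECT {cols} FROM {self.table}"
        if self.predicate:
            sql += f" WHERE {self.predicate}"
        return sql

df = ToyDataFrame("customer_call_logs").select("transcript").filter("call_date > '2020-01-01'")
```

Because nothing executes until the compiled query is sent, the engine can optimize the whole pipeline at once instead of round-tripping row by row.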
We believe Snowpark will be transformative for data programmability. I'd like to introduce Sri to showcase how our fictitious insurance company Insureco will be able to take advantage of the Snowpark API for data science workloads. >> Thanks Christian. Hi, everyone! I'm Sri Chintala, a product manager at Snowflake focused on extensible data pipelines. And today, I'm very excited to show you a preview of Snowpark. In our first demo, we saw how Insureco could identify potentially fraudulent claims. Now, for all the valid claims, InsureCo wants to ensure they're providing excellent customer service. To do that, they put in place a system to transcribe all of their customer calls, so they can look for patterns. A simple thing they'd like to do is detect the sentiment of each call, so they can tell which calls were good and which were problematic. They can then better train their claim agents for challenging calls. Let's take a quick look at the work they've done so far. InsureCo's data science team used Snowflake's external functions to quickly and easily train a machine learning model in H2O.ai. Snowflake has direct integrations with H2O and many other data science providers, giving Insureco the flexibility to use a wide variety of data science libraries, frameworks or tools to train their model. Now that the team has a custom-trained sentiment model tailored to their specific claims data, let's see how a data engineer at Insureco can use Snowpark to build a data pipeline that scores customer call logs using the model hosted right inside of Snowflake. As you can see, we have the transcribed call logs stored in the customer call logs table inside Snowflake. Now, as a data engineer trained in Scala, and used to working with systems like Spark and Pandas, I want to use familiar programming concepts to build my pipeline. Snowpark solves for this by letting me use popular programming languages like Java or Scala.
It also provides familiar concepts and APIs, such as the DataFrame abstraction, optimized to leverage and run natively on the Snowflake engine. So here I am in my IDE, where I've written a simple Scala program using the Snowpark libraries. The first step in using the Snowpark API is establishing a session with Snowflake. I use the session builder object and specify the required details to connect. Now, I can create a DataFrame for the data in the transcripts column of the customer call logs table. As you can see, the Snowpark API provides native language constructs for data manipulation. Here, I use the select method provided by the API to specify the column names to return, rather than writing select transcripts as a string. By using the native language constructs provided by the API, I benefit from features like IntelliSense and type checking. Here you can see some of the other common methods that the DataFrame class offers, like filter, like join, and others. Next, I define a get sentiment user-defined function that will return a sentiment score for an input string by using our pre-trained H2O model. From the UDF, we call the score method that initializes and runs the sentiment model. I've built this helper into a Java file, which along with the model object and license are added as dependencies that Snowpark will send to Snowflake for execution. As a developer, this is all programming that I'm familiar with. We can now call our get sentiment function on the transcripts column of the DataFrame and write back the results of the scored transcripts to a new target table. Let's run this code and switch over to Snowflake to see the scored data and also all the work that Snowpark has done for us on the back end. If I do a select star from scored logs, we can see the sentiment score of each call right alongside the transcript. With Snowpark, all the logic in my program is pushed down into Snowflake.
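The scoring step in the demo, a UDF applied to every transcript with results written to a scored table, has this general shape. The keyword-counting "model" below is a toy stand-in for the trained H2O model in the demo, and the table layout is illustrative:

```python
# Toy sentiment lexicons standing in for the demo's trained model.
POSITIVE = {"thanks", "great", "resolved"}
NEGATIVE = {"angry", "delay", "problem"}

def get_sentiment(transcript):
    """Stand-in UDF: positive words minus negative words."""
    words = transcript.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def score_table(call_logs):
    """Apply the UDF to each row, producing the 'scored logs' table."""
    return [{**row, "sentiment": get_sentiment(row["transcript"])} for row in call_logs]

scored = score_table([
    {"call_id": 1, "transcript": "Thanks the issue was resolved"},
    {"call_id": 2, "transcript": "Angry about the delay"},
])
```

In the real pipeline the UDF wraps the model, runs inside the warehouse next to the data, and the scored rows land in a target table rather than a Python list.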
I can see in the query history that Snowpark has created a temporary Java function to host the pre-trained H2O model, and that the model is running right in my Snowflake warehouse. Snowpark has allowed us to do something completely new in Snowflake. Let's recap what we saw. With Snowpark, Insureco was able to use their preferred programming language, Scala, and use the familiar DataFrame constructs to score data using a machine learning model. With support for Java UDFs, they were able to run a trained model natively within Snowflake. And finally, we saw how Snowpark executed computationally intensive data science workloads right within Snowflake. This simplifies Insureco's data pipeline architecture, as it reduces the number of additional systems they have to manage. We hope that extensibility with Scala, Java and Snowpark will enable our users to work with Snowflake in their preferred way while keeping the architecture simple. We are very excited to see how you use Snowpark to extend your data pipelines. Thank you for watching, and with that, back to you, Christian. >> Thank you Sri. You saw how Sri could utilize Snowpark to efficiently perform advanced sentiment analysis. But of course, if this use case was important to your business, you'd want to fully automate this pipeline and analysis. Imagine being able to do all of the following in Snowflake. Your pipeline could start far upstream of what you saw in the demo, by storing your actual customer care call recordings in Snowflake. You may notice that this is new for Snowflake; we'll come back to the idea of storing unstructured data in Snowflake at the end of my talk today. Once you have the data in Snowflake, you can use our streams and tasks capabilities to call an external function to transcribe these files. To simplify this flow even further, we plan to introduce a serverless execution model for tasks, where Snowflake can automatically size and manage resources for you.
After this step, you can use the same serverless task to execute sentiment scoring of your transcript as shown in the demo, with incremental processing as each transcript is created. Finally, you can surface the sentiment score either via Snowsight, or through any tool you use to share insights throughout your organization. In this example, you see data being transformed from a raw asset into a higher level of information that can drive business action, all fully automated, all in Snowflake. Turning back to Insureco, you know how important data governance is for any major enterprise, but particularly for one in this industry. Insurance companies manage highly sensitive data about their customers, and have some of the strictest requirements for storing and tracking such data, as well as managing and governing it. At Snowflake, we think about governance as the ability to know your data, manage your data and collaborate with confidence. As you saw in our first demo, the Data Cloud enables seamless collaboration, control and access to data via the Snowflake data marketplace. And companies may set up their own data exchanges to create similar collaboration and control across their ecosystems. In future releases, we expect to deliver enhancements that create more visibility into who has access to what data and provide usage information of that data. Today, we are announcing a new capability to help Snowflake users better know and organize your data. This is our new tagging framework. Tagging in Snowflake will allow user-defined metadata to be attached to a variety of objects. We built a broad and robust framework with powerful implications. Think of the ability to annotate warehouses with cost center information for tracking, or think of annotating tables and columns with sensitivity classifications. Our tagging capability will enable the creation of company-specific business annotations for objects in Snowflake's platform.
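The tagging framework described above amounts to attaching key/value metadata to named objects and then querying objects by tag. A minimal sketch, where the object names mirror the examples in the text (cost centers on warehouses, sensitivity on columns) but the API itself is invented for illustration:

```python
# Object name -> {tag key: tag value}
tags = {}

def set_tag(obj, key, value):
    """Attach or update a tag on a named object."""
    tags.setdefault(obj, {})[key] = value

def objects_with(key, value):
    """Find every object carrying a given tag value, e.g. for audits."""
    return sorted(o for o, t in tags.items() if t.get(key) == value)

set_tag("wh_reporting", "cost_center", "finance")
set_tag("claims.ssn", "sensitivity", "pii")
set_tag("claims.shop_name", "sensitivity", "public")
```

The payoff is exactly the lookup direction shown last: an auditor can ask "which columns are tagged PII?" instead of scanning every schema by hand.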
Another key aspect of data governance in Snowflake is our policy-based framework, where you specify what you want to be true about your data, and Snowflake enforces those policies. We announced one such policy earlier this year, our dynamic data masking capability, which is now available in public preview. Today, we are announcing a great complementary policy to achieve row-level security. To see how row-level security can enhance InsureCo's ability to govern and secure data, I'll hand it over to Artin for a demo. >> Hello, I'm Artin Avanes, Director of Product Management for Snowflake. As Christian has already mentioned, the rise of the Data Cloud greatly accelerates the ability to access and share diverse data, leading to greater data collaboration across teams and organizations. Controlling data access with ease and ensuring compliance at the same time is top of mind for users. Today, I'm thrilled to announce our new row access policies that will allow users to define various rules for accessing data in the Data Cloud. Let's check back in with Insureco to see some of these in action and highlight how those work with other existing policies one can define in Snowflake. Because Insureco is a multinational company, it has to take extra measures to ensure data across geographic boundaries is protected to meet a wide range of compliance requirements. The Insureco team has been asked to segment what data sales team members have access to based on where they are regionally. In order to make this possible, they will use Snowflake's row access policies to implement row-level security. We are going to apply policies for three of Insureco's sales team members with different roles. Alice, an executive, must be able to view sales data from both North America and Europe. Alex, a North America sales manager, will be limited to access sales data from North America only. And Jordan, a Europe sales manager, will be limited to access sales data from Europe only.
As a first step, the security administrator needs to create a lookup table that will be used to determine which data is accessible based on each role. As you can see, the lookup table has the role and its associated region, both of which will be used to apply the policies that we will now create. Row access policies are implemented using standard SQL syntax to make it easy for administrators to create policies like the one our administrator is looking to implement. And similar to masking policies, row access policies leverage our flexible and expressive policy language. In this demo, our admin creates a row access policy that uses the role and region of a user to determine what row-level data they have access to when queries are executed. When users' queries are executed against a table protected by such a row access policy, Snowflake's query engine will dynamically generate and apply the corresponding predicate to filter out rows the user is not supposed to see. With the policy now created, let's log in as our sales users and see if it worked. Recall that as a sales executive, Alice should have the ability to see all rows from North America and Europe. Sure enough, when she runs her query, she can see all rows, so we know the policy is working for her. You may also have noticed that some columns are showing masked data. That's because our administrator is also using our previously announced data masking capabilities to protect these data attributes for everyone in sales. When we look at our other users, we should notice that the same columns are also masked for them. As you see, you can easily combine masking and row access policies on the same data sets. Now let's look at Alex, our North American sales manager. Alex runs the same query as Alice; row access policies leverage the lookup table to dynamically generate the corresponding predicates for this query. The result is we see that only the data for North America is visible.
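The mechanics just described, a role-to-region lookup table feeding a predicate that is applied at query time, can be sketched minimally. Role and region values follow the demo; everything else is an invented illustration, not Snowflake's policy syntax:

```python
# The administrator's lookup table: which regions each role may see.
role_regions = {
    "executive": {"North America", "Europe"},
    "na_manager": {"North America"},
    "eu_manager": {"Europe"},
}

sales = [
    {"region": "North America", "amount": 100},
    {"region": "Europe", "amount": 200},
]

def query_sales(role):
    """Simulate the engine applying the policy's generated predicate for this role."""
    allowed = role_regions.get(role, set())
    return [row for row in sales if row["region"] in allowed]
```

The key design point is that the filter is attached to the table, not to each query: every user runs the identical query and the engine injects the right predicate for their role.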
Notice too that the same columns are still masked. Finally, let's try Jordan, our European sales manager. Jordan runs the query, and the result is only the data for Europe, with the same columns also masked. In June we introduced masking policies; today you saw row access policies in action. And similar to our masking policies, row access policies in Snowflake will be a first-class capability, integrated seamlessly across all of Snowflake: everywhere you expect it to work, it does. If you're accessing data stored in external tables, semi-structured JSON data, or building data pipelines via streams, or plan to leverage Snowflake's data sharing functionality, you will be able to implement complex row access policies for all these diverse use cases and workloads within Snowflake. And with Snowflake's unique replication feature, you can instantly apply these new policies consistently to all of your Snowflake accounts, ensuring governance across regions and even across different clouds. In the future, we plan to demonstrate how to combine our new tagging capabilities with Snowflake's policies, allowing advanced audit and enforcing those policies with ease. And with that, let's pass it back over to Christian. >> Thank you Artin. We look forward to making these new tagging and row-level security capabilities available in private preview in the coming months. One last note on the broad area of data governance. A big aspect of the Data Cloud is the mobilization of data to be used across organizations. At the same time, privacy is an important consideration to ensure the protection of sensitive, personal or potentially identifying information. We're working on a set of product capabilities to simplify compliance with privacy-related regulatory requirements, and simplify the process of collaborating with data while preserving privacy.
Earlier this year, Snowflake acquired a company called CryptoNumerics to accelerate our efforts on this front, including the identification and anonymization of sensitive data. We look forward to sharing more details in the future. We've just shown you three demos of new and exciting ways to use Snowflake. However, I want to also remind you that our commitment to the core platform has never been greater. As you move workloads on to Snowflake, we know you expect exceptional price performance and continued delivery of new capabilities that benefit every workload. On price performance, we continue to drive performance improvements throughout the platform. Let me give you an example, comparing an identical set of customer-submitted queries that ran both in August of 2019 and August of 2020. If I look at the set of queries that took more than one second to compile, 72% of those improved by at least 50%. When we make these improvements, execution time goes down. And by implication, the required compute time is also reduced. Based on our pricing model to charge for what you use, performance improvements not only deliver faster insights, but also translate into cost savings for you. In addition, we have two new major announcements on performance to share today. First, we announced our search optimization service during our June event. This service, currently in public preview, can be enabled on a table-by-table basis, and is able to dramatically accelerate lookup queries on any column, particularly those not used as clustering columns. We initially support equality comparisons only, and today we're announcing expanded support for searches within values, such as pattern matching within strings. This will unlock a number of additional use cases, such as analytics on log data for performance or security purposes. This expanded support is currently being validated by a few customers in private preview, and will be broadly available in the future.
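The intuition behind accelerating lookup queries is an auxiliary structure built once so that equality lookups avoid a full table scan. This sketch uses a plain dict as the index; it only illustrates the idea and is nothing like Snowflake's actual search optimization data structures:

```python
def build_index(rows, column):
    """One pass over the table: map each value to the row positions holding it."""
    index = {}
    for i, row in enumerate(rows):
        index.setdefault(row[column], []).append(i)
    return index

logs = [
    {"ip": "10.0.0.1", "msg": "login ok"},
    {"ip": "10.0.0.2", "msg": "login failed"},
    {"ip": "10.0.0.1", "msg": "logout"},
]
ip_index = build_index(logs, "ip")

# An equality lookup now touches only the matching rows, not the whole table.
matches = [logs[i] for i in ip_index.get("10.0.0.1", [])]
```

Pattern matching within strings (the expanded support mentioned above) needs richer structures than a value map, which is why it arrives as a separate capability.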
Second, I'd like to introduce a new service that will be in private preview in a future release: the query acceleration service. This new feature will automatically identify and scale out parts of a query that could benefit from additional resources and parallelization. This means that you will be able to realize dramatic improvements in performance. This is especially impactful for data science and other scan-intensive workloads. Using this feature is pretty simple. You define a maximum amount of additional resources that can be recruited by a warehouse for acceleration, and the service decides when it would be beneficial to use them. Given enough resources, a query over a massive data set can see orders of magnitude performance improvement compared to the same query without acceleration enabled. In our own usage of Snowflake, we saw a common query go 15 times faster without changing the warehouse size. All of these performance enhancements are extremely exciting, and you will see continued improvements in the future. We love to innovate and continuously raise the bar on what's possible. More important, we love seeing our customers adopt and benefit from our new capabilities. In June, we announced a number of previews, and we continue to roll those features out and see tremendous adoption, even before reaching general availability. Two of those announcements were the introduction of our geospatial support and policies for dynamic data masking. Both of these features are currently in use by hundreds of customers. The number of tables using our new geography data type recently crossed the hundred thousand mark, and the number of columns with masking policies also recently crossed the same hundred thousand mark. This momentum and level of adoption since our announcements in June is phenomenal. I have one last announcement to highlight today.
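The scale-out idea behind the service, split a scan into chunks and cap the extra parallelism at a configured maximum, has this general shape. Real query acceleration happens inside Snowflake's engine; this is only a toy showing the bounded fan-out pattern, with an even-sum standing in for a heavy scan:

```python
from concurrent.futures import ThreadPoolExecutor

def scan_chunk(chunk):
    """Stand-in for a scan-intensive operation on one slice of the data."""
    return sum(x for x in chunk if x % 2 == 0)

def accelerated_scan(data, max_extra_workers=4):
    """Fan the scan out across at most max_extra_workers parallel chunks."""
    n = max(1, min(max_extra_workers, len(data)))
    size = (len(data) + n - 1) // n
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        return sum(pool.map(scan_chunk, chunks))

total = accelerated_scan(list(range(100)))
```

The cap mirrors the "maximum amount of additional resources" knob: the service can use up to that much extra parallelism, and only when it judges the query would benefit.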
In 2014, Snowflake transformed the world of data management and analytics by providing a single platform with first-class support for both structured and semi-structured data. Today, we are announcing that Snowflake will be adding support for unstructured data on that same platform. Think of the ability to use Snowflake to store, access and share files. As an example, would you like to leverage the power of SQL to reason through a set of image files? We have a few customers as early adopters, and we'll provide additional details in the future. With this, you will be able to leverage Snowflake to mobilize all your data in the Data Cloud. Our customers rely on Snowflake as the data platform for every part of their business. However, the vision and potential of Snowflake is actually much bigger than the four walls of any organization. Snowflake has created the Data Cloud, a data-connected network with a vision where any Snowflake customer can leverage and mobilize the world's data. Whether it's data sets or data services from traditional data providers or SaaS vendors, our marketplace creates opportunities for you and raises the bar in terms of what is possible. As examples, you can unify data across your supply chain to accelerate your time and quality to market. You can build entirely new revenue streams, or collaborate with a consortium on data for good. The possibilities are endless. Every company has the opportunity to gain richer insights, build greater products and deliver better services by reaching beyond the data that it owns. Our vision is to enable every company to leverage the world's data through seamless and governed access. Snowflake is your window into this data network, into this broader opportunity. Welcome to the Data Cloud. (upbeat music)
Param Kahlon, UiPath | Microsoft Ignite 2019
>>Live from Orlando, Florida, it's theCUBE, covering Microsoft Ignite. Brought to you by Cohesity. >>Welcome back, everyone, to theCUBE's live coverage of Microsoft Ignite here at the Orange County Convention Center in Orlando, Florida. I'm your host, Rebecca Knight, along with my co-host, Stu Miniman. We're joined by Param Kahlon. He is the chief product officer at UiPath. Thank you so much for coming on theCUBE. >>Thank you so much for coming back on theCUBE. >>Thank you. >>So I was just at UiPath with you in Vegas a couple of weeks ago, and the UiPath tagline is "a robot for every employee." Microsoft's tagline is empowering every employee to be a technologist, empowering citizen developers. Does it strike you that the two missions are similar in their way? >>That's absolutely right. I think we have so much in common between our companies, and I think we're working very closely together, not just in our technology but also in what we're trying to achieve, which is to make people achieve more. Amplifying human achievement is a core mission of our company, and we're very excited that Microsoft shares the same mission. >>Yeah, it really does connect with what Satya talked about this morning, that 61% of job openings for developers are outside the tech sector. And of course, UiPath is really trying to help. But this is productivity overall, with everything you're doing. >>Absolutely, and productivity is where we focus our technology primarily. In fact, a lot of the focus is around how do we actually get people to do more with less time, so they can have more time for the creative parts of their work, as opposed to doing the mundane parts. So, yeah, productivity is really important to us as a company. That's what we think about every day. >>Could you bring us inside the relationship between Microsoft and UiPath?
Yeah, so we're deeply partnered with Microsoft, and today most of our technology is built on the Microsoft stack, on .NET. Our databases all run on SQL Server, and our cloud service runs on Microsoft Azure. So we are very deeply partnered. We've helped Microsoft build a lot of AI services around document extraction; Forms Recognizer, with Chevron as one of the first customers, is one we worked on together with Microsoft. So it's a very deep partnership with Microsoft. >>Okay, so let me ask you a question, actually, as a customer of Microsoft. Why is everything built on Microsoft, from, you know, .NET through the infrastructure as a service? Why did UiPath choose Microsoft? >>I think it made a lot of sense. Microsoft's focus on productivity, Microsoft's focus on enabling developers to do stuff quickly, helped. It also helped that a lot of the founders, myself included, came from Microsoft, so we have a lot of experience with Microsoft. I think part of that helped as well. >>Does it help or hurt, when you are then pitching your services, that it is a much more Microsoft-focused company? >>So I think we've grown over the years to have a much broader ecosystem. We have more than 500 partners now. We work with Google; Google is a customer, it's an investor, and it's also a very deep partner. A lot of our AI services we're building with Google, and we're partnered with AWS as well. So I think we're working with all the ways our customers are today. But I think we still have a very close relationship with Microsoft, given our heritage, given where we started. >>Yeah, I actually went to the UiPath Forward event last year and had not realized how deep that connection was with Microsoft. I see UiPath across all the clouds. So there was a little mention of RPA this morning in the keynote, the Power Automate solution coming out from Microsoft. Of course, everyone seems to have an RPA.
Out there, you know, all the big software houses are out there. Tell us what this means in the marketplace. >>Yes, listen, RPA is a very fast-growing market. It's the fastest-growing enterprise software category today. And when you grow so fast, it's good for the business, but it also attracts attention. I think getting somebody like Microsoft to sort of say that they're in it as well only helps solidify the foundation, solidify the category, and it brings a lot more, you know, credibility to this category. So I think we're excited to have Microsoft here as well. >>And in terms of, as you were saying, two companies that are very much focused on workplace productivity, employee collaboration, and being able to be more creative with the time that you have, how much is that cultural alignment? How much does that help your partnership? >>I think it helps the partnership a lot. So you know, when we, for example, meet with the Office team, they think deeply about helping people do more with less time. You know, we think about the same things as well. So if you notice, some of the newer products that we've launched are very deeply integrated into Office. In fact, we took a lot of inspiration from products like Excel, to be able to say that business people who are able to, you know, build some very sophisticated, complex business models in Excel should be able to do similar stuff with our products as well. So we continue to work with Microsoft on collaboration across these teams. And in general, as I mentioned, we have a close relationship with Microsoft, so when Microsoft brings us into opportunities and a deal closes, it actually retires quota for Microsoft sellers as well. So I think all of that alignment really helps. >>I would love to hear about joint customers. You know, what brings customers to UiPath at a show here? What are some of the key drivers for the discussions that you're having this week? >>Yeah.
I mean, through the years we've got over 5,000 customers that work with us, large enterprises, from very large banks to companies like Chevron. Chevron in particular is one of those customers that's a very, very deep customer of Microsoft, but also a very strong customer of ours. And there's a specific use case at Chevron: Chevron wanted to extract data from their oil field service reports. They were getting more than 1,000 oil rig reports coming in every day, with about 300 pages per report on average. Somebody had to manually go in, physically read those reports, and put them into the SAP system so that you could predict if there was preventive maintenance that was required. Working together with Microsoft, we were able to take a service that Microsoft was building, an AI service called Forms Recognizer, and take it to preview and alpha with customers, so that Chevron is now able to have all of those reports read by UiPath robots, which automatically punch the data into, you know, the SAP preventive maintenance applications, so that you can actually ship the engineer on site before you know that something has happened to the oil rig. So I think that's a pretty cool scenario. >>And there's another similarity between UiPath and Microsoft: this customer obsession. And this is something that you talked a lot about at UiPath Forward, this spending time with customers, learning how they would use RPA, and then also thinking ahead of them in terms of how they could use RPA. How do you work with customers and Microsoft together in partnership, in terms of how do you find out exactly what their needs are and the joint solutions you could provide? >>Yeah, that's a really good question. Microsoft has been very obsessed with, you know, driving customer obsession in all parts of the organization, and we culturally have a really deep obsession with working closely with customers.
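The Chevron scenario described a moment ago can be sketched as a two-step pipeline: an extraction step reads a free-text service report, then a "robot" step turns the result into a preventive-maintenance action. Everything concrete here (the report layout, field names, and vibration threshold) is invented for illustration; the real solution used UiPath robots with Microsoft's Forms Recognizer service, not regexes.

```python
import re
from typing import Optional

# Invented example of an oil rig service report.
REPORT = """
Rig: RIG-042
Inspection date: 2019-10-14
Pump vibration: 7.8 mm/s
Notes: bearing wear observed on mud pump 2
"""

def extract_fields(text: str) -> dict:
    """Stand-in for the AI extraction step: pull key fields with regexes."""
    patterns = {
        "rig": r"Rig:\s*(\S+)",
        "date": r"Inspection date:\s*([\d-]+)",
        "vibration_mm_s": r"Pump vibration:\s*([\d.]+)",
    }
    return {
        name: m.group(1)
        for name, pat in patterns.items()
        if (m := re.search(pat, text))
    }

def maintenance_order(fields: dict, limit: float = 7.1) -> Optional[dict]:
    """Stand-in for the robot step: raise an order when a reading is out of range."""
    if float(fields["vibration_mm_s"]) > limit:
        return {"rig": fields["rig"], "action": "dispatch engineer",
                "reported": fields["date"]}
    return None  # reading in range, nothing to schedule

order = maintenance_order(extract_fields(REPORT))
print(order)  # the 7.8 mm/s reading exceeds the invented 7.1 mm/s limit
```

The payoff described in the interview is the last step: instead of a person re-keying 300-page reports into SAP, the structured record is produced automatically and the engineer is dispatched before a failure.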
And I think, so, Microsoft has MTC meet-the-customer sessions around the world, and we work closely with Microsoft to make sure that our technology can be showcased by Microsoft people in those MTC sessions, so that when customers come in, they're able to see not only Microsoft technology but also our technology. And if they're interested, then our sales teams work collaboratively together to make sure we can, you know, have a joint session, then planning and working with customers. >>So I had a chat earlier this year with your CMO, Bobby Patrick, talking about how AI and RPA go together. You own the product, so will AI be able to allow RPA to get into more complex configurations? Give us where we are, and, you know, what's new in that space? >>Yeah, no, absolutely. So, like, the first wave of RPA was all about taking sort of structured processes, you know, loading data from Excel sheets, reading data from APIs, and being able to process it in different systems. Now, humans don't always work with that. 10% of what we do on a daily basis is structured data, right, spreadsheets and stuff. 90% of what we do is reading spreadsheets, extracting information from papers, responding to, you know, chat conversations. All of that unstructured information can now be processed by AI algorithms: to extract the intent of the chat conversation, to extract the data that's in that unstructured document we just received, to use computer vision to detect what is on the computer screen, so that you're able to detect a control whether it's rendered in the browser or rendered in a Windows application. So AI brings the possibility to automate a lot more complex processes within the organization, you know, mimicking sort of more human-like behavior. So the robots are not just doing the numbers and structured data, but are able to process unstructured information.
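The "extract the intent of the chat conversation" idea above can be illustrated with a toy sketch. Real products use trained language models; this keyword-overlap stand-in (the intents and keyword lists are invented here) only shows the shape of the task: unstructured text in, a structured label out that a robot can act on.

```python
import re

# Invented intents and keywords; a real system would use a trained model.
INTENTS = {
    "reset_password": {"password", "reset", "locked", "login"},
    "order_status": {"order", "shipped", "tracking", "delivery"},
    "cancel_service": {"cancel", "unsubscribe", "terminate"},
}

def classify_intent(message: str) -> str:
    # Tokenize, then score each intent by keyword overlap and take the best.
    words = set(re.findall(r"[a-z']+", message.lower()))
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("Hi, I'm locked out and need a password reset"))  # reset_password
```

Once the intent is a label rather than free text, it becomes the kind of structured input the first wave of RPA already knew how to route.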
>>Well, will AI help at all in trying to understand what can I automate? >>Absolutely. And that's the other piece of being able to use process-understanding capability. So what we've done is we've built capability that's able to follow human activity logs and how people are using systems, but also how the databases are getting updated by different applications, and to mine that information to understand how work is getting done in the enterprise, and to understand what are the scenarios and possibilities for automating more business processes. That's one of the key benefits of how AI and process mining can be applied in the context of RPA. >>There are so many product announcements today. On the main stage there's an 87-page book that we were sent from the Microsoft comms team. What are the most exciting things you've seen here today? >>I think I'm really excited about some of the innovation that Microsoft is doing in the analytics stack, to be able to report on, you know, the data warehouse but also big data, together in one stack. I think that's really powerful. That is something that our customers have been very interested in, because robots produce structured logs but also unstructured logs. I'm also excited about some of the AI investments that Microsoft is making. I think some of the AI capabilities are really coming to practical use. A lot of companies talk about AI; we've applied AI practically in our technology for a long time, but I think a lot more technology is now available for us to use in our products. >>Okay, Param, there's a recent acquisition, ProcessGold. Could you tell us a little bit about that? What are the plans for that? >>Absolutely. ProcessGold is a company based in Germany and the Netherlands, and this is a company that was focused on process understanding, on process mining. Essentially, what they had was connectors to different line-of-business applications, and the ability to sit and study logs of how work was getting done over long periods of time. So what happens is, if you went to a line-of-business owner and asked them, "What does your process for procure-to-pay look like, or order-to-cash look like?", chances are they'll draw you a straight line. That's how they see the process. However, when you look at how work is actually getting done, it's typically not a straight line. And depending on how many variations you're looking at, you can get up to, like, you know, 15 or 20 different variations of the same process being done. So what ProcessGold does is identify the different ways in which processes are getting done, and identify where the bottlenecks exist in the process, right? How long is step one? How long is the time between step two and step three, right? Is that taking 25% of the total time? And is there a way to optimize the process by eliminating that bottleneck? And once you've optimized the process, it also gives you the ability to go automate that optimized process, right? You don't want to automate a process that is suboptimal. You want to go understand the process, see how work is getting done, find and eliminate the bottlenecks, optimize the process, and then go automate that. And ProcessGold really helps us cater to that need, which is to go automate, you know, the best possible, optimized version of the process. >>In terms of Microsoft's use of things like AI and ML, now, we have not really talked a lot about ML here. I mean, it was mentioned on the main stage, but not a lot. What do you think the future holds in terms of Microsoft in the next 5 to 10 years? >>Yeah, I mean, I think I see Microsoft investing a lot in data and really being able to get all kinds of data, because ML is useful only after it's able to reason over tons of data.
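The variant-and-bottleneck analysis described above can be sketched in a few lines: given an event log of (case, activity, timestamp) rows, count the distinct activity sequences ("variants") and measure the average wait before each step; the longest wait is the bottleneck. The event data here is invented, and this is only a minimal stand-in for what a product like ProcessGold derives from real line-of-business application logs.

```python
from collections import defaultdict
from datetime import datetime

# Invented event log for a tiny procure-to-pay process: (case, activity, timestamp).
EVENT_LOG = [
    ("PO-1", "create_po",  "2019-06-01 09:00"),
    ("PO-1", "approve_po", "2019-06-01 10:00"),
    ("PO-1", "pay",        "2019-06-03 10:00"),
    ("PO-2", "create_po",  "2019-06-02 09:00"),
    ("PO-2", "approve_po", "2019-06-02 09:30"),
    ("PO-2", "rework",     "2019-06-02 11:30"),
    ("PO-2", "approve_po", "2019-06-02 12:00"),
    ("PO-2", "pay",        "2019-06-04 12:00"),
]

def variants(log):
    """Group events by case and count each distinct activity sequence."""
    cases = defaultdict(list)
    for case, activity, ts in sorted(log, key=lambda e: e[2]):
        cases[case].append(activity)
    counts = defaultdict(int)
    for seq in cases.values():
        counts[tuple(seq)] += 1
    return dict(counts)

def bottleneck(log):
    """Mean wait (hours) before each activity; the largest is the bottleneck."""
    cases = defaultdict(list)
    for case, activity, ts in sorted(log, key=lambda e: e[2]):
        cases[case].append((activity, datetime.strptime(ts, "%Y-%m-%d %H:%M")))
    waits = defaultdict(list)
    for events in cases.values():
        for (_, t0), (act, t1) in zip(events, events[1:]):
            waits[act].append((t1 - t0).total_seconds() / 3600)
    means = {act: sum(v) / len(v) for act, v in waits.items()}
    return max(means, key=means.get), means

print(variants(EVENT_LOG))
print(bottleneck(EVENT_LOG)[0])
```

In this toy log the two cases follow two different variants (one with a rework loop), and the long wait before "pay" is the bottleneck, which mirrors the "where is 25% of the total time going" question in the interview.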
And Microsoft is rightfully investing in the data repositories and stores, so that it has the ability to store that data and to process that data. And once it's got the data and the data assets, then it's able to go create the algorithms that can reason over that data and create that stuff. And I think that's really exciting, because Microsoft has a lot of the horsepower to be able to not only store that data, but process that data efficiently so it can be used in machine learning. >>Param, thank you so much for coming on theCUBE. It was a pleasure talking to you. >>Thank you. It was a pleasure to be here. Thank you very much. >>I'm Rebecca Knight, for Stu Miniman. Stay tuned for more of theCUBE's live coverage of Microsoft Ignite.