
Duncan Lennox | AWS Storage Day 2021


 

>>Welcome back to theCUBE's continuous coverage of AWS Storage Day. We're in beautiful downtown Seattle in the great Northwest. My name is Dave Vellante, and we're going to talk about file systems. File systems are really tricky, and making those file systems elastic is even harder. They've got a long history of serving a variety of use cases. With me is Duncan Lennox, who's the general manager of Amazon Elastic File System. Duncan, good to see you again. >>Dave, good to see you. >>So tell me more about, specifically, Amazon's Elastic File System, EFS. You've got a broad file portfolio, but let's narrow in on that. What do we need to know? >>Yeah, well, Amazon Elastic File System, or EFS as we call it, is our simple, serverless, set-and-forget elastic file system service. What we mean by that is we deliver something that's extremely simple for customers to use: there are not a lot of knobs and levers they need to turn or pull to make it work or to manage it on an ongoing basis. The serverless part is that there's absolutely no infrastructure for customers to manage; we handle that entirely for them. The elastic part is that the file system automatically grows and shrinks as they add and delete data, so they never have to provision storage or risk running out of storage, and they pay only for the storage they're actually using. >>What are the sort of use cases and workloads that you see EFS supporting? >>Yeah, it has to support a broad set of customer workloads. It's everything from serial, highly latency-sensitive applications that customers might be running on-prem today and want to move to the AWS cloud, up to massively parallel scale-out workloads as well. >>Okay. Are there any industry patterns that you see around that? Are there industries that lean in more, or is it more across the board? >>We see it across the board, although I'd have to say we see a lot of adoption within compliance and regulated industries. A lot of that is because of not only our simplicity, but the high levels of availability and durability we bring to the file system as well. The data is designed for 11 nines of durability, so essentially you don't need to worry about anything happening to your data. And it's a regional service, meaning your file system is available from all availability zones in a particular region, for high availability. >>As part of Storage Day, we saw some new tiering announcements. What can you tell us about those? >>We're super excited to be announcing EFS Intelligent-Tiering. This is a capability we're bringing to EFS that allows customers to automatically get the best of both worlds and get cost optimization for their workloads. How it works is that the customer selects, using our lifecycle management capability, a policy for how long they want their data to remain in one of our active storage classes: seven days, for example, or 30 days. We automatically monitor every access to every file they have, and if we see no access to a file for their policy period, we automatically and transparently move that file to one of our cost-optimized storage classes, so they can save up to 92% on their storage costs. One of the really cool things about Intelligent-Tiering is that if that data ever becomes active again, and their workload, their application, or their users need to access it, it's automatically moved back to a performance-optimized storage class. This is all completely transparent to their applications and users. >>So how does that work? Are you using some kind of machine intelligence to monitor things and just learn over time? And what if my policy isn't quite right?
Or maybe I have a quarter-end, or maybe twice a year, you know, I need access to that data. Can the system help me figure that out? >>Yeah, the beauty of it is you don't need to know how your application or workload is accessing the file system, or worry about those access patterns changing. We'll take care of monitoring every access to every file, and we'll move each file either to the cost-optimized storage class or back to the performance-optimized class as needed by your application. >>And the optimized storage class is, again, selected by the system? I don't have to? >>That's right, it's completely transparent; we take care of that for you. You set the policy by which you want active data moved to the infrequent-access, cost-optimized storage class, like 30 days or seven days, and you can set a policy that says, if that data is ever touched again, move it back to the performance-optimized storage class. All of that then happens automatically by the service on our side; you don't need to do anything. >>And it's serverless, which means what? I don't have to provision any compute infrastructure? >>That's right. What you get is an endpoint: the ability to mount your file system using NFS. You can also use your file system from any of our compute services in AWS, so not only directly on an instance, but also from our serverless compute models like AWS Lambda and Fargate, and from our container services like ECS and EKS. All of the infrastructure is completely managed by us; you don't see it, you don't need to worry about it, and we scale it automatically for you. >>What was the catalyst for all this? I mean, you've got to tell me it's customers, but maybe you could give me some insight and add some color. If you decoded what the customers were saying, did you get inputs from a lot of different places that you had to put together and shape?
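(An illustrative aside, not from the interview.) The policy Duncan describes maps onto EFS lifecycle management in the AWS SDK. A rough sketch with boto3 might look like the following; the file system ID is a placeholder and the helper function is our own invention, but the `TransitionToIA` and `TransitionToPrimaryStorageClass` values mirror the seven-day and access-on-touch rules discussed above.

```python
# Sketch: build the lifecycle policy EFS Intelligent-Tiering relies on.
# The helper and its validation are illustrative, not an AWS API.

def build_lifecycle_policies(days_until_ia: int) -> list:
    """Build the LifecyclePolicies payload for put_lifecycle_configuration."""
    if days_until_ia not in (7, 14, 30, 60, 90):
        raise ValueError("EFS only accepts specific AFTER_N_DAYS values")
    return [
        # Move a file to the infrequent-access class after N days untouched.
        {"TransitionToIA": f"AFTER_{days_until_ia}_DAYS"},
        # Move it back to the performance-optimized class on first access,
        # which is what makes the tiering transparent to applications.
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ]

# Applying it would look roughly like this (needs AWS credentials, so
# commented out; the file system ID is a placeholder):
# import boto3
# efs = boto3.client("efs")
# efs.put_lifecycle_configuration(
#     FileSystemId="fs-0123456789abcdef0",
#     LifecyclePolicies=build_lifecycle_policies(7),
# )
```

The point of the sketch is that the whole "knob surface" is one declarative policy; everything else (monitoring accesses, moving files) happens on the service side.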
Tell us, take us inside how you came to where you are today. >>Well, you know, at the end of the day, when you think about storage, and particularly file system storage, customers always want more performance and they want lower cost. So we're constantly optimizing on both of those dimensions: how can we find a way to deliver more value at lower cost to customers, while also meeting the performance needs of their workloads? And what we found in talking to customers, particularly the customers EFS targets, is that they're application administrators, they're DevOps practitioners, they're data scientists. They have a job they want to do; they're not typically storage specialists. They don't want to have to know, or learn, a lot about the bowels of storage architecture and how to optimize for what their applications need. They want to focus on solving their business problems, whatever those are. >>Meaning, for instance: the tiering is obvious, you're tiering to lower-cost storage. Serverless means I'm not provisioning servers myself; I'm just paying for what I use. The elasticity is a factor, so I'm not having to over-provision. And I think I'm also hearing that I don't have to spend my time turning knobs. You've talked about that before, and I don't know exactly how much time is spent tuning systems, but it's got to be at least 15 to 20% of a storage admin's time; you're eliminating that as well. Is that what you mean by cost optimization? >>Absolutely. We're providing the scale, capacity, and performance that customer applications need, as they need it, without the customer having to know exactly how to configure the service to get it. We're dealing with changing workloads and changing access patterns, and we're optimizing their storage costs.
All at the same time. >>When you guys step back and get the whiteboard out, what's the north star you're working toward? Because, you know, once you set the north star you don't want to keep revisiting it; how you get there might change, but what's your north star? Where do you see the future? >>Yeah, it's really all about delivering simple file system storage that just works. That sounds really easy, but there's a lot of nuance and complexity behind it. Customers don't want to have to worry about how it works; they just need it to work. Our goal is to deliver that for a super broad cross section of applications, so customers don't need to worry about how to performance-tune or cost-optimize; we deliver that value for them. >>So I'm going to follow up on that, because I feel like, you know, when you listen to Werner Vogels talk, he takes you inside the plumbing sometimes. So what is that plumbing? Because you're right, it sounds simple, but it's not, and as I said up front, with file systems, getting that right is really, really challenging. So technically, what are the challenges? Is it doing this at scale, and having a consistent experience for customers? >>There's always a challenge to doing what we do at scale. The elasticity is something we provide to our customers, but ultimately we have to take their data as bits and put them onto atoms at some point, so we're managing infrastructure on the back end to support that. We also have to do that in a way that delivers something cost-effective for customers. So there's a balance, a natural tension, between things like elasticity and simplicity, performance, cost, availability, and durability; getting that balance right, and being able to cover the maximum cross section
of all those things, for the widest set of workloads. We see that as our job, and that's how we deliver value for our customers. >>And of course, a big part of that is taking away the need for tuning, but you've got to get it right. I mean, you can't optimize for every single use case, right? But you can give enough granularity to allow those use cases to be supported, and that seems to be the balancing act you guys are managing. >>Absolutely. It's focused on being a general-purpose file system that's going to work for a broad cross section of applications and workloads. >>Right, and that's what customers want; generally speaking, you go after that. Duncan, I'll give you the last word. >>I'd just encourage people to come and try out EFS. It's as simple as a single click in our console to create a file system and get started. So come give it a try. >>Duncan, thanks so much for coming back to theCUBE. It's great to see you again. >>Thanks, Dave. >>All right, and keep it right there for more great content from AWS Storage Day in Seattle.

Published: Sep 2, 2021



Ajay Vohora and Duncan Turnbull | Io-Tahoe ActiveDQ Intelligent Automation for Data Quality


 

>>From around the globe, it's theCUBE, presenting ActiveDQ, intelligent automation for data quality, brought to you by Io-Tahoe. >>Now we're going to look at the role automation plays in mobilizing your data on Snowflake. Let's welcome Duncan Turnbull, who's a partner sales engineer at Snowflake; and Ajay Vohora, CEO of Io-Tahoe, is back to share his insight. Gentlemen, welcome. >>Thank you, David. Good to be back. >>It's great to have you back. Ajay, it's really good to see Io-Tahoe expanding the ecosystem; that's so important. Now, of course, you're bringing in Snowflake, and it looks like you're really starting to build momentum. I mean, there's progress that we've seen every month, month by month, over the past 12, 14 months. Your seed investors, they've got to be happy. >>They are happy, and they can see that we've moved into a nice phase of expansion here, with new customers signing up, and now we're ready to go out and raise that next round of funding. Think of Snowflake five years ago; we're definitely on track with that. There's a lot of interest from investors, and right now we're trying to focus in on those investors that can partner with us and understand AI, data, and automation. >>So personally, I mean, you've managed a number of early-stage VC funds, I think four of them, and you've taken several software companies through many funding rounds and growth, all the way to exit. So you know how it works: you have to get product-market fit, you've got to make sure you get your KPIs right, and you've got to hire the right salespeople. But what's different this time around? >>Well, you know, the fundamentals that you mentioned, those never change. What I can say is different, what's shifted this time around, is three things. One is that there used to be this kind of choice of, do we go open source, or do we go proprietary?
Now that has turned into a nice hybrid model, where we've really keyed into, you know, Red Hat doing something similar with CentOS. The idea here is that there's a core capability of technology that underpins a platform, but it's the ability to then build an ecosystem around that, a community. That community may include customers, technology partners, and other tech vendors, enabling platform adoption so that all of those folks in the community can build and contribute, while still maintaining the core architecture and platform integrity. That's one thing that's changed: we're seeing a lot of that type of software company emerge into that model, which is different from five years ago. Then there's leveraging the cloud, every cloud, the Snowflake cloud being one of them here, in order to make use of what customers and enterprise software are moving towards: every CIO is now in some configuration of a hybrid IT estate, whether that's cloud, multi-cloud, or on-prem. That's just the reality. The other piece, in dealing with the CIO, is legacy. Over the past 15, 20 years they've purchased many different platforms and technologies, and some of those are still established. So how do you enable that CIO to make a purchase while still preserving, and in some cases building on and extending, the legacy technology? They've invested their people's time and training, and financial investment, into solving a problem, a customer pain point, with technology, and that never goes out of fashion. >>That never changes; you have to focus like a laser on that. And of course, speaking of companies who are focused on solving problems: Duncan Turnbull from Snowflake.
You guys have really done a great job, really brilliantly addressing pain points, particularly around data warehousing; you've simplified that, and you're providing this new capability around data sharing, which is really quite amazing. Duncan, Ajay talks about data quality and customer pain points in enterprise IT. Why has data quality been such a problem historically? >>One of the biggest challenges that's really affected IT in the past is that, to address everyone's need for using data, organizations have evolved all these different places to store it: silos, data marts, this whole proliferation of places where data lives. All of those end up with slightly different schedules for bringing data in and out, slightly different rules for transforming and formatting that data and getting it ready, and slightly different quality checks for making use of it. This becomes a big problem, because those different teams are then going to have slightly different, or even radically different, answers to the same kinds of questions, which makes it very hard for teams to work together on the data problems that exist inside the business, depending on which of these silos they end up looking at. What you can do, if you have a single, scalable system for putting all of your data into, is sidestep a lot of this complexity and address the data quality issues in a single way. >>Now, of course, we're seeing this huge trend in the market towards robotic process automation, and RPA adoption is accelerating. You saw UiPath's IPO, you know, a 35-plus-billion-dollar valuation; Snowflake-like numbers, nice comps there, for sure. Ajay, you've coined the phrase "data RPA." What is that, in simple terms?
>>Yeah, I mean, it was born out of seeing how, in our ecosystem and community, developers, customers, and general business users wanted to adopt and deploy Io-Tahoe's technology. We're not trying to automate the robotic piece; but wherever there was a process tied to some form of manual overhead, with handovers and so on, that process is something we were able to automate with Io-Tahoe's technology, deploying AI and machine learning specifically against those data processes, almost as a precursor to getting into financial automation. That's really where we're seeing the momentum pick up, especially in the last six months. And we've kept it really simple with Snowflake. We stepped back and said, well, the resource that Snowflake can leverage here is the metadata. So how could we turn Snowflake into that repository, into the data catalog? And by the way, if you're a CIO looking to purchase a data catalog tool: stop, there's no need to. Working with Snowflake, we've enabled that intelligence to be gathered automatically and put to use within Snowflake, reducing the manual effort and putting that data to work. We've packaged this with AI and machine learning specific to those data tasks, and that's what's resonated with our customers.
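(As a rough illustration, our sketch rather than Io-Tahoe's actual implementation.) The "gather the metadata automatically" idea Ajay describes can be pictured as profiling each column of a table and writing the result into a catalog entry, instead of a person filling the catalog in by hand. Table and column names below are invented.

```python
# Sketch: automatically profile columns and build a catalog entry.
# This stands in for the manual data-cataloging work being automated away.

def profile_column(values):
    """Summarize one column: row count, nulls, inferred type, cardinality."""
    non_null = [v for v in values if v is not None]
    return {
        "count": len(values),
        "null_ratio": round(1 - len(non_null) / len(values), 2) if values else 0.0,
        "inferred_type": type(non_null[0]).__name__ if non_null else "unknown",
        "distinct": len(set(non_null)),
    }

def catalog_table(name, columns):
    """Build a catalog entry for a table from its raw column data."""
    return {
        "table": name,
        "columns": {col: profile_column(vals) for col, vals in columns.items()},
    }

# Invented sample data for illustration:
entry = catalog_table("orders", {
    "order_id": [1, 2, 3, 4],
    "email": ["a@x.com", None, "c@x.com", "d@x.com"],
})
```

In the pattern described in the interview, entries like this would be stored and queried inside Snowflake itself, so the warehouse doubles as the catalog.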
So I'm interested Duncan in what kind of collaborations you are enabling with high quality data. And of course, you know, your data sharing capability. >>Yeah. So I think, uh, you know, the ability to work on, on datasets, isn't just limited to inside the business itself or even between different business units. And we were kind of discussing maybe with their silos. Therefore, when looking at this idea of collaboration, we have these where we want to be >>Able to exploit data to the greatest degree possible, but we need to maintain the security, the safety, the privacy, and governance of that data. It could be quite valuable. It could be quite personal depending on the application involved. One of these novel applications that we see between organizations of data sharing is this idea of data clean rooms. And these data clean rooms are safe, collaborative spaces, which allow multiple companies or even divisions inside a company where they have particular, uh, privacy requirements to bring two or more data sets together for analysis. But without having to actually share the whole unprotected data set with each other, and this lets you to, you know, when you do this inside of snowflake, you can collaborate using standard tool sets. You can use all of our SQL ecosystem. You can use all of the data science ecosystem that works with snowflake. >>You can use all of the BI ecosystem that works with snowflake, but you can do that in a way that keeps the confidentiality that needs to be presented inside the data intact. And you can only really do these kinds of, uh, collaborations, especially across organization, but even inside large enterprises, when you have good reliable data to work with, otherwise your analysis just isn't going to really work properly. A good example of this is one of our large gaming customers. Who's an advertiser. 
They were able to build targeting ads to acquire customers and measure the campaign impact in revenue, but they were able to keep their data safe and secure while doing that while working with advertising partners, uh, the business impact of that was they're able to get a lifted 20 to 25% in campaign effectiveness through better targeting and actually, uh, pull through into that of a reduction in customer acquisition costs because they just didn't have to spend as much on the forms of media that weren't working for them. >>So, ha I wonder, I mean, you know, with, with the way public policy shaping out, you know, obviously GDPR started it in the States, you know, California, consumer privacy act, and people are sort of taking the best of those. And, and, and there's a lot of differentiation, but what are you seeing just in terms of, you know, the government's really driving this, this move to privacy, >>Um, government public sector, we're seeing a huge wake up an activity and, uh, across the whole piece that, um, part of it has been data privacy. Um, the other part of it is being more joined up and more digital rather than paper or form based. Um, we've all got stories of waiting in line, holding a form, taking that form to the front of the line and handing it over a desk. Now government and public sector is really looking to transform their services into being online, to show self service. Um, and that whole shift is then driving the need to, um, emulate a lot of what the commercial sector is doing, um, to automate their processes and to unlock the data from silos to put through into those, uh, those processes. Um, and another thing I can say about this is they, the need for data quality is as a Dunkin mentions underpins all of these processes, government pharmaceuticals, utilities, banking, insurance, the ability for a chief marketing officer to drive a, a loyalty campaign. >>They, the ability for a CFO to reconcile accounts at the end of the month. 
So do a, a, uh, a quick, accurate financial close. Um, also the, the ability of a customer operations to make sure that the customer has the right details about themselves in the right, uh, application that they can sell. So from all of that is underpinned by data and is effective or not based on the quality of that data. So whilst we're mobilizing data to snowflake cloud, the ability to then drive analytics, prediction, business processes off that cloud, um, succeeds or fails on the quality of that data. >>I mean it, and, you know, I would say, I mean, it really is table stakes. If you don't trust the data, you're not gonna use the data. The problem is it always takes so long to get to the data quality. There's all these endless debates about it. So we've been doing a fair amount of work and thinking around this idea of decentralized data, data by its very nature is decentralized, but the fault domains of traditional big data is that everything is just monolithic and the organizations monolithic technology's monolithic, the roles are very, you know, hyper specialized. And so you're hearing a lot more these days about this notion of a data fabric or what calls a data mesh. Uh, and we've kind of been leaning in to that and the ability to, to connect various data capabilities, whether it's a data warehouse or a data hub or a data Lake that those assets are discoverable, they're shareable through API APIs and they're governed on a federated basis. And you're using now bringing in a machine intelligence to improve data quality. You know, I wonder Duncan, if you could talk a little bit about Snowflake's approach to this topic. >>Sure. So I'd say that, you know, making use of all of your data, is there a key kind of driver behind these ideas that they can mesh into the data fabrics? And the idea is that you want to bring together not just your kind of strategic data, but also your legacy data and everything that you have inside the enterprise. 
I think I'd also like to expand upon what a lot of people view as "all of the data." A lot of people miss that there's this whole other world of data they could have access to: things like data from their business partners, their customers, their suppliers, and even data that's more in the public domain, whether that's demographic data, geographic data, or all these other types of data sources. And what I'd say is that the data cloud really facilitates the ability to share and gain access to this, both between organizations and inside organizations.
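(A toy illustration of the clean-room pattern Duncan described earlier; our sketch, not Snowflake's actual mechanism, which uses secure views and shares.) The core idea is that two parties can compute their audience overlap on salted hashes of identifiers, so matching happens without either side handing over raw customer data. The salt and the email addresses below are invented.

```python
# Sketch: privacy-preserving overlap between two parties' customer lists.
# Each side shares only salted SHA-256 tokens, never raw emails.

import hashlib

SHARED_SALT = b"agreed-salt"  # negotiated between the parties (illustrative)

def tokenize(emails):
    """Map each email to an irreversible, salted token."""
    return {
        hashlib.sha256(SHARED_SALT + e.lower().encode()).hexdigest()
        for e in emails
    }

advertiser = tokenize(["alice@example.com", "bob@example.com"])
publisher = tokenize(["bob@example.com", "carol@example.com"])

# The matched audience is computed on tokens alone:
overlap = advertiser & publisher
```

In a real clean room the matching and any aggregate measurement would run inside the governed environment, with each party only ever seeing approved outputs, but the salted-token intersection captures the principle.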
Uh, however, the Achilles heel there was, you know, the complexity that it shifted towards consuming that data from a data Lake where there were many, many sets of data, um, to, to be able to cure rate and to, um, to consume, uh, whereas actually, you know, the simplicity of being able to go to the data that you need to do your role, whether you're in tax compliance or in customer services is, is key. And, you know, listen for snowflake by auto. One thing we know for sure is that our customers are super small and they're very capable. They're they're data savvy and know, want to use whichever tool and embrace whichever, um, cloud platform that is gonna reduce the barriers to solving. What's complex about that data, simplifying that and using, um, good old fashioned SQL, um, to access data and to build products from it to exploit that data. So, um, simplicity is, is key to it to allow people to, to, to make use of that data. And CIO is recognize that >>So Duncan, the cloud obviously brought in this notion of dev ops, um, and new methodologies and things like agile that brought that's brought in the notion of data ops, which is a very hot topic right now. Um, basically dev ops applies to data about how D how does snowflake think about this? How do you facilitate that methodology? >>Yeah, sorry. I agree with you absolutely. That they drops takes these ideas of agile development of >>Agile delivery and of the kind of dev ops world that we've seen just rise and rise, and it applies them to the data pipeline, which is somewhere where it kind of traditionally hasn't happened. And it's the same kinds of messages as we see in the development world, it's about delivering faster development, having better repeatability and really getting towards that dream of the data-driven enterprise, you know, where you can answer people's data questions, they can make better business decisions. 
And we have some really great architectural advantages that allow us to do things like allow cloning of data sets without having to copy them, allows us to do things like time travel so we can see what data looked like at some point in the past. And this lets you kind of set up both your own kind of little data playpen as a clone without really having to copy all of that data. >>So it's quick and easy, and you can also, again, with our separation of storage and compute, you can provision your own virtual warehouse for dev usage. So you're not interfering with anything to do with people's production usage of this data. So the, these ideas, the scalability, it just makes it easy to make changes, test them, see what the effect of those changes are. And we've actually seen this. You were talking a lot about partner ecosystems earlier. Uh, the partner ecosystem has taken these ideas that are inside snowflake and they've extended them. They've integrated them with, uh, dev ops and data ops tooling. So things like version control and get an infrastructure automation and things like Terraform. And they've kind of built that out into more of a data ops products that, that you can, you can make yourself so we can see there's a huge impact of, of these ideas coming into the data world. >>We think we're really well-placed to take advantage to them. The partner ecosystem is doing a great job with doing that. And it really allows us to kind of change that operating model for data so that we don't have as much emphasis on like hierarchy and change windows and all these kinds of things that are maybe use as a lot of fashioned. And we kind of taking the shift from this batch data integration into, you know, streaming continuous data pipelines in the cloud. And this kind of gets you away from like a once a week or once a month change window, if you're really unlucky to, you know, pushing changes, uh, in a much more rapid fashion as the needs of the business change. 
>> I mean, those hierarchical organizational structures, when we apply those to data, actually create the silos. So if you're going to be a silo buster, and Ajay, I look at you guys as silo busters, you've got to put data in the hands of the domain experts, the business people. They know what data they want, and they shouldn't have to go through and beg and borrow for new data sets, et cetera. And so that's where automation becomes so key, and frankly, the technology should be an implementation detail, not the dictating factor. I wonder if you could comment on this.

>> Yeah, absolutely. I think making the technologies more accessible to the general business users, or those specialist business teams, is the key to unlocking it. It's interesting to see, as people move from organization to organization, where they've had those experiences operating in a hierarchical sense, they want to break free from that, having been exposed to automation and continuous workflows. Change is continuous in IT, it's continuous in business, the market's continuously changing. So having that flow across the organization of work, using key components such as GitHub to drive process, Terraform to bring code into the process, and automation, and with Io-Tahoe leveraging all the metadata from across those fragmented sources, it's good to see how those things are coming together. And watching people move from organization to organization and say: hey, okay, I've got a new start. I've got my first hundred days to impress my new manager.

What kind of an impact can I bring to this? And quite often we're seeing that as: let me take away the good learnings of how to do it, or how not to do it, from my previous role, and this is an opportunity for me to bring in automation. And I'll give you an example, David. We recently started working with a client in financial services.
Who's an asset manager, managing financial assets. They've grown over the course of the last 10 years through M&A, and each of those acquisitions has brought with it technical debt and its own set of data: multiple CRM systems, multiple databases, multiple bespoke in-house created applications. And when the new CIO came in and had a look at those, he thought: yes, I want to mobilize my data; yes, I need to modernize my data estate, because my CEO is now looking at the crypto assets that are on the horizon, and the new funds that are emerging around digital assets and crypto assets.

But in order to get to that, where data absolutely underpins it and is the core asset, cleaning up that legacy situation and mobilizing the relevant data into the Snowflake cloud platform is where we're giving time back. That is now taking a few weeks, whereas that transition to mobilize that data, starting with that new clean slate to build upon a new business as a digital crypto asset manager, as well as the legacy traditional financial assets (bonds, stocks, and fixed income, you name it), is where we're starting to see a lot of innovation.

>> Yeah, tons of innovation. I love the crypto examples, and NFTs are exploding, and let's face it, traditional banks are getting disrupted. And so I also love this notion of data RPA, especially because I've done a lot of work in the RPA space. And what I would observe is that in the early days of RPA, I call it paving the cow path: taking existing processes and applying scripts, letting software robots do their thing. And that was good, because it reduced mundane tasks, but really where it's evolved is a much broader automation agenda. People are discovering new ways to completely transform their processes, and I see a similar analogy for data, the data operating model.
So I wonder, when you think about that, how does a customer really get started bringing this to their ecosystem and their data life cycles?

>> Sure. Yeah. Step one is always the same: figuring out, for the CIO or the chief data officer, what data do I have? And that's increasingly something that they want to automate, so we can help them there and do that automated data discovery, whether that is documents in a file share, a backup archive, a relational data store, or a mainframe: really quickly hydrating that and bringing that intelligence to the forefront of what do I have. And then it's the next step of: okay, now I want to continually monitor and curate that intelligence with the platform that I've chosen, let's say Snowflake, such that I can then build applications on top of that platform to serve my internal and external customer needs, and the automation around classifying data and reconciliation across different fragmented data silos builds those insights into Snowflake.

As you say, a little later on we talk about data quality: Active DQ, allowing us to reconcile data from different sources, as well as look at the integrity of that data, so they can go on to remediation. I want to harness and leverage techniques around traditional RPA, but to get to that stage, I need to fix the data. So remediating and publishing the data in Snowflake, and allowing analysis to be performed in Snowflake: those are the key steps that we see. And just shrinking that timeline into weeks, giving the organization that time back, means they're spending more time on their customer and solving their customer's problem, which is where we want them to be.

>> This is the brilliance of Snowflake, actually, you know, Duncan. I've talked with the co-founders about this, and it's really that focus on simplicity.
So, I mean, you picked a good company to join, in my opinion. So I wonder if you could talk about some of the industry sectors that are, again, going to gain the most from data RPA. I mean, traditional RPA, if I can use that term, a lot of it was back office, a lot of it financial. What are the practical applications where data RPA is going to impact businesses, and the outcomes we can expect?

>> Sure. So our drive is really to make that business or general user's experience of RPA simpler, and using no-code to do that, where they've also chosen Snowflake as their cloud platform. They've then got the combination of relatively simple scripting techniques, such as SQL, with a no-code approach. And the answer to your question is: whichever sector is looking to mobilize their data. It seems like a cop-out, but to give you some specific examples, David: in banking, where customers are looking to modernize their banking systems and enable better customer experience through applications and digital apps, that's where we're seeing a lot of traction with this approach of applying RPA to data. In healthcare, where there's a huge amount of work to do to standardize data sets across providers, payers, and patients, and it's an ongoing process there. For retail, helping to build that immersive customer experience:

recommending next best actions, providing an experience that is going to drive loyalty and retention. That's dependent on understanding the customer's needs and intent, and being able to provide them with the content or the offer at that point in time; it's all data dependent. Utilities is another one, with great overlap there with Snowflake: helping utilities, telecoms, energy, and water providers to build services on that data.
And this is where the ecosystem just continues to expand. If we're helping our customers turn their data into services for their ecosystem, that's exciting. And nowhere more so than insurance, which we always used to think of as very dull and mundane; actually, that's where we're seeing a huge amount of innovation, to create new flexible products that are priced to the day, to the situation, with risk models being adaptive when the data changes on events or circumstances. So across all those sectors, they're all mobilizing their data, they're all moving in some way, shape, or form to a multi-cloud setup with their IT. And I think with Snowflake, and with Io-Tahoe being able to accelerate that and make that journey simple and less complex, is why we've found such a good partner here.

>> All right, thanks for that. And thank you guys, both. We've got to leave it there. Really appreciate you coming on, Duncan, and Ajay, best of luck with the fundraising.

>> We'll keep you posted. Thanks, David.

>> All right, great.

>> Okay, now let's take a look at a short video that's going to help you understand how to reduce the steps around your DataOps. Let's watch.
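The automated data discovery Ajay describes as step one, figuring out what data an organization has and flagging what is sensitive, can be sketched in miniature. The Python below is illustrative only: the patterns and threshold are assumptions invented for this example, not Io-Tahoe's actual discovery engine.

```python
# Miniature data-discovery sketch: sample values from each column and flag
# likely sensitive data by pattern. Patterns and threshold are illustrative
# assumptions, not a real discovery engine's rules.
import re

PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?[\d\s()-]{7,15}$"),
}

def classify_column(values, threshold=0.8):
    """Label a column when most of its sampled values match a known pattern."""
    values = [v for v in values if isinstance(v, str)]
    for label, pattern in PATTERNS.items():
        if values and sum(bool(pattern.match(v)) for v in values) / len(values) >= threshold:
            return label
    return "unclassified"

columns = {
    "contact": ["ann@example.com", "bob@example.com", "cara@example.com"],
    "notes": ["called twice", "left voicemail", "n/a"],
}
print({name: classify_column(vals) for name, vals in columns.items()})
# {'contact': 'email', 'notes': 'unclassified'}
```

A real engine would also sample column names, profile distributions, and feed the results into a catalog, but the shape of the task is the same: scan, score, label.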

Published Date : Apr 29 2021


Ajay Vohora and Duncan Turnbull | Io-Tahoe Data Quality: Active DQ


 

>> Announcer: From around the globe, it's theCUBE, presenting Active DQ: intelligent automation for data quality, brought to you by Io-Tahoe.

>> Now we're going to look at the role automation plays in mobilizing your data on Snowflake. Let's welcome Duncan Turnbull, who's partner sales engineer at Snowflake, and Ajay Vohora is back, CEO of Io-Tahoe. He's going to share his insight. Gentlemen, welcome.

>> Thank you, David, good to be back.

>> Yes, it's great to have you back, Ajay, and it's really good to see Io-Tahoe expanding the ecosystem, so important, now of course bringing Snowflake in. It looks like you're really starting to build momentum. I mean, there's progress that we've seen month by month over the past 12 to 14 months. Your seed investors, they've got to be happy.

>> They are, they're happy, and they can see that we're running into a nice phase of expansion here, new customers signing up, and now we're ready to go out and raise that next round of funding. Maybe think of us like Snowflake five years ago. So we're definitely on track with that. A lot of interest from investors, and right now we're trying to focus in on those investors that can partner with us and understand AI, data, and automation.

>> Well, so personally, I mean, you've managed a number of early stage VC funds, I think four of them. You've taken several software companies through many funding rounds and growth, all the way to exit. So you know how it works. You have to get product-market fit, you've got to make sure you get your KPIs right, and you've got to hire the right salespeople. But what's different this time around?

>> Well, you know, the fundamentals that you mentioned, those never change. What I can see that's different, that's shifted this time around, is three things. One is that there used to be this kind of choice of: do we go open source or do we go proprietary?
Now that has turned into a nice hybrid model, where we've really keyed into Red Hat doing something similar with CentOS. And the idea here is that there is a core capability of technology that underpins a platform, but it's the ability to then build an ecosystem around that, made up of a community. And that community may include customers, technology partners, and other tech vendors, enabling platform adoption so that all of those folks in the community can build and contribute, whilst still maintaining the core architecture and platform integrity. And that's one thing that's changed; we're seeing a lot of that type of software company emerge into that model, which is different from five years ago. And then there's leveraging the cloud, every cloud, the Snowflake cloud being one of them here, in order to make use of what end customers in enterprise software are moving towards. Every CIO is now in some configuration of a hybrid IT estate, whether that is cloud, multi-cloud, or on-prem; that's just the reality. The other piece is dealing with the CIO's legacy. Over the past 15 to 20 years they've purchased many different platforms and technologies, and some of those are still established and still (indistinct). How do you enable that CIO to make a purchase whilst still preserving, and in some cases building on and extending, the legacy technology that they've invested their people's time, training, and financial investment into? Of course, solving a customer pain point with technology never goes out of fashion.

>> That never changes. You have to focus like a laser on that. And of course, speaking of companies who are focused on solving problems, Duncan Turnbull from Snowflake. You guys have really done a great job, brilliantly addressing pain points, particularly around data warehousing, simplifying that, and you're providing this new capability around data sharing, really quite amazing.
Duncan, Ajay talks about data quality and customer pain points in enterprise IT. Why has data quality been such a problem historically?

>> So one of the biggest challenges that's really affected that in the past is that, to address everyone's needs for using data, they've evolved all these different places to store it: all these different silos or data marts, all this proliferation of places where data lives. And all of those end up with slightly different schedules for bringing data in and out, slightly different rules for transforming that data and formatting it and getting it ready, and slightly different quality checks for making use of it. And this then becomes a big problem, in that these different teams are going to have slightly different, or even radically different, answers to the same kinds of questions, which makes it very hard for teams to work together on the different data problems that exist inside the business, depending on which of these silos they end up looking at. And what you can do: if you have a single, scalable system for putting all of your data into, you can sidestep all of this complexity, and you can address the data quality issues in a single way.

>> Now, of course, we're seeing this huge trend in the market towards robotic process automation; RPA adoption is accelerating. You see UiPath's IPO, a 35-plus-billion-dollar valuation, Snowflake-like numbers, nice comps there for sure. Ajay, you've coined the phrase "data RPA." What is that, in simple terms?

>> Yeah, I mean, it was born out of seeing how, in our ecosystem, (indistinct) community developers, customers, and general business users were wanting to adopt and deploy Io-Tahoe's technology. And we could see that, I mean, we're not trying to automate the marketing piece here, but wherever there is a process that was tied into some form of manual overhead with handovers.
And so on; that process is something that we were able to automate with Io-Tahoe's technology and the employment of AI and machine learning specifically to those data processes, almost as a precursor to getting into marketing automation or financial information automation. That's really where we're seeing the momentum pick up, especially in the last six months. And we've kept it really simple with Snowflake. We stepped back and said: well, the resource that Snowflake can leverage here is the metadata. So how could we turn Snowflake into that repository, into being the data catalog? And by the way, if you're a CIO looking to purchase a data catalog tool, stop: there's no need to. Working with Snowflake, we've enabled that intelligence to be gathered automatically and put to use within Snowflake, reducing that manual effort and putting that data to work. And that's where we've packaged this with our AI and machine learning specific to those data tasks. That's what's resonated with our customers.

>> You know, what's interesting here, just a quick aside: as you know, I've been watching Snowflake now for a while, and of course the competitors come out and maybe criticize why they don't have this feature or that feature. And Snowflake seems to have an answer, and the answer oftentimes is the ecosystem; the ecosystem is going to bring that, because we have a platform that's so easy to work with. So I'm interested, Duncan, in what kind of collaborations you are enabling with high-quality data, and of course your data sharing capability.

>> Yeah, so I think the ability to work on datasets isn't just limited to inside the business itself, or even between the different business units you were discussing with those silos before, when looking at this idea of collaboration.
We have these challenges where we want to be able to exploit data to the greatest degree possible, but we need to maintain the security, the safety, the privacy, and the governance of that data. It could be quite valuable; it could be quite personal, depending on the application involved. One of the novel applications of data sharing that we see between organizations is this idea of data clean rooms. These data clean rooms are safe, collaborative spaces which allow multiple companies, or even divisions inside a company with particular privacy requirements, to bring two or more data sets together for analysis, but without having to actually share the whole unprotected data set with each other. And when you do this inside of Snowflake, you can collaborate using standard tool sets: you can use all of our SQL ecosystem, all of the data science ecosystem that works with Snowflake, and all of the BI ecosystem that works with Snowflake. But you can do that in a way that keeps the confidentiality that needs to be preserved inside the data intact. And you can only really do these kinds of collaborations, especially across organizations but even inside large enterprises, when you have good, reliable data to work with; otherwise your analysis just isn't going to work properly. A good example of this is one of our large gaming customers, who's an advertiser. They were able to build targeted ads to acquire customers and measure the campaign impact in revenue, but they were able to keep their data safe and secure while working with advertising partners. The business impact was a lift of 20 to 25% in campaign effectiveness through better targeting, and that pulled through into a reduction in customer acquisition costs, because they just didn't have to spend as much on the forms of media that weren't working for them.
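The clean-room pattern Duncan describes can be illustrated with a toy example: two parties measure audience overlap without exchanging raw identifiers. Snowflake clean rooms actually achieve this with governed data sharing and access policies; the salted hashing below is just a minimal sketch of the matching-without-exposure principle, with all names invented for the example.

```python
# Toy clean-room matching: each party shares only blinded tokens, so overlap
# can be measured without raw emails changing hands. Illustrative sketch only;
# Snowflake clean rooms use governed data sharing, not this exact scheme.
import hashlib

SHARED_SALT = b"agreed-out-of-band"  # known to both parties, not to outsiders

def blind(identifier):
    return hashlib.sha256(SHARED_SALT + identifier.lower().encode()).hexdigest()

advertiser = {"ann@example.com", "bob@example.com", "cara@example.com"}
publisher = {"bob@example.com", "cara@example.com", "dan@example.com"}

adv_tokens = {blind(e) for e in advertiser}  # only these tokens are shared
pub_tokens = {blind(e) for e in publisher}

overlap = adv_tokens & pub_tokens
print(len(overlap))  # 2 matched customers, with no raw emails exchanged
```

Note that each side learns only the size and membership of the intersection, not the rest of the other party's list, which is exactly the property the gaming-customer example relies on.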
>> So Ajay, I wonder, with the way public policy is shaping out, you know, obviously GDPR started it, then in the States the California Consumer Privacy Act, and people are sort of taking the best of those, and there's a lot of differentiation. What are you seeing in terms of governments really driving this move to privacy?

>> In government and the public sector, we're seeing a huge wake-up in activity across (indistinct). Part of it has been data privacy; the other part of it is being more joined up and more digital, rather than paper- or form-based. We've all been there: waiting in the line, holding a form, taking that form to the front of the line, and handing it over a desk. Now government and the public sector are really looking to transform their services into being online and self-service. And that whole shift is driving the need to emulate a lot of what the commercial sector is doing: to automate their processes and to unlock the data from silos to feed into those processes. And another thing I can say about this is that the need for data quality, as Duncan mentions, underpins all of these processes: government, pharmaceuticals, utilities, banking, insurance. The ability for a chief marketing officer to drive a loyalty campaign, the ability for a CFO to reconcile accounts at the end of the month for a quick, accurate financial close, and the ability for customer operations to make sure that the customer has the right details about themselves in the right application. All of that is underpinned by data, and it is effective or not based on the quality of that data. So whilst we're mobilizing data to the Snowflake cloud, the ability to then drive analytics, prediction, and business processes off that cloud succeeds or fails on the quality of that data.

>> I mean, it really is table stakes. If you don't trust the data, you're not going to use the data. The problem is it always takes so long to get to the data quality.
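One way to picture "addressing data quality in a single way," rather than each silo keeping slightly different checks, is a central rule set applied uniformly to every dataset. A minimal illustrative sketch follows; the rule names and fields are invented for the example and are not any vendor's actual API.

```python
# Minimal sketch of centralized data-quality rules: one shared rule set,
# applied identically to every dataset, instead of each silo maintaining
# slightly different checks. Rule names and fields are invented.

def not_null(field):
    return lambda row: row.get(field) is not None

def in_range(field, lo, hi):
    return lambda row: row.get(field) is not None and lo <= row[field] <= hi

RULES = {  # the single, shared rule set every team uses
    "customer_id present": not_null("customer_id"),
    "amount in range": in_range("amount", 0, 1_000_000),
}

def quality_report(rows):
    """Count rule failures for one dataset, using the shared rules."""
    return {name: sum(0 if check(row) else 1 for row in rows)
            for name, check in RULES.items()}

orders = [
    {"customer_id": 1, "amount": 250},
    {"customer_id": None, "amount": 90},
    {"customer_id": 3, "amount": -5},
]
print(quality_report(orders))  # {'customer_id present': 1, 'amount in range': 1}
```

Because every dataset is scored against the same rules, two teams asking the same question of two sources get comparable answers, which is the point Duncan makes about silos.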
There's all these endless debates about it. So we've been doing a fair amount of work and thinking around this idea of decentralized data. Data by its very nature is decentralized, but the fault of traditional big data is that everything is monolithic: the organization is monolithic, the technology is monolithic, and the roles are hyper-specialized. And so you're hearing a lot more these days about this notion of a data fabric, or what Zhamak Dehghani calls a data mesh, and we've been leaning into that: the ability to connect various data capabilities, whether it's a data warehouse or a data hub or a data lake, so that those assets are discoverable, they're shareable through APIs, and they're governed on a federated basis, and you're now bringing in machine intelligence to improve data quality. I wonder, Duncan, if you could talk a little bit about Snowflake's approach to this topic.

>> Sure. So I'd say that making use of all of your data is the key driver behind these ideas of data meshes or data fabrics. And the idea is that you want to bring together not just your strategic data but also your legacy data and everything that you have inside the enterprise. I'd also like to expand on what a lot of people view as "all of the data." I think a lot of people miss that there's this whole other world of data they could be having access to: data from their business partners, their customers, their suppliers, and even data that's more in the public domain, whether that's demographic data or geographic data or all kinds of other data sources. And what I'd say, to some extent, is that the data cloud really facilitates the ability to share and gain access to this, both between organizations and inside organizations.
And you don't have to make lots of copies of the data and worry about the storage, and this federated idea of governance, and all these things that are quite complex to manage. The Snowflake approach really enables you to share data with your ecosystem, or the world, without any latency, with full control over what's shared, and without having to introduce new complexities or complex interactions with APIs or software integration. The simple approach that we provide allows a relentless focus on creating the right data product to meet the challenges facing your business today.

>> So Ajay, the key here, as Duncan's talking about it, and in my mind the key takeaway, is simplicity. If you can take the complexity out of the equation, you're going to get more adoption. It really is that simple.

>> Yeah, absolutely. In that whole journey, maybe five or six years ago, the adoption of data lakes was a stepping stone. However, the Achilles heel there was the complexity that it shifted towards consuming that data from a data lake, where there were many, many sets of data to curate and to consume. Whereas actually the simplicity of being able to go to the data that you need to do your role, whether you're in tax compliance or in customer services, is key. And listen, for Snowflake and Io-Tahoe, one thing we know for sure is that our customers are super smart and they're very capable. They're data savvy, and they'll want to use whichever tool, and embrace whichever cloud platform, is going to reduce the barriers to solving what's complex about that data, simplifying that, and using good old-fashioned SQL to access data and to build products from it to exploit that data. So simplicity is key to allowing people to make use of that data, and CIOs recognize that.
>> So Duncan, the cloud obviously brought in this notion of DevOps and new methodologies, and things like agile brought in the notion of DataOps, which is a very hot topic right now: basically DevOps applied to data. How does Snowflake think about this? How do you facilitate that methodology?

>> So I agree with you absolutely that DataOps takes these ideas of agile development, of agile delivery, and of the kind of DevOps world that we've seen just rise and rise, and applies them to the data pipeline, which is somewhere it traditionally hasn't happened. And it's the same kinds of messages as we see in the development world: it's about delivering faster development, having better repeatability, and really getting towards that dream of the data-driven enterprise, where you can answer people's data questions and they can make better business decisions. And we have some really great architectural advantages that allow us to do things like cloning of data sets without having to copy them, and time travel, so we can see what the data looked like at some point in the past. And this lets you set up your own little data playpen as a clone, without really having to copy all of that data, so it's quick and easy. And again, with our separation of storage and compute, you can provision your own virtual warehouse for dev usage, so you're not interfering with anything to do with people's production usage of this data. These ideas, the scalability, just make it easy to make changes, test them, and see the effect of those changes. And we've actually seen this; you were talking a lot about partner ecosystems earlier. The partner ecosystem has taken these ideas that are inside Snowflake and extended them, integrating them with DevOps and DataOps tooling: things like version control and Git, and infrastructure automation with things like Terraform.
And they've built that out into more of a DataOps product that you can make use of, so we can see there's a huge impact of these ideas coming into the data world. We think we're really well placed to take advantage of them, and the partner ecosystem is doing a great job with that. And it really allows us to change that operating model for data, so that we don't have as much emphasis on hierarchy and change windows and all these things that are maybe viewed as a bit old-fashioned. And we've taken the shift from batch data integration into streaming, continuous data pipelines in the cloud. And this gets you away from a once-a-week, or once-a-month-if-you're-really-unlucky, change window, to pushing changes in a much more rapid fashion as the needs of the business change.

>> I mean, those hierarchical organizational structures, when we apply those to data, actually create the silos. So if you're going to be a silo buster, and Ajay, I look at you guys as silo busters, you've got to put data in the hands of the domain experts, the business people. They know what data they want, and they shouldn't have to go through and beg and borrow for new data sets, et cetera. And so that's where automation becomes so key, and frankly, the technology should be an implementation detail, not the dictating factor. I wonder if you could comment on this.

>> Yeah, absolutely. I think making the technologies more accessible to the general business users, or those specialist business teams, is the key to unlocking it. It's interesting to see, as people move from organization to organization, where they've had those experiences operating in a hierarchical sense, they want to break free from that, having been exposed to automation and continuous workflows. Change is continuous in IT, it's continuous in business, the market's continuously changing.
So having that flow of work across the organization, using key components such as GitHub and similar tools to drive your process, Terraform to build code and automation into the process, and, with Io Tahoe, leveraging all the metadata from across those fragmented sources, it's good to see how those things are coming together. And we're watching people move from organization to organization and say, "Hey, okay, I've got a new start. I've got my first hundred days to impress my new manager. What kind of an impact can I bring to this?" And quite often we're seeing that as: let me take away the good learnings, how to do it or how not to do it, from my previous role, and this is an opportunity for me to bring in automation. And I'll give you an example, David. We recently started working with a client in financial services who's an asset manager, managing financial assets. They've grown over the course of the last 10 years through M&A, and each of those acquisitions has brought with it technical debt and its own set of data: multiple CRM systems, multiple databases, multiple bespoke in-house created applications. And when the new CIO came in and had a look at those, he thought, well, yes, I want to mobilize my data. Yes, I need to modernize my data estate, because my CEO is now looking at these crypto assets that are on the horizon, and the new funds that are emerging around digital assets and crypto assets. But in order to get to that, where data absolutely underpins it and is the core asset, cleaning up that legacy situation and mobilizing the relevant data into the Snowflake Cloud platform is where we're giving time back. You know, that is now taking a few weeks, whereas that transition to mobilize that data, to start with that new clean slate to build upon a new business as a digital crypto asset manager, as well as the legacy, traditional financial assets, bonds, stocks, and fixed income assets, you name it, is where we're starting to see a lot of innovation.
>> Tons of innovation. I love the crypto examples, NFTs are exploding, and let's face it, traditional banks are getting disrupted. And so I also love this notion of data RPA, especially because, Ajay, I've done a lot of work in the RPA space. And what I would observe is that in the early days of RPA, I call it paving the cow path: taking existing processes and applying scripts, letting software robots do their thing. And that was good because it reduced mundane tasks, but where it's really evolved is a much broader automation agenda. People are discovering new ways to completely transform their processes. And I see a similar analogy for the data operating model. So I wonder, what do you think about that, and how does a customer really get started bringing this to their ecosystem, their data life cycles?

>> Sure. Yeah. Step one is always the same. It's figuring out, for the CIO, the chief data officer, what data do I have? And that's increasingly something that they want to automate, so we can help them there and do that automated data discovery, whether that is documents in the file share, a backup archive, a relational data store, or a mainframe, really quickly hydrating that and bringing that intelligence to the forefront of what do I have. And then it's the next step of, well, okay, now I want to continually monitor and curate that intelligence with the platform that I've chosen, let's say Snowflake, such that I can then build applications on top of that platform to serve my internal and external customer needs. And then the automation around classifying data, reconciliation across different fragmented data silos, building those insights into Snowflake. As you say, a little later on we're talking about data quality: active DQ, allowing us to reconcile data from different sources as well as look at the integrity of that data. So we can then go on to remediation. I want to harness and leverage techniques around traditional RPA, but to get to that stage, I need to fix the data.
So remediating and publishing the data in Snowflake, allowing analysis to be performed in Snowflake, those are the key steps that we see. And just shrinking that timeline into weeks, giving the organization that time back, means they're spending more time on their customer and solving their customer's problem, which is where we want them to be.

>> Well, I think this is the brilliance of Snowflake, actually. You know, Duncan, I've talked to Benoit Dageville about this and your other co-founders, and it's really that focus on simplicity. So I mean, you picked a good company to join, in my opinion. So I wonder, Ajay, if you could talk about some of the industry sectors that, again, are going to gain the most from data RPA. I mean, with traditional RPA, if I can use that term, a lot of it was back office, a lot of financial. What are the practical applications where data RPA is going to impact businesses, and the outcomes that we can expect?

>> Yes, so our drive is really to make that general business user's experience of RPA simpler, using no code to do that, where they've also chosen Snowflake to build their Cloud platform. They've got the combination then of using relatively simple scripting techniques, such as SQL, with a no-code approach. And the answer to your question is: whichever sector is looking to mobilize their data. It seems like a cop-out, but to give you some specific examples, David: in banking, where our customers are looking to modernize their banking systems and enable better customer experience through applications and digital apps, that's where we're seeing a lot of traction in this approach of applying RPA to data. And health care, where there's a huge amount of work to do to standardize data sets across providers, payers, patients, and it's an ongoing process there. For retail, helping to build that immersive customer experience, so recommending next best actions.
Providing an experience that is going to drive loyalty and retention, that's dependent on understanding what that customer's needs and intent are, and being able to provide them with the content or the offer at that point in time; it's all data dependent. Utilities, there's another great overlap there with Snowflake, where we're helping utilities, telecoms, energy and water providers to build services on that data. And this is where the ecosystem just continues to expand: if we're helping our customers turn their data into services for their ecosystem, that's exciting. Maybe even more exciting is insurance, which I always used to think of as very dull and mundane; actually, that's where we're seeing a huge amount of innovation, to create new flexible products that are priced to the day, to the situation, with risk models being adaptive when the data changes on events or circumstances. So across all those sectors, they're all mobilizing their data, they're all moving in some way, shape or form to a multi-Cloud setup with their IT. And I think with Snowflake, and with Io Tahoe being able to accelerate that and make that journey simple and less complex, is why we've found such a good partner here.

>> All right. Thanks for that. And thank you guys both. We've got to leave it there. Really appreciate you coming on, Duncan, and Ajay, best of luck with the fundraising.

>> We'll keep you posted. Thanks, David.

>> All right. Great.

>> Okay. Now let's take a look at a short video that's going to help you understand how to reduce the steps around your DataOps. Let's watch. (upbeat music)
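Duncan's description above of cloning data sets without copying them, and of giving dev teams their own playpen, rests on copy-on-write semantics: a clone initially shares the original's storage and only materializes the rows it changes. The sketch below is a toy model of that idea in plain Python, not Snowflake's actual implementation; the class and row names are invented for illustration.

```python
class Table:
    """Toy model of zero-copy cloning: a clone shares the base table's data
    and records only its own overwrites (copy-on-write)."""

    def __init__(self, rows=None, base=None):
        self._rows = rows if rows is not None else {}  # this layer's own rows
        self._base = base                              # shared parent, never copied

    def clone(self):
        # O(1): no data is copied; the clone just points back at this table.
        return Table(base=self)

    def write(self, key, value):
        self._rows[key] = value  # divergence is stored locally in the clone

    def read(self, key):
        if key in self._rows:
            return self._rows[key]
        if self._base is not None:
            return self._base.read(key)  # fall through to the shared base
        raise KeyError(key)


prod = Table({"acct1": 100, "acct2": 250})
dev = prod.clone()         # instant: shares storage with prod
dev.write("acct1", 999)    # only the changed row is materialized
print(prod.read("acct1"))  # 100: production is untouched
print(dev.read("acct1"))   # 999: the clone's own overwrite
print(dev.read("acct2"))   # 250: still served from the shared base
```

In Snowflake itself this is done at the storage-metadata layer, so the dev clone plus a separate virtual warehouse gives an isolated environment without duplicating data or interfering with production compute.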
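Ajay's "active DQ" step, reconciling records across fragmented sources and checking their integrity before remediation, can also be sketched. This is a deliberately minimal, hypothetical version (the sources and field names are made up for the example); a real pipeline would add type checks, fuzzy matching and lineage.

```python
def reconcile(source_a, source_b, key="id"):
    """Compare two record sets by key and report records missing from either
    side plus field-level mismatches: the raw material for remediation."""
    a = {r[key]: r for r in source_a}
    b = {r[key]: r for r in source_b}
    issues = []
    for k in a.keys() - b.keys():
        issues.append(("missing_in_b", k))
    for k in b.keys() - a.keys():
        issues.append(("missing_in_a", k))
    for k in a.keys() & b.keys():
        diff = sorted(f for f in a[k].keys() | b[k].keys()
                      if a[k].get(f) != b[k].get(f))
        if diff:
            issues.append(("field_mismatch", k, diff))
    return sorted(issues)


# Two fragmented sources that should describe the same customers.
crm = [{"id": 1, "email": "x@y.com"}, {"id": 2, "email": "a@b.com"}]
billing = [{"id": 1, "email": "x@z.com"}, {"id": 3, "email": "c@d.com"}]
for issue in reconcile(crm, billing):
    print(issue)
```

Each issue tuple is something a remediation step (or a downstream RPA workflow) can act on once the cleaned result lands in the analytics platform.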

Published Date : Apr 20 2021

Duncan Lennox, Amazon Web Services | AWS Storage Day 2019


 

[Music]
>> Hi everybody, this is Dave Vellante with the Cube. Welcome to Boston. We're covering storage here at Amazon Storage Day, and we're looking at all the innovations and the expansion of Amazon's pretty vast storage portfolio. Duncan Lennox is here, the director of product management for Amazon EFS. Duncan, good to see you.
>> It's great to be here.
>> So what is, so EFS stands for elastic file system. What is Amazon EFS?
>> That's right. EFS is our NFS-based file system service, designed to make it super easy for customers to get up and running with a file system in the cloud.
>> So should we think of this as kind of on-prem file services just stuck into the cloud, or is it more than that?
>> It's more than that, but it's definitely designed to enable that. We wanted to make it really easy for customers to take the on-prem applications that they have today that depend on a file system and move those into the cloud.
>> When you look at the macro trends, particularly as it relates to file services, what are you seeing? What are customers telling you?
>> Well, the first thing that we see is that it's still very early in the move to the cloud. The vast majority of workloads are still running on-prem, and customers need easy ways to move the thousands of applications they might have into the cloud without having to necessarily rewrite them to take advantage of cloud-native services. And that's a key thing that we built EFS for: to make it easy to just pick up the application and drop it into the cloud without the application even needing to know that it's now running in the cloud.
>> Okay, so that's transparent to the application and the workload.
>> It absolutely is. We built it deliberately using NFS so that the application wouldn't even need to know that it's now running in the cloud. And we also built it to be elastic and simple for the same reason, so customers don't have to worry about provisioning the storage they need. It just works.
>> NFS is hard. Making NFS simple and elastic is not a trivial
engineering task, is it?
>> It hadn't been done until we did it. A lot of people said it couldn't be done: how could you make something that truly was elastic in the cloud but still support NFS? But we've been able to do that for tens of thousands of customers successfully.
>> And what's the real challenge there? Is it to maintain that performance and the recoverability? From a technical standpoint, an engineering standpoint, what is it?
>> Yes, it's all of the above. People expect a certain level of performance, whether that's the latency, throughput and IOPS that their application is dependent on, but they also want to be able to take advantage of that pay-as-you-go cloud model that AWS created back with S3 13 years ago. So that elasticity that we offer to customers means they don't have to worry about capex. They don't have to plan for exactly how much storage they need to provision. The file system grows and shrinks as they add and remove data, they pay only for what they're using, and we handle all the heavy lifting for them to make that happen.
>> This opens up a huge new set of workloads for your customers, doesn't it?
>> It absolutely does. And a big part of what we see is customers wanting to go on that journey through the cloud. So initially they're starting with lifting and shifting those applications, as we talked about, but as they mature, they want to be able to take advantage of newer technologies like containerization and ultimately even serverless.
>> All right, let's talk about EFS IA. Infrequently accessed files is really what it's designed for. Tell us more about it.
>> Right. So one of the things that we heard a lot from our customers, of course, is: can you make it cheaper? We love it, but we'd like to use more of it. And what we discovered is that we could develop this infrequent access storage class. How it works is you turn on a capability we call lifecycle management, and it's completely automated after that. We know from industry analysts and from talking to customers that the majority of
data, perhaps as much as 80%, goes pretty cold after about a month and is rarely touched again. So we developed the infrequent access storage class to take advantage of that. Once you enable it, which is a single click in the console or one API call, you pick a policy, 14 days or 30 days, and we monitor the read/write IO to every file individually. Once a file hasn't been read from or written to in that policy period, say 30 days, we automatically and transparently move it to the infrequent access storage class, which is 92% cheaper than our standard storage class. It's only two and a half cents in our US East 1 region, as opposed to 30 cents for our standard storage class.
>> Two and a half cents per gigabyte?
>> Per gigabyte-month. One of the things customers were particularly excited about is that it remains active file system data. We move your files to the infrequent access storage class, but it does not appear to move in the file system. So for your applications and your users, it's the same file in the same directory; they don't even need to be aware of the fact that it's now on the infrequent access storage class. You just get a bill that's 92 percent cheaper for storage for that file.
>> I like that. Okay, and it's simple to set up. You said it's one click, and then I set my policy, and I can go back and change my mind?
>> That's exactly right. We have multiple policies available, you can change it later, and you can turn off lifecycle management if you decide you no longer need it.
>> So how do you see customers taking advantage of this? What do you expect the adoption to be like, and what are you hearing from them?
>> Well, what we heard from customers was that they'd like to keep larger workloads in their file systems, but because the data tends to go cold and isn't frequently accessed, it didn't make economic sense to keep large amounts of data in our standard storage class. But there's advantages to them in their businesses. For example, we've got customers who are doing
genomic sequencing, and for them, having a larger set of data always available to their applications, but not costing them as much as it was, allows them to get more results faster, as one example.
>> You obviously see that.
>> Yeah. What we're trying to do all the time is help our customers focus less on the infrastructure and the heavy lifting, and more on being able to innovate faster for their customers.
>> So Duncan, some of the sort of fundamental capabilities of EFS include high availability and durability. Tell us more about that.
>> Yeah, when we were developing EFS, we heard a lot from customers that they really wanted higher levels of durability and availability than they'd typically been able to have on-prem. It's super expensive and complex to build high-availability and high-durability solutions, so we baked that in as a standard part of EFS. When a file is written to an EFS file system and that acknowledgement is received back by the client, at that point the data is already spread across three availability zones, for both availability and durability. What that means is not only are you extremely unlikely to ever lose any data; if one of those AZs goes down or becomes unavailable for some reason, your application continues to have full read/write access to your file system from the other two availability zones.
>> So traditionally this would be a very expensive proposition, sort of on-prem and multiple data centers. Maybe talk about how it's different in the cloud.
>> Yeah, it's complex to build. There's a lot of moving parts involved, because in our case, with three availability zones, you're talking about three physically distinct data centers, high-speed networking between those, and actually moving the data so that it's written not just to one but to all three. And we handle that all transparently under the hood in EFS. It's all included in our standard storage cost as well, so it's not something that customers have to worry about,
either from a complexity or a cost point of view.
>> So it's very, very, I guess, low RPO, and the RTO is essentially zero, if you will, between the three availability zones?
>> Yes, because once your client gets that acknowledgement back, it's already durably written to the three availability zones.
>> All right, we'll give you the last word. Just in the world of file services, what should we be paying attention to? What kinds of things are you really trying to achieve?
>> I think it's helping people do more for less, faster. There's always more we can do in helping them take advantage of all the services AWS has to offer.
>> Spoken like a true Amazonian. Duncan, thanks so much for coming on the Cube.
>> Thank you.
>> All right, and thank you for watching everybody. We'll be right back from Storage Day in Boston. You're watching the Cube.
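The lifecycle mechanics Duncan walks through, where a file with no read or write for the policy period moves to IA at roughly $0.025 versus $0.30 per GB-month in US East 1, are easy to model. The sketch below is a toy cost model using the prices quoted in the interview (2019 figures, not current AWS pricing), not the EFS implementation itself.

```python
STANDARD_PRICE = 0.30  # $/GB-month, us-east-1 standard class (as quoted, 2019)
IA_PRICE = 0.025       # $/GB-month, infrequent access (~92% cheaper)

def storage_class(last_access_day, today, policy_days=30):
    """A file lands in IA once it has gone policy_days without any read/write.
    Any access moves it back to standard, modeled here as a newer last_access_day."""
    return "IA" if today - last_access_day >= policy_days else "STANDARD"

def monthly_cost(files, today, policy_days=30):
    """Bill a list of (size_gb, last_access_day) files at the class-appropriate rate."""
    cost = 0.0
    for gb, last_access_day in files:
        cls = storage_class(last_access_day, today, policy_days)
        cost += gb * (IA_PRICE if cls == "IA" else STANDARD_PRICE)
    return cost


# 100 GB touched 5 days ago stays standard; 400 GB cold for 90 days tiers to IA.
files = [(100, 95), (400, 10)]
print(monthly_cost(files, today=100))           # total $/month for this mix
print(round(1 - IA_PRICE / STANDARD_PRICE, 3))  # 0.917, the ~92% savings on cold data
```

With the 80%-cold rule of thumb from the interview, most of a mature file system's bytes end up at the IA rate, which is why the overall bill drops so sharply once lifecycle management is enabled.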
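Duncan's durability point, that the client's acknowledgement only arrives after the data is already in three availability zones, is what makes the effective RPO zero: any acked write survives the loss of a single AZ. Here is a hypothetical in-memory sketch of that acknowledge-after-replicate contract; it is not AWS code, and the AZ names are placeholders.

```python
class ReplicatedFS:
    """Toy write path: the ack is returned only after every AZ has the bytes,
    so an acknowledged write survives the loss of any one AZ."""

    def __init__(self, azs=("az-a", "az-b", "az-c")):
        self.replicas = {az: {} for az in azs}  # one store per availability zone

    def write(self, path, data):
        for store in self.replicas.values():
            store[path] = data  # replicate to every AZ first...
        return "ack"            # ...and only then acknowledge the client

    def read(self, path, failed_az=None):
        for az, store in self.replicas.items():
            if az != failed_az and path in store:
                return store[path]  # any surviving AZ can serve the read
        raise KeyError(path)


fs = ReplicatedFS()
assert fs.write("/data/file.txt", b"payload") == "ack"
# An AZ goes down after the ack: reads still succeed from the other two.
print(fs.read("/data/file.txt", failed_az="az-a"))
```

The complexity Duncan mentions lives in making that replication fast over three physically distinct data centers; the contract visible to the client, though, is just this simple ack-after-replicate behavior.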

Published Date : Nov 20 2019

Chad Duncan, Accenture & Jim Goode, Capital One | AWS Executive Summit 2018


 

>> Live (lively music) from Las Vegas, it's the Cube, covering the AWS Accenture Executive Summit. Brought to you by Accenture.

>> Welcome back everyone to the Cube's live coverage of the AWS Executive Summit. I'm your host, Rebecca Knight. We have two guests for this segment: Chad Duncan, Managing Director, Financial Services Technology Advisory Cloud Lead North America at Accenture, it's quite a long title (laughs), and Jim Goode, Senior Director, Product and Portfolio Delivery at Capital One. Thank you both so much for coming on the show.

>> Thanks for having us.

>> Thank you.

>> So we're talking today about Capital One's migration to the Cloud. But Jim, let's start out with Capital One the bank, and why moving to the Cloud was a business imperative for you.

>> Essentially, as we look at Capital One, we have national reach in credit cards, people are very familiar with that, but we also wanted national reach in banking services too. And the approach we're using is not to go the old-fashioned way, bricks and mortar, but to actually go more into the way people like to interact with their financial services partners, and that's through mobile devices. And the only way to really get the kind of innovation you need, and to get the features to customers that they want on a regular basis, is to be very nimble and use strategies like DevOps, et cetera. And the Cloud really puts us in the position to do that, through the dynamic provisioning of infrastructure and all the different things that our Agile practices can take advantage of, so that we can regularly deliver new features to customers that they want.

>> So, Agile delivery, you mentioned Agile. What is it about Capital One's culture, in terms of its approach to innovation, that sort of enables that?

>> Well, we adopted Agile a number of years ago, and this is something where we really empower teams to work with the business to deliver these features on a recurring basis, regular releases.
That's ingrained in our culture. I don't think we'd be able to actually do this Cloud migration without that structure, because the teams themselves are doing the work. The teams themselves now have control over the infrastructure; no more centralized group doing all the work for them. It's really distributed to the teams. And so that's really become what's expected of our teams: that they can actually deploy when they need to and build as needed. Again, without the Cloud, without the AWS services that we're using, we simply would not be able to realize that, and the teams could not innovate the way that they are.

>> Chad, in terms of you, you've been working with Capital One for a few years now on this migration. What would you say about this company and about how its migration has gone?

>> Their innovation strategy, right? They want to be innovative. You heard Jim talk a little bit about that just now: how they go to market for their customers, how they create new service offerings for their customers. Take their new cafes, right? They don't have typical branches. You walk into a cafe, you can get a cup of coffee; yes, there are financial advisors in there, but that's not the main focus, it's not walking into a traditional branch bank. So take that theme across all of their different product sets, and being able to very quickly and iteratively roll out new products and services to the market that customers are desiring, and really kind of being a disruptor in the industry, frankly, is the approach that they're taking.

>> And as Accenture says, we are living in this age of epic disruption, so...

>> Epic disruption.

>> (laughing) Exactly. So Jim, one of the challenges in migrating legacy platforms is this lack of megadata... metadata, I'm sorry. (laughs) Megadata.

>> There's megadata.

>> There is megadata.

>> (laughing) I think we need the aid of the U.S. house bans to talk about that.

>> The mega needs meta.
(all laughing)

>> But this lack of metadata, how do you overcome that obstacle?

>> Well, it's been one of the more challenging things that we face. We have a lot of legacy systems that we're kind of unwinding and migrating to the cloud, and we're building new platforms for those new services. There's been a lot of rolling-up-the-sleeves work just to understand what all this is, the old-fashioned way. But what we're really able to do now, as we move things to the Cloud, as we move new applications to the Cloud, is use information that's now available to us that was not available to us before. VPC Flow Logs, for example, from Amazon, allow us to know what the connections are between all these different services, and we've been able to use some of their tools, and other tools that we've developed internally, to start to visualize this in a much better way. We would not have been able to do that in our legacy setup. And so this is something that we're now actually using to aid the migrations, to understand how things connect in a much better way. And really, looking forward, we're in a much better place: we now know what we have, and we're able to track it very well.

>> So Chad, Capital One is making it sound like it is pretty easy, (laughing) but we know that moving to the Cloud is actually really hard for so many financial institutions. Why has Capital One been able to succeed at a time when so many other banks are really struggling to do this?

>> Yeah, I think about it in a couple of ways. They're not afraid to lead and innovate and fail fast, right? So you get out there, you talked about an MVP, and how you would stand up a new service offering, or one of the applications, in the cloud, right? Go ahead and do that, get some momentum, get people excited about the progress that's being made. That's one thing. And really understanding that security shouldn't be an issue, right? There's ways to secure your data in the Cloud.
You can run core banking in the Cloud; Capital One's doing that, right? So there's things like that where some other institutions sort of have analysis paralysis, and they're like, "Well, I don't know if I can secure my data, I don't know if I can get the throughput that I need." Data latency may be an issue for banking, and you really have to bring the right architects to the table, and they did that. Capital One did a great job from the beginning of getting their people trained and certified in the Cloud technologies, mostly with AWS, frankly, and really making that a part of the culture of their organization. They don't consider themselves a financial services institution, really. They consider themselves a technology company.

>> Yep.

>> And that's the culture. When you walk into a Capital One building, not a bank...

>> A Peet's cafe. (laughs) Right? Yeah.

>> People center.

>> The center, yes, right, and the headquarters building. You feel like you're walking through a technology company; you don't feel like you're in a bank. And setting that culture and that expectation with all of the Capital One associates, I think, is a huge key to your success and how you guys were able to get everybody on board.

>> Yes.

>> You had your CEO, your CIO, all talking about: we're moving to Cloud, we're going to close our data centers, we're going to be all-in on public Cloud, and that's the marching orders. And that's the drum beat, right? And you kind of feel that when you're there.

>> And also, from our inception, we've been a test-and-learn company and culture. That is what we have built Capital One on: finding out what customers really want, responding to that, and iterating, and iterating, and iterating on different offerings. And it's no different with how we've approached our migration to the Cloud. We're going to set the minimum viable product as far as outcomes are concerned. We're going to test and learn, test and learn, test and learn.
We learn from those, it's the fail-fast kind of mentality, and we learn from some of those failures and adjust. And again, it really does fit our culture very well, historically.

>> Because there are so many trade-offs involved when you're thinking about these things. Is that how you sort of stick with the minimum viable product, this test-and-learn ethos?

>> Well, the test and learn is a way to get there. The minimum viable product is: this is our goal, let's be focused there so we can get to it. It takes some discipline to be able to say no; there's shiny objects over here and over here, but if we go that way, it's going to take us a little bit off track. So we spend a lot of time discussing what is MVP for the migration, for an application, whatever it might be, and sticking to that and making sure we stay true to it. We have regular reviews at a team level and at a program level to make sure we're staying the course and driving toward that.

>> And that's critical. So many of our customers think they have to have it all thought out, all planned out: the entire strategy, all of the different dependencies mapped out, how we're going to develop this in the Cloud. And they never get anywhere, because you can't absorb all of that at once. So you start small, you iterate, and you go from there.

>> When you're talking about getting inside the brains of customers and figuring out what they want and then delivering that, when we think about the bank of the future, Capital One has this digital-first strategy. What do you envision for how people will interact with their financial services institution?

>> Well, I have four kids, and they're all in their 20s, and so I observe them a lot and I learn from them a lot, and I can see what people want to do. They want to use their mobile devices; that's what they want to be able to do. They want to have access to their information at their fingertips, when they want it.
The cafes Chad mentioned are kind of our big step toward that; it's an educational offering more than anything else. Like, here's how you can do that, here's the things you can do with this. It's not a sales-oriented thing, it's an education-oriented thing, so people can understand what tools they have available, understand what products we have available to help them, and then go about their lives the way they want.

>> Great. What are some of the most exciting applications coming down the pipeline, in terms of this new way of banking that Capital One is showing us?

>> Do you want to take this one?

>> We've actually built our primary customer servicing application, the one our customers use every day, native in the Cloud. And we're continuing to iterate on that, so I think you don't have to look much further than our mobile app to see what we're super excited about and what we already offer to folks. And again, that's been enabled by our migration to the Cloud, so it's going to continue to iterate; we continue to learn from our customers what they want, what new features they want, and we continue to build those out.

>> Great.

>> And even from a call center perspective, you guys are using Amazon Connect, right?

>> Yes, we are.

>> To staff your call centers. And that has enabled a different way to interact with the customer. You have more data at your fingertips; you're learning some of the patterns from your customer calls in a way that you've not been able to do in the past. So enabling some of that data has also been effective in servicing those accounts and having that very good interaction with your customer.

>> Great. Chad, Jim, thank you so much for coming on the show. It was really fun.

>> Thank you.

>> Thank you. Thanks.

>> I'm Rebecca Knight. We will have more of the Cube's live coverage of the AWS Executive Summit coming up in just a little bit. (lively music)
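The VPC Flow Logs technique Jim describes, deriving service-to-service connections from network records, can be illustrated with a small parser. The two sample records below follow the default version-2 flow-log field order (srcaddr, dstaddr and action are the 4th, 5th and 13th fields); the addresses and the IP-to-service mapping are invented for the example, and a real pipeline would aggregate millions of records rather than two.

```python
def dependencies(flow_log_lines, service_of):
    """Extract accepted src->dst service edges from VPC Flow Log records."""
    edges = set()
    for line in flow_log_lines:
        fields = line.split()
        src, dst, action = fields[3], fields[4], fields[12]
        if action == "ACCEPT":  # rejected traffic is not a live dependency
            edges.add((service_of.get(src, src), service_of.get(dst, dst)))
    return edges


logs = [
    "2 123456789012 eni-0a1 10.0.1.5 10.0.2.9 49152 443 6 10 8000 1600000000 1600000060 ACCEPT OK",
    "2 123456789012 eni-0a1 10.0.3.7 10.0.2.9 49153 443 6 2 120 1600000000 1600000060 REJECT OK",
]
services = {"10.0.1.5": "web-app", "10.0.2.9": "auth-api", "10.0.3.7": "batch-job"}
print(dependencies(logs, services))  # {('web-app', 'auth-api')}
```

An edge set like this is exactly the input for the visualization tooling Jim mentions: it reconstructs the missing metadata about how legacy applications actually talk to each other.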

Published Date : Nov 28 2018


Tyler Duncan, Dell & Ed Watson, OSIsoft | PI World 2018


 

>> [Announcer] From San Francisco, it's theCUBE covering OSIsoft PIWORLD 2018, brought to you by OSIsoft. >> Hey, welcome back, everybody, Jeff Frick here with theCUBE, we're in downtown San Francisco at the OSIsoft PIWorld 2018. They've been doing it for like 28 years, it's amazing. We've never been here before, it's our first time, and really these guys are all about OT, operational technology. We talk about IoT and industrial IoT, they're doing it here. They're doing it for real and they've been doing it for decades, so we're excited to have our next two guests. Tyler Duncan, he's a Technologist from Dell, Tyler, great to see you. >> Hi, thank you. >> He's joined by Ed Watson, the global account manager for channels for Osisoft. Or OSIsoft, excuse me. >> Glad to be here. Thanks, Jeff. >> I assume Dell's one of your accounts. >> Dell is one of my accounts as well as Nokia so-- >> Oh, very good. >> So there's a big nexus there. >> Yep, and we're looking forward to Dell Technology World next week, I think. >> Next week, yeah. >> I think it's the first Dell Technology not Dell EMC World with-- >> That's right. >> I don't know how many people are going to be there, 50,000 or something? >> There'll be a lot. >> There'll be a lot. (laughs) But that's all right, but we're here today... >> Yeah. >> And we're talking about industrial IoT and really what OSIsoft's been doing for a number of years, but what's interesting to me is from the IT side, we kind of look at industrial IoT as just kind of getting here and it's still kind of a new opportunity, and looking at things like 5G and looking at things like IPE, ya know, all these sensors are now going to have IP connections on them. So, there's a whole new opportunity to marry the IT and the OT together.
The nasty thing is we want to move it out of those clean pristine data centers and get it out to the edge of the nasty oil fields and the nasty wind turbine fields and crazy turbines and these things, so, Edge, what's special about the Edge? What are you guys doing to take care of the special things on the Edge? >> Well, a couple things, I think being out there in the nasty environments is where the money is. So, trying to collect data from the remote assets that really aren't connected right now. In terms of the Edge, you have a variety of small gateways with which you can collect the data, but what we see now is a move toward more compute at the Edge, and that's where Dell comes in. >> Yeah, so I'm part of Dell's Extreme Scale Infrastructure group, ESI, and specifically I'm part of our modular data center team. What that means is that for us we are helping to deploy compute out at the Edge and also at the core, but the challenge at the Edge is, you mentioned the kind of the dirty area, well, we can actually change that environment so that it's not a dirty environment anymore. It's a different set of challenges. It may be more that it's remote, it's lights out, I don't have people there to maintain it, things like that, so it's not necessarily that it's dirty or ruggedized or that it's high temperature or extreme environments, it just may be remote.
You know we all want to keep everything, it's probably a little bit more practical if you're keepin' it back in the data center versus you're tryin' to store it at the Edge. So how are you looking at some of these factors in designing these solutions? >> [Ed] Well, Jeff, those are good points. And where OSIsoft PI comes in, for the modular data center is to collect all the power cooling and IT data, aggregate it, send to the Cloud what needs to be sent to the Cloud, but enable Dell and their customers to make decisions right there on the Edge. So, if you're using modular data center or Telecom for cell towers or autonomous vehicles for AR VR, what we provide for Dell is a way to manage those modular data centers and when you're talking geographically dispersed modular data centers, it can be a real challenge. >> Yeah, and I think to add to that, there's, when we start lookin' at the Edge and the data that's there, I look at it as kind of two different purposes. There's one of why is that compute there in the first place. We're not defining that, we're just trying to enable our customers to be able to deploy compute however they need. Now when we start looking at our control system and the software monitoring analytics, absolutely. And what we are doing is we want to make sure that when we are capturing that data, we are capturing the right amount of data, but we're also creating the right tools and hooks in place in order to be able to update those data models as time goes on. >> [Jeff] Right. >> So, that we don't have worry about if we got it right on day one. It's updateable and we know that the right solution for one customer and the right data is not necessarily the right data for the next customer. >> [Jeff] Right. >> So we're not going to make the assumptions that we have it all figured out. We're just trying to design the solution so that it's flexible enough to allow customers to do whatever they need to do. 
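Tyler's point about capturing "the right amount of data" at the edge is, in practice, often a local-summarization problem: keep raw, high-rate telemetry local and ship only compact summaries and exceptions back to the core, where bandwidth and storage are cheaper. A minimal sketch of that pattern (field names and thresholds are invented for illustration, not anything Dell or OSIsoft ships):

```python
# Hedged sketch of one common edge pattern: collapse a window of raw
# sensor readings into a small summary worth sending upstream, while
# the raw samples stay at the edge. Thresholds and keys are invented.

from statistics import mean

def summarize_window(readings, alert_threshold=90.0):
    """Reduce a window of raw readings to a compact upstream summary."""
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
    }
    # Only threshold breaches travel upstream immediately; everything
    # else is represented by the aggregate statistics above.
    summary["alerts"] = [r for r in readings if r > alert_threshold]
    return summary

window = [71.2, 70.8, 93.5, 72.0, 71.7]
print(summarize_window(window))
# {'count': 5, 'mean': 75.84, 'max': 93.5, 'alerts': [93.5]}
```

A real deployment would layer retry buffering and time-windowing on top, but the trade-off Tyler describes, deciding per customer what is summarized locally versus shipped back, reduces to choices like these.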
>> I'm just curious in terms of, it's obviously important enough to give you guys your own name, Extreme Scale. What is Extreme Scale? 'Cause you said it isn't necessarily because it's dirty data and hardened and kind of environmentally. What makes an Extreme Scale opportunity for you, that maybe some of your cohorts will bring you guys into an opportunity? >> Yeah so I think the Extreme Scale part of it is, it is just doing the right engineering effort, provide the right solution for a customer. As opposed to something that is more of a product base that is bought off of dell.com. >> [Jeff] Okay. >> Everything we do is solution based and so it's listening to the customer, what their challenges are and trying to, again, provide that right solution. There are probably different levels of what's the right level of customization based off of how much that customer is buying. And sometimes that is adding things, sometimes it's taking things away, sometimes it's the remote location or sometimes it's a traditional data center. So our Extreme Scale infrastructure encompasses a lot of different verticals-- >> And are most of the solutions that you develop kind of very customer specific or is there, you kind of come up with a solution that's more of an industry specific versus a customer specific? >> Yeah, we do, I would say everything we do is very customer specific. That's what our branch of Dell does. That said, as we start looking at more of the, what we're calling the Edge, I think there are things that have to have a little more of a blend of that kind of product analysis, or that look from a product side. I no longer know that I'm deploying 40 megawatts in a particular location on the map, instead I'm deploying 10,000 locations all over the world and I need a solution that works in all of those. It has to be a little more product based in some of those, but still customized for our customers. >> And Jeff, we talked a little bit about scale.
It's one thing to have scale in a data center. It's another thing to have scale across the globe. And this is where PI excels, in that ability to manage that scale. >> Right, and then how exciting is it for you guys? You've been at it awhile, but it's not that long that we've had things like Hadoop and we've had things like Flink and we've had things like Spark, and kind of these new-age applications for streaming data. But you guys were extracting value from these systems and making course corrections 30 years ago. So how are some of these new technologies impacting your guys' ability to deliver value to your customers? >> Well I think the ecosystem itself is very good, because it allows customers to collect data in a way that they want to. Our ability to enable our customers to take data out of PI and put it into Hadoop, or put it into a data lake or SAP HANA, really adds significant value in today's ecosystem. >> It's pretty interesting, because I look around the room at all your sponsors, a lot of familiar names, a lot of new names as well, but in our world in the IT space that we cover, it's funny we've never been here before, we cover a lot of big shows like Dell Technology World, so you guys have been doing your thing, has an ecosystem always been important for OSIsoft? It's very, very important for all the tech companies we cover, has it always been important for you? Or is it a relatively new development? >> I think it's always been important. I think it's more so now. No one company can do it all. We provide the data infrastructure and then allow our partners and clients to build solutions on top of it. And I think that's what sustains us through the years. >> Final thoughts on what's going on here today and over the last couple of days. Any surprises, hall chatter that you can share that you weren't expecting or that really validates what's going on in this space. A lot of activity going on, I love all the signs over the building.
This is the infrastructure that makes the rest of the world go whether it's power, transportation, what do we have behind us? Distribution, I mean it's really pretty phenomenal the industries you guys cover. >> Yeah and you know a lot of the sessions are videotaped so you can see Tyler from last year when he gave a presentation. This year Ebay, PayPal are giving presentations. And it's just a very exciting time in the data center industry. >> And I'll say on our side maybe not as much of a surprise, but also hearing the kind of the customer feedback on things that Dell and OSIsoft have partnered together and we work together on things like a Redfish connector in order to be able to, from an agnostic standpoint, be able to pull data from any server that's out there, regardless of brand, we're full support of that. But, to be able to do that in an automatic way that with their connector so that whenever I go and search for my range of IP addresses, it finds all the devices, brings all that data in, organizes it, and makes it ready for me to be able to use. That's a big thing and that's... They've been doing connectors for a while, but that's a new thing as far as being able to bring that and do that for servers. That, if I have 100,000 servers, I can't manually go get all those and bring them in. >> Right, right. >> So, being able to do that in an automatic way is a great enablement for the Edge. >> Yeah, it's a really refreshing kind of point of view. We usually look at it from the other side, from IT really starting to get together with the OT. Coming at it from the OT side where you have such an established customer base, such an established history and solution set and then again marrying that back to the IT and some of the newer things that are happening and that's exciting times. >> Yeah, absolutely. >> Yeah. >> Well thanks for spending a few minutes with us. And congratulations on the success of the show. >> Thank you. >> Thank you. 
>> Alright, he's Tyler, he's Ed, I'm Jeff. You're watching theCUBE from downtown San Francisco at OSIsoft PI WORLD 2018, thanks for watching. (light techno music)
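The Redfish connector Tyler describes works because Redfish is a DMTF standard: every compliant server, regardless of brand, exposes the same Systems collection over HTTPS, so one walker covers all vendors. A rough sketch of that walk (the payloads below are hand-made samples, not captures from real hardware; a live connector would fetch each `@odata.id` with an authenticated GET):

```python
# Sketch of vendor-agnostic server discovery in the Redfish style.
# The collection document lists members by @odata.id; each member
# document carries the same standard properties on any brand.

SYSTEMS_COLLECTION = {
    "Members": [
        {"@odata.id": "/redfish/v1/Systems/1"},
        {"@odata.id": "/redfish/v1/Systems/2"},
    ]
}

# Canned stand-ins for what GETs against those paths would return.
SYSTEM_DETAILS = {
    "/redfish/v1/Systems/1": {"Manufacturer": "VendorA", "Model": "X1", "PowerState": "On"},
    "/redfish/v1/Systems/2": {"Manufacturer": "VendorB", "Model": "Y9", "PowerState": "Off"},
}

def inventory(collection, fetch):
    """Walk the Systems collection and normalize each member to a row."""
    rows = []
    for member in collection["Members"]:
        detail = fetch(member["@odata.id"])
        rows.append((detail["Manufacturer"], detail["Model"], detail["PowerState"]))
    return rows

print(inventory(SYSTEMS_COLLECTION, SYSTEM_DETAILS.get))
# [('VendorA', 'X1', 'On'), ('VendorB', 'Y9', 'Off')]
```

Because `fetch` is just a callable, the same loop runs against canned data here or an HTTP session in production; that substitutability is what makes "scan a range of IPs and auto-discover everything" feasible at 100,000-server scale.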

Published Date : May 29 2018


Duncan Epping, VMware | VeeamON 2018


 

>> Narrator: Live from Chicago, Illinois, it's theCUBE, covering VeeamOn 2018. Brought to you by Veeam. >> Welcome back to Chicago everybody. You're watching theCUBE, the leader in live tech coverage, and we are covering VeeamOn 2018, #VeeamOn. My name is Dave Vellante, and I'm here with my cohost Stuart Miniman. Duncan Epping is here, Chief Technologist, Storage and Availability at VMWare and the world's number one blogger in virtualization, Yellow Bricks, yellow-bricks.com. Duncan, thanks very much for coming to theCUBE. Good to see you. >> No problem, my pleasure, it's been a while. I actually hoped to be on the show probably six, seven, eight years ago, I don't know how long it is, but I've watched many episodes. So it's great to be part of it. >> Well great, Duncan, one of the biggest problems is you're so busy, every year at VM World you were totally booked up, so, no, thanks so much, we're so glad we could do this. >> So Stu and I remember the peer insight we did many many years ago back when we had Boonyon on recently, and he was talking about when VMWare sort of created virtualization, it pushed the bottleneck around. It created a lot of stress on the storage systems. And VMWare for years dealt with that through API integration and the like and very well sort of covered. But I wonder if you could take us through your perspectives of the journey of storage at VMWare and generally, or specifically, and virtualization generally. >> Yeah, it's a good question. I think everyone that has been part of the community has faced all of the different challenges from a storage perspective. I mean, Stu, you know what kind of problem EMC had when VMWare first started doing virtualization. And I think the key reasons for these were fairly straightforward. When we started virtualization and we started leveraging shared storage systems, those shared storage systems were never designed with virtualization in the back of their minds.
They were designed for physical workloads, maybe one or two machines connected to it, you know in larger volume it may be 10 or 15, but not 10 or 15 physical hosts with hundreds of virtual machines. So what we started noticing is that from a performance perspective systems were lagging, we were doing all sorts of things to the storage systems that they weren't expecting, virtual machine snapshots. They were seeing IO patterns that they had never seen before. Instead of sequential IO we had a lot of random IOs, so we had to start doing different things from a storage perspective. So as you said, we started with APIs, we had the vSphere APIs for IO Filtering, we have the VAAI APIs, the array integration, so that we can offload some of the functionality, but of course on top of that what we started doing within VMWare is we started exploring what we could do smarter from a storage standpoint. So not just looking at how we can help the ecosystem, but also what we can do from our perspective, so there were two main efforts over the past couple of years. The first one is virtual volumes. It has taken a while before the adoption ramped up. I think part of that is mainly because a lot of our customer base was still on vSphere 5.5. Now that we're starting to see broader adoption of vSphere 6.0 and actually 6.5 and 6.7, we're starting to see the adoption of stuff like virtual volumes go up as well. That is also due to the fact that our partners like Pure Storage, Nimble, HP with 3PAR have been pushing VVols tremendously. So they've done a great job, and we're starting to see a lot of customers adopting VVols, and that way we're getting around some of the limitations that we have from a traditional storage perspective. >> Explain that, what are customers telling you about the benefits that they're getting out of VVols and VVol adoption? >> Well, there's two main things.
It kind of depends on what kind of problems you're facing, but a lot of customers come to us with management issues and scalability issues. From a scalability perspective we have larger customers that literally have thousands of volumes. If you look at an ESXi cluster today you're limited in terms of the number of volumes you can connect to a cluster. So that's one thing. As soon as they start moving to VVols now they're not managing those individual volumes anymore but they're managing the storage system as a whole, and they start creating policies, and that's where the management aspect comes into play. So it becomes a lot easier to manage, because instead of having thousands of volumes to select from, they no longer have to look at a spreadsheet, for instance, to figure out where to place a virtual machine; now they simply make a policy and the policy engine will figure out where to place that virtual machine. >> Dave: It sounds like cloud. >> It actually is, you know, the cloud version of, cloudified version of storage I would say. But it brings a lot of benefits. And the funny thing is that we've been talking about policies and policy engines for a long time, even in the cloud, but try to come up with one cloud that actually has a decent policy engine. Hardly anyone has that today. From a storage perspective I think the storage policy-based management framework that VMWare has is quite unique. Well now we're starting to see that popping up in other areas, and that's the strange thing about it. >> Always back to the software mainframe Stu. >> Yeah, and Duncan one of the things we've really seen, a transition for, it took us about a decade to try and fix storage in a virtualized environment, and today most things are built either understanding virtualization, or at least that's part of the puzzle, and then of course what VVols led us into was the ability for vSAN.
Help us kind of transition that threshold as to how that's just kind of a given underneath for vSAN and other solutions like it. >> Yeah, if you look at vSAN it has been around for a while. The beta was in 2013, as you guys know. We have a large adoption, at least we saw a large increase over the last couple of years, I would say the last two years. You guys have spoken with Yanbing before, so you know about the business side of vSAN, I'm not going to cover that, but if you look at it from a technology perspective we started developing this in 2008, 2009, that's when we started thinking about what we could do different from a storage perspective. There were already some companies doing something in the hyperconverged space and we figured we could do something significantly different than they were doing. They had a storage solution that sat on top of the hypervisor; we own the hypervisor so we can create something that sits within the hypervisor, and that's when we started looking at including these different technologies. So we started looking into how we can introduce things like deduplication and compression. What can we do for ROBO solutions? Can we do something like stretch clustering in an easy way? There are a lot of stretch cluster solutions out there, but if you look at a stretch clustering solution today it typically takes weeks to implement that. If you look at something like vSAN, it was our aim to actually be able to deploy something like that from a storage perspective within hours instead of weeks, right? And we've been able to achieve that, and it has been a huge undertaking, but I think it's fair to say that it has been rather successful. >> All right, Duncan, help connect the dots to where we are here at VeeamOn. It's funny, I think Veeam started out heavily in virtualization, still heavily involved in virtualization, they've got a v in the beginning of their name.
When I heard the keynote this morning, a lot of the "hyper" talk reminded me of before hyperconverged fully took over, when VMWare tried to call it a hypervisor-converged system around VMWare. So talk to us a little bit about data protection, the Veeam relationship and how that fits into things like vSAN and vSphere? >> Yeah I think, I talk to a lot of customers as a Chief Technologist, it's part of my role to talk to customers and have discussions about what's on top of their mind. Data protection is always one of those things that comes up. I would say it's always in the top three. Whenever you talk to a CIO, a CTO, protection of the data, availability of data, resiliency, reliability, it's fairly important. Veeam of course, for us, is a great partner. Primarily because of the simplicity of the features and the products that they offer. Whenever I talk to a customer and they explain how difficult it is to manage their backup and recovery solutions I always point them to a partner like Veeam simply because it's going to make their life a lot easier if you ask me. And I can see that Veeam is slowly transitioning. As you mentioned, the v is in front of the name. The v is in front of our name as well, but we know that it's not, the whole world isn't just VMWare and the whole world isn't just virtual. There's a lot of other different solutions out there, and actually Veeam's looking at other revenue streams as well. I would argue, though, if you're looking at something like the edge space, which I think they're more or less exploring, looking at things like IoT, there's going to be some form of virtualization within that; whether that's VMWare based or another solution of course is going to be the question. That is something that we'll need to figure out in the upcoming years, but I think there's a big opportunity out there. If you ask me, the keynote was really interesting. I kind of missed the in-depth details.
I'm hoping that the closing keynote is going to give some more details on what they will be doing in the IoT space, how they see their solution evolving from that point of view, because it's a market that's still being developed, but that's definitely going to be interesting. >> So Duncan it's interesting to hear you say that when you talk to customers data protection is in the top three, even amongst CIOs. It used to not be that way. Data protection was always a bolt on, it was an afterthought, it was kind of one size fits all. What's changed? >> Well I think the importance of the data has changed. If you look over the last 10 years whenever you talk to any company out there that has lost any significant amount of data they understand what the value was of the data that they were hosting. I think the big difference over the past 10 years is in the past we had applications like email, maybe some file services and that's it. But now everything revolves around applications, and that's also the shift that I'm seeing in the industry. Also from an IT perspective, right? In the past, over the past decade I think everyone has been focused on the infrastructure layer. If you look at something like VBlock, very much infrastructure focused. If you look at something like hyperconverged solutions, very much infrastructure focused. But now whenever we talk to customers, customers are more and more interested in what we can do for the application layer. What kind of benefits do we have for Exchange, for Oracle, for SAP, you name it? I think that's also a big change that's happening in the industry right now. >> One of the things from a technical perspective, and there may be others, but when VMWare really became prominent it was wonderful but we were reducing the number of physical resources, and the one workload that took a lot of physical resources was backup, and then sort of Veeam swept in and took advantage of that sea change.
What's the technical constraint now when you think about things like multi cloud and SaaS and IoT, data's much more distributed, it's out of the control necessarily of a single platform. So from a technical perspective, what's the big challenge and sort of the gate to architectures today? >> Well as you said, the distribution of data is the big challenge as it stands right now from a technical perspective. I think the biggest challenge that most of the players in this space, and not just Veeam, some other players as well, will have is trying to figure out how to control and manage their data. Other platforms are facing similar challenges. And no one really has solved this problem yet. We're starting to see some players in this space that have solutions that sit out in Azure, that sit out in Google Cloud, but it's a very challenging solution, and I think if you ask me, and this is something that I've said internally as well, the company that is capable of managing and owning the data is the company that's probably going to be most successful in the cloud war that's now happening. I think that's the most critical aspect. Workloads can move around, but data is very difficult to move around and own as well. >> Duncan one of the discontinuities we see in the marketplace that you mentioned earlier, wondering if you can talk to, in the enterprise in the data center, how do we get them to get to that next version? Comfortable with it, it's stable, it works. I look at the cloud, I'm running Microsoft Azure or AWS, I'm running the version that they want. How do we help close that gap? Because from a security standpoint, from a features standpoint, we need to move there, but you know it seems to be just one of the greatest disconnects we see between kind of my data center and somebody else's cloud. >> That is a great question. I think we had a lot of challenges in the past. I think it's fair to say vSphere 5.0 was a great release, 5.5 again a great release.
But the challenges that we have from an upgrade perspective were typically vCenter and all of the components connected to it. It's not just the vSphere platform, but if you look at the vSphere platform, the challenges that we had were all of the components integrating with it, whether that's something like vROps or VRA, vRealize Automation, but it could also be something like Avamar or maybe Veeam. So there were so many different components we had to take into account. So what we started doing within VMWare was simplifying the architecture from a vSphere perspective. If you look at vCenter for instance, it used to be a solution where we had multiple functions spread out across different virtual machines. We're now trying to bring that back into a single virtual machine again. Actually dumbing it down, making it easier to upgrade. So that is something that is actively happening within VMWare, and it is something that we started with 6.0, and that's also the reason why we see the adoption from 6.0 to 6.5 and 6.5 to 6.7 is at a much faster pace than in the 5 code stream; so 5 to 5.1, for instance, took a lot longer for a lot of customers, or 5.1 to 6.0 took extremely long for a lot of customers. The key reason is complexity from an infrastructure standpoint. Well, we're changing that, we're evolving that in the upcoming years. >> Duncan it's the last question here, but as the technologist, things that you're looking at that are exciting to you, that you know, get your juices flowing? >> Yeah, that's an interesting one because it's something that I've been thinking about recently. I've been doing vSphere for the last, well it wasn't even called vSphere back then, but I've been doing this for the last 12 years, virtualization. Thirteen years maybe, something like that. At least as a consultant and then as a technologist and technical marketing, but recently I'm starting to look more and more at the edge space.
Edge computing, IoT, I think that's a really interesting space, especially because, well, there is a significant market out there, but there isn't really one player out there that really stands out. No one has really figured out what customers would like to do with it and how customers are going to use it. So the edge computing space and IoT is a really interesting thing, especially because the distributed aspect is one of the things that I've always been passionate about; vSphere clusters are a distributed mechanism. So distributed computing is definitely something that has my interest. >> All right, if you care about virtualization, VMware, follow the yellow brick road, yellow-bricks.com. Duncan, thanks very much for coming on theCUBE. >> Thanks for having me guys. >> You're welcome. All right, keep it right there buddy. We'll be back with our next guest. You're watching theCUBE live from Chicago, VeeamOn 2018. We'll be right back. (techno music)

Published Date : May 15 2018


Tyler Duncan, Dell & Ed Watson, OSIsoft | PI World 2018


 

>> Announcer: From San Francisco, it's theCUBE covering OSIsoft PIWORLD 2018, brought to you by OSIsoft. >> Hey, welcome back, everybody, Jeff Frick here with theCUBE, we're in downtown San Francisco at OSIsoft PIWorld 2018. They've been doing it for like 28 years, it's amazing. We've never been here before, it's our first time and really these guys are all about OT, operational technology. We talk about IoT and industrial IoT, they're doing it here. They're doing it for real and they've been doing it for decades, so we're excited to have our next two guests. Tyler Duncan, he's a Technologist from Dell, Tyler, great to see you. >> Hi, thank you. >> He's joined by Ed Watson, the global account manager for channels for Osisoft. Or OSIsoft, excuse me. >> Glad to be here. Thanks, Jeff. >> I assume Dell's one of your accounts. >> Dell is one of my accounts as well as Nokia so-- >> Oh, very good. >> So there's a big nexus there. >> Yep, and we're looking forward to Dell Technology World next week, I think. >> Next week, yeah. >> I think it's the first Dell Technology World, not Dell EMC World, with-- >> That's right. >> I don't know how many people are going to be there, 50,000 or something? >> There'll be a lot. >> There'll be a lot. (laughs) But that's all right, but we're here today... >> Yeah. >> And we're talking about industrial IoT and really what OSIsoft's been doing for a number of years, but what's interesting to me is from the IT side, we kind of look at industrial IoT as just kind of getting here and it's still kind of a new opportunity, and looking at things like 5G and looking at things like IPE, ya know, all these sensors are now going to have IP connections on them. So, there's a whole new opportunity to marry the IT and the OT together.
The nasty thing is we want to move it out of those clean pristine data centers and get it out to the edge of the nasty oil fields and the nasty wind turbine fields and crazy turbines and these things, so, Edge, what's special about the Edge? What are you guys doing to take care of the special things on the Edge? >> Well, a couple things, I think being out there in the nasty environments is where the money is. So, trying to collect data from the remote assets that really aren't connected right now. In terms of the Edge, you have a variety of small gateways that you can collect the data with, but what we see now is a move toward more compute at the Edge and that's where Dell comes in. >> Yeah, so I'm part of Dell's Extreme Scale Infrastructure group, ESI, and specifically I'm part of our modular data center team. What that means is that for us we are helping to deploy compute out at the Edge and also at the core, but the challenge at the Edge is, you mentioned the kind of dirty area, well, we can actually change that environment so that it's not a dirty environment anymore. It's a different set of challenges. It may be more that it's remote, it's lights out, I don't have people there to maintain it, things like that, so it's not necessarily that it's dirty or ruggedized or that it's high temperature or extreme environments, it just may be remote. >> Right, there's always this kind of balance in terms of, I assume it's all application specific as to what can you process there, what do you have to send back to process, there's always this nasty thing called latency and the speed of light that just gets in the way all the time. So, how are you redesigning systems? How are you thinking about how much compute and storage do you put out on the Edge? How do you break up what you send back to central processing?
You know we all want to keep everything, it's probably a little bit more practical if you're keepin' it back in the data center versus tryin' to store it at the Edge. So how are you looking at some of these factors in designing these solutions? >> Ed: Well, Jeff, those are good points. And where OSIsoft PI comes in, for the modular data center, is to collect all the power, cooling, and IT data, aggregate it, send to the Cloud what needs to be sent to the Cloud, but enable Dell and their customers to make decisions right there on the Edge. So, if you're using a modular data center for Telecom cell towers or autonomous vehicles or AR VR, what we provide for Dell is a way to manage those modular data centers, and when you're talking geographically dispersed modular data centers, it can be a real challenge. >> Yeah, and I think to add to that, when we start lookin' at the Edge and the data that's there, I look at it as kind of two different purposes. There's one of why is that compute there in the first place. We're not defining that, we're just trying to enable our customers to be able to deploy compute however they need. Now when we start looking at our control system and the software monitoring analytics, absolutely. And what we are doing is we want to make sure that when we are capturing that data, we are capturing the right amount of data, but we're also creating the right tools and hooks in place in order to be able to update those data models as time goes on. >> Jeff: Right. >> So that we don't have to worry about if we got it right on day one. It's updateable and we know that the right solution for one customer and the right data is not necessarily the right data for the next customer. >> Jeff: Right. >> So we're not going to make the assumption that we have it all figured out. We're just trying to design the solution so that it's flexible enough to allow customers to do whatever they need to do.
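The edge-versus-cloud tradeoff Jeff and Ed are circling — what to process locally, what to send back, given latency and bandwidth — can be sketched in a few lines. This is a hedged illustration, not Dell's or OSIsoft's actual software: the field names and the anomaly threshold are invented for the example.

```python
# Hedged sketch of the edge/cloud split discussed above: process chatty sensor
# telemetry locally and ship only compact summaries back to the data center,
# keeping full-fidelity raw data only when something looks anomalous.
# Thresholds and field names are illustrative, not any vendor's API.
from statistics import mean


def summarize_window(readings: list[float]) -> dict:
    """Reduce a window of raw readings to the summary the cloud actually needs."""
    return {"count": len(readings), "mean": mean(readings),
            "min": min(readings), "max": max(readings)}


def edge_uplink(readings: list[float], anomaly_threshold: float) -> dict:
    """Send raw data only when something looks wrong; otherwise summary only."""
    summary = summarize_window(readings)
    if summary["max"] >= anomaly_threshold:
        return {"summary": summary, "raw": readings}  # anomaly: keep full fidelity
    return {"summary": summary, "raw": None}          # normal: summary only


if __name__ == "__main__":
    calm = [20.1, 20.3, 20.2, 20.4]
    spike = [20.1, 20.3, 87.0, 20.4]
    print(edge_uplink(calm, anomaly_threshold=50.0))
    print(edge_uplink(spike, anomaly_threshold=50.0))
```

The design choice mirrors the conversation: the summary is always cheap enough to ship, while the "keep everything" cost is paid only for the windows that justify it.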
>> I'm just curious in terms of, it's obviously important enough to give you guys your own name, Extreme Scale. What is Extreme Scale? 'Cause you said it isn't necessarily because it's dirty and hardened and kind of environmental. What makes an Extreme Scale opportunity for you, that maybe some of your cohorts will bring you guys into an opportunity? >> Yeah so I think for the Extreme Scale part of it, it is just doing the right engineering effort to provide the right solution for a customer. As opposed to something that is more of a product base that is bought off of dell.com. >> Jeff: Okay. >> Everything we do is solution based and so it's listening to the customer, what their challenges are, and trying to, again, provide that right solution. There are probably different levels of what's the right level of customization based off of how much that customer is buying. And sometimes that is adding things, sometimes it's taking things away, sometimes it's the remote location or sometimes it's a traditional data center. So our Extreme Scale Infrastructure encompasses a lot of different verticals-- >> And are most of the solutions that you develop kind of very customer specific or is there, you kind of come up with a solution that's more industry specific versus customer specific? >> Yeah, we do, I would say everything we do is very customer specific. That's what our branch of Dell does. That said, as we start looking at more of what we're calling the Edge, I think there are things that have to have a little more of a blend of that kind of product analysis, or that look from a product side. I no longer know that I'm deploying 40 megawatts in a particular location on the map, instead I'm deploying 10,000 locations all over the world and I need a solution that works in all of those. It has to be a little more product based in some of those, but still customized for our customers. >> And Jeff, we talked a little bit about scale.
It's one thing to have scale in a data center. It's another thing to have scale across the globe. And this is where PI excels, in that ability to manage that scale. >> Right, and then how exciting is it for you guys? You've been at it awhile, but it's not that long that we've had things like Hadoop and we've had things like Flink and we've had things like Spark, and kind of these new age applications for streaming data. But you guys were extracting value from these systems and making course corrections 30 years ago. So how are some of these new technologies impacting your guys' ability to deliver value to your customers? >> Well I think the ecosystem itself is very good, because it allows customers to collect data in a way that they want to. Our ability to enable our customers to take data out of PI and put it into Hadoop, or put it into a data lake or SAP HANA, really adds significant value in today's ecosystem. >> It's pretty interesting, because I look around the room at all your sponsors, a lot of familiar names, a lot of new names as well, but in our world in the IT space that we cover, it's funny we've never been here before, we cover a lot of big shows like Dell Technology World, so you guys have been doing your thing, has an ecosystem always been important for OSIsoft? It's very, very important for all the tech companies we cover, has it always been important for you? Or is it a relatively new development? >> I think it's always been important. I think it's more so now. No one company can do it all. We provide the data infrastructure and then allow our partners and clients to build solutions on top of it. And I think that's what sustains us through the years. >> Final thoughts on what's going on here today and over the last couple of days. Any surprises, hall chatter that you can share, that you weren't expecting or that really validates what's going on in this space. A lot of activity going on, I love all the signs over the building.
This is the infrastructure that makes the rest of the world go, whether it's power, transportation, what do we have behind us? Distribution, I mean it's really pretty phenomenal the industries you guys cover. >> Yeah and you know a lot of the sessions are videotaped so you can see Tyler from last year when he gave a presentation. This year eBay and PayPal are giving presentations. And it's just a very exciting time in the data center industry. >> And I'll say on our side maybe not as much of a surprise, but also hearing the kind of customer feedback on things that Dell and OSIsoft have partnered together on. We work together on things like a Redfish connector in order to be able to, from an agnostic standpoint, pull data from any server that's out there, regardless of brand, and we're in full support of that. But to be able to do that in an automatic way with their connector, so that whenever I go and search for my range of IP addresses, it finds all the devices, brings all that data in, organizes it, and makes it ready for me to be able to use. That's a big thing and that's... They've been doing connectors for a while, but that's a new thing as far as being able to bring that and do that for servers. That, if I have 100,000 servers, I can't manually go get all those and bring them in. >> Right, right. >> So, being able to do that in an automatic way is a great enablement for the Edge. >> Yeah, it's a really refreshing kind of point of view. We usually look at it from the other side, from IT really starting to get together with the OT. Coming at it from the OT side where you have such an established customer base, such an established history and solution set, and then again marrying that back to the IT and some of the newer things that are happening, that's exciting times. >> Yeah, absolutely. >> Yeah. >> Well thanks for spending a few minutes with us. And congratulations on the success of the show. >> Thank you. >> Thank you.
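The Redfish-based discovery Tyler describes — scan a range of IP addresses and pull telemetry from any vendor's server through the standard endpoints — follows the DMTF Redfish pattern. The sketch below is a minimal illustration of that idea, not OSIsoft's actual connector: it only builds the candidate URLs and normalizes a sample response, and omits the HTTP probing and authentication a real connector needs.

```python
# Hedged sketch of brand-agnostic server discovery via DMTF Redfish:
# expand an IP range into the standard /redfish/v1/Systems endpoints,
# then flatten a ComputerSystem resource into vendor-neutral fields.
# This is illustrative only; it is not any vendor's shipping connector.
import ipaddress

REDFISH_SYSTEMS = "/redfish/v1/Systems"  # standard Redfish systems collection


def candidate_endpoints(cidr: str) -> list[str]:
    """Expand an address range into Redfish URLs to probe (one per host)."""
    return [f"https://{host}{REDFISH_SYSTEMS}"
            for host in ipaddress.ip_network(cidr).hosts()]


def normalize_system(payload: dict) -> dict:
    """Flatten a Redfish ComputerSystem resource into brand-agnostic fields."""
    return {
        "manufacturer": payload.get("Manufacturer", "unknown"),
        "model": payload.get("Model", "unknown"),
        "serial": payload.get("SerialNumber", "unknown"),
        "power_state": payload.get("PowerState", "unknown"),
        "health": payload.get("Status", {}).get("Health", "unknown"),
    }


if __name__ == "__main__":
    # RFC 5737 documentation range: nothing is actually contacted here.
    for url in candidate_endpoints("192.0.2.0/29"):
        print(url)
    sample = {"Manufacturer": "Dell Inc.", "Model": "PowerEdge R740",
              "SerialNumber": "ABC123", "PowerState": "On",
              "Status": {"Health": "OK"}}
    print(normalize_system(sample))
```

Because every Redfish-conformant BMC exposes the same resource shapes, the normalization step is what makes "100,000 servers, any brand" tractable.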
>> Alright, he's Tyler, he's Ed, I'm Jeff. You're watching theCUBE from downtown San Francisco at OSIsoft PI WORLD 2018, thanks for watching. (light techno music)

Published Date : Apr 28 2018


Duncan Angove, Infor - Inforum 2017 - #Inforum2017 - #theCUBE


 

>> Announcer: Live from the Javits Center in New York City, it's theCUBE. Covering Inforum 2017. Brought to you by Infor. >> Welcome back to Inforum 2017 everybody. This is theCUBE, the leader in live tech coverage. Duncan Angove is here, the President of Infor and a Cube alum. Good to see you again Duncan. >> Hey, afternoon guys. >> So it's all coming together right? When we first met you guys down in New Orleans, we were sort of unpacking, trying to squint through what the strategy is. Now we call it the layer cake, we were talking about it off camera, and it's really starting to be cohesive. But set up sort of what's been going on at Infor. How are you feeling? What's the vibe like? >> Yeah it's been an amazing journey over the last six years. And, um, you know, all the investments we put in products, as you know, we said to you guys way back then, we've always put products at the center. Our belief is that if you put innovation and dramatic amounts of investment in the core product, everything else ends up taking care of itself. And we put our money where our mouth was. You know, we're a private company, so we can be fairly aggressive on the level of investment we put into R&D, and it's increased double digit every single year. And I think the results you've seen over the last two years, in terms of our financials, is that, you know, the market's voting in a way that we're growing double digits, dramatically faster than our peers. So that feels pretty good. >> So Jim is, I know, dying to get into the AI piece, but let's work our way up that sort of strategy layer cake with an individual who had a lot to do with that. So you know, you guys started with the decision of Micro-verticals and you know the interesting thing to us is you're starting to see some of the big SI's join in. And I always joke that they love to eat at the trough. But you took a lot of the food away by doing that last mile. >> Yeah. >> But now you're seeing them come in, why is that?
>> You know I think the whole industry is evolving. And the roles that different companies in that ecosystem play, and the value they deliver, are changing, whether it's an enterprise software vendor or a systems integrator. Everything's changing. I mean, The Cloud was a big part of that. That took away tasks that you would sometimes see a systems integrator doing. As larger companies started to build more completely integrated suites, that took away the notion that you need a systems integrator to plug all those pieces together. And then the last piece for us was that all of the modifications that were done to those suites of software to cover off gaps in industry functionality or gaps in localizations for a country should be done inside the software. And you can only do that if you have a deep focus, by industry, on going super, super deep at a rapid rate on covering off what we call these last-mile features. So that means that the role of the systems integrators shifted. I mean they've obviously pivoted more recently into a digital realm. They've all acquired digital agencies. And they're having to adapt to this world where you have these suites of software that run in The Cloud that don't need as much integration or as much customization. So we were there, you know, five, six years ago. They weren't quite there. It was still part of this symbiotic relationship with other large vendors. And I think now, you know, the reason for the first time we've got guys like Accenture, and Deloitte, and Capgemini, and Grant Thornton here, is that they see that. And their business model's evolved. And you know those guys obviously like to be where they can win business and like to build practices around companies they see winning business. So the results we've seen and the growth we've seen over the last two to three years, obviously that's something they want a piece of. So I think it's going to work out.
>> Alright so Jim, you're going to have to bear with me a second 'cause I want to keep going up the stack. So the second big milestone decision was AWS. >> Duncan: Yeah. >> And we all understand the benefits of AWS. But there's two sides to that coin and one is, when you show your architectural diagram, there's a lot of AWS in there. There's S3, there's DynamoDB, I think I saw Kinesis in there. I'm sure there's some EC2 and other things. And it just allows you to focus on what you do best. At the same time, you're getting an increasingly complex data pipeline and ensuring end-to-end performance has to be, technically, a real challenge for you. So, I wanted to ask you about that and see if you could comment on how you're managing that. >> Yeah so, I mean obviously, we were one of the first guys to actually go all in on Amazon as a Cloud delivery platform. And obviously others now have followed. But we're still one of their top five ISV's on there. The only company that Amazon reps actually get compensated on. And it's a two way relationship right? We're not just using them as a Cloud delivery partner. We're also using some of their components. You know you talked about some of their data storage components. We also leverage them for AI which we'll get into in a second. But it's a two way relationship. You know, they run our asset management facility for all of their data centers globally. We do all the design and manufacturing of their drones and robots. We're partnered with them on the logistics side. So it's a deep two way relationship. But to get to your question on just sort of the volume and the integration. We work on integrations with staggering volumes right? I mean, retail, you're dealing with billions and billions of data points. And we'll probably get into that in a second you know. The whole asset management space is one of the fastest growing applications we have. Driven by the dynamics of IoT and the explosion in device data and all of that.
So we've had for a very, very long time, had to figure out an efficient way to move large amounts of data that can be highly chatty. And do it in an efficient way. And sometimes it's less about the pipes in moving it around, it's how you ingest that data into the right technology from a data storage perspective. Ingest it and then turn it into insights that can power analytics or feed back into our applications to drive execution. Whether it's us predicting maintenance failure on a pump and then feeding that back into asset management to create a work order and schedule an engineer on it. Right? >> That's not a trivial calculus. Okay, now we're starting to get into Jim's wheelhouse, which is, you call it, I think you call it the "Age of Network Intelligence". And that's the GT Nexus acquisition. >> Yeah. >> To us it's all about the data. I think you said 18 years of transaction history there. So, talk about that layer and then we'll really get into the data the burst piece and then of course the AI. >> Yeah, so there were two parts to why we called it "The Age of Network Intelligence". And it's not often that technology or an idea comes along in human history that actually bends the curve of progress right? And I think that we said it on stage, the steam engine was one of those and it lead to the combustion engine, it lead to electricity and it lead to the internet and the mobile phone and it all kind of went. Of course it was invented by a British man, an Englishman you know? That doesn't happen very often right? Where it does that. And our belief is that the rise of networks, coupled with the rise of artificial intelligence, those two things together will have the same impact on society and mankind. And it's bigger than Infor and bigger than enterprise software, it's going to change everything. And it's not going to do it in a linear way. It's going to be exponential. 
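The closed loop Duncan describes — ingest chatty device data, predict a failure on a pump, and feed that prediction back into asset management as a work order — can be sketched as below. This is a hedged illustration, not Infor's actual pipeline: the threshold rule stands in for whatever model a real system would train, and every name here is invented for the example.

```python
# Hedged sketch of a predict-then-execute loop for asset management:
# score recent vibration readings against a baseline, and if the drift
# is large, emit a work order back into the execution system.
# The rule-based "model" and all identifiers are illustrative only.
from dataclasses import dataclass
from statistics import mean
from typing import Optional


@dataclass
class WorkOrder:
    asset_id: str
    reason: str
    priority: str


def failure_risk(vibration_readings: list[float], baseline: float) -> float:
    """Crude risk score: how far the recent vibration drifts above baseline."""
    recent = mean(vibration_readings[-5:])
    return max(0.0, (recent - baseline) / baseline)


def maybe_create_work_order(asset_id: str, readings: list[float],
                            baseline: float,
                            threshold: float = 0.25) -> Optional[WorkOrder]:
    """Feed the prediction back into execution: open a work order if risk is high."""
    risk = failure_risk(readings, baseline)
    if risk > threshold:
        return WorkOrder(asset_id, f"vibration drift {risk:.0%} over baseline", "high")
    return None


if __name__ == "__main__":
    healthy = [1.0, 1.02, 0.98, 1.01, 1.0, 0.99]
    drifting = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
    print(maybe_create_work_order("PUMP-0042", healthy, baseline=1.0))
    print(maybe_create_work_order("PUMP-0042", drifting, baseline=1.0))
```

The point of the sketch is the shape of the loop, not the scoring rule: insight (a risk score) flows back into an execution object (a work order) so the prediction drives scheduling rather than sitting in a dashboard.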
So the network part of that for us, from an Infor perspective was, yes it was about the commerce network, which was GT Nexus, and the belief that almost every process you have inside an enterprise at some point has to leave the enterprise. You have to work with someone else, a supplier or a customer. But ERP's in general, were designed to automate everything inside the four walls. So our belief was that you should extend that and encompass an entire network. And that's obviously what the GT Nexus guys spent 18 years building was this idea of this logistics network and this network where you can actually conduct trade and commerce. They do over 500 billion dollars a year on that network. And we believe, and we've announced this as network CloudSuites, that those two worlds will blur. Right? That ultimately, CloudSuites will run completely nakedly on the network. And that gives you some very, very interesting information models and the parallel we always give is like a Linkedin or a Facebook. On Linkedin, there's one version of the application. Right? There's one information model where everyone's contact information is. Everyone's details about who they are is stored. It's not stored in all these disparate systems that need to be synchronized constantly. Right? It's all in one. And that's the power of GT Nexus and the commerce network, is that we have this one information model for the entire supply chain. And now, when you move the CloudSuite on top of that, it's like this one plus one is five. It's a very, very powerful idea. >> Alright Jim, chime in here, because you and I both excited about the burst when we dug into that a little bit. >> Yes. >> Quite impressed actually. Not lightweight vis, you know? It's not all sort of BI. >> Well the next generation of analytics, decision support analytics that infuse and inform and optimize transactions. In a distributed value chain. 
And so for the burst is a fairly strong team, you've got Brad Peters who was on the keynote yesterday, and of course did the pre-briefing for the analyst community the day before. I think it's really exciting, the Coleman strategy is really an ongoing initiative of course. First of all, on the competitive front, all of your top competitors in this very, I call it a war of attrition in ERP. SAP, Oracle and Microsoft have all made major investments on going in AI across their portfolios. With a specific focus on informing and infusing their respective ERP offerings. But what I conceived from what Infor's announced with the Coleman strategy, is that yours is far more comprehensive in terms of taking it across your entire portfolio, in a fairly accelerated fashion. I mean, you've already begun to incorporate, Coleman's already embedded in several of your vertical applications. First question I have for you Duncan, as I was looking through all the discussions around Coleman, when will this process be complete in terms of, "Colemanizing", is my term? "Colemanizing" the entire CloudSuite and of course network CloudSuite portfolio. That's a huge portfolio. And it's like you got fresh funding, a lot of it, from Koch industries. To what extent can, at what point in the next year or two, can most Infor customers have the confidence that their cloud applications are "Colemanized"? And then when will, if ever, Coleman AI technology be made available to those customers who are using your premises based software packages? >> So yeah, we could spend a long time talking about this. The thing about Coleman and RAI and machine learning capabilities is that we've been at work on it for a while. And you know we created the dynamic science labs. Our team of 65 Ph.D.'s based up in M.I.T. got over three and a half four years ago. And our differentiation versus all the other guys you mentioned is that, two things, one, we bring a very application-centric view of it. 
We're not trying to build a horizontal, generic, machine learning platform. In the same way that we- >> Yeah you're not IBM with Watson, all that stuff. >> Yeah, no, no. Or even Oracle. >> Jim: Understood. >> Or Microsoft. >> Jim: Nobody expects you to be. >> No, you know, and we've always been the guys that have worked with the Open Source community. Even when you look at, like, we're the first guys to provide a completely open source stack underneath our technology with Postgres. We don't have a dog in the hunt like most of the other guys do. Right? So we tap into the innovation that happens in the Open Source community. And when you look at all the real innovation that's happening in machine learning, it's happening in the Open Source community. >> Jim: Yes. >> It's not happening with the old legacy, you know, ERP guys. >> Jim: TensorFlow and Spark and all that stuff. >> Yeah, Google, Apple, the GAFA. >> Yeah. >> Right? Google, Apple, Facebook, those are the guys that are doing it. And the academic community is light years ahead on top of that of what these other guys will do. So that's what we tap into right? >> Are you tapping into partners like AWS? 'Cause they've obviously, >> Duncan: Absolutely >> got a huge portfolio of AI. >> Yeah, so we. >> Give us a sense whether you're going to be licensing or co-developing Coleman technologies with them going forward. >> Yeah so obviously we have NDA's with them, we're deeply inside their development organization in terms of working on things. You know, our science is obviously presented to them around ideas on where we think they need to go. I mean, we're a customer of their AI framework for machine learning and we're testing it at scale with specific use cases in industries, right? So we can give them a lot of insights around where it needs to go and problems we're trying to solve.
But we do that across a number of different organizations and we've got lots and lots of academic collaborations that happen around all of the best universities that are pushing on this. We've even received funding from DARPA in certain cases around things that we're trying to solve for. You know quietly we've made some machine-learning acquisitions over the last five, six years. That have obviously brought this capability into it. But the point is we're going to leverage the innovation that happens around these frameworks. And then our job, understanding the industries we're in and that we're an applications company, is to bring it to life in these applications in a seamless way, that solves a very specific problem in an industry, in a powerful and unique way. You know on stage I talked about this idea of bringing this AI first mindset to how we go about doing it. >> So it's important, if I can interject. This is very important. This is Infor IP, the serious R&D that's gone into this. It's innovation. 'Cause you know what your competitors are going to say. They're going to deposition it and say, oh, it's Alexa on steroids. But it's not. It's substantial IP and really leveraging a lot of the open source technologies that are out there. >> Yeah. So you know, I talked about there were four components to Coleman, right? And the first part of it was, we can leverage machine-learning services to make the CloudSuites conversational. So they can chat, and talk, and see, and hear, and all of that. And yeah, some of those are going to use the technology that sits behind Alexa. And it's available in AWS as Lex, as you guys know. But that's only really a small part of what we're doing. There are some places where we are looking at using computer vision. For example, automated inspection of car rental returns is one area. We're using it for a quality management pilot at a company that normally has humans inspect something on a production line.
That kind of computer vision, that's not Alexa, right? It's, you know, I gave the example of image recognition. Some of it can leverage AWS's framework there. But again, we're always going to look for the best platform and framework out there to solve the specific problem that we're trying to solve. But we don't do it just for the sake of it. We do it with a focus to begin with, with an industry. Like, where's a really big problem we can solve? Or where is there a process that happens inside an application today that if you brought an AI first mindset to it, it's revolutionary. And we use this phrase, "the AI is the UI". And we've got some pretty good analogies there that can help bring it to life. >> And I like your approach for presenting your AI strategy, in terms of the value it delivers your customers, to business. You know, there's this specter out there in the culture that AI's going to automate everybody out of a job. Automation's very much a big part of your strategy but you expressed it well. Automating out those repetitive functions so that human beings, you can augment the productivity of human beings, free them up for more value-added activities and then augment those capabilities through conversational chatbots. And so forth, and so on. Provide, you know, in-application, in-process, in-context decision support with recommendations and all that. I think that's the exact right way to pitch it. One of the things that we focus on and work on in terms of application development, disciplines that are totally fundamental to this new paradigm. Recommendation engines, recommender systems, inline in all applications. It's happening, I mean, Coleman, that really in many ways, Coleman will be the silent, well not so silent, but it'll be the recommendation engine embedded inside all of your offerings at some point. At least in terms of the strategy you laid out. >> Yeah, no, absolutely right I mean.
It's not just about, we all get hung up on machine-learning and deep learning 'cause it's the sexy part of AI, right? But there's a lot more. I mean, AI, all the way back, you can go all the way back to Socrates and the father of logic right? I mean, some of the things you can do is just based on very complex rules and logic. And what used to be called process automation right? And then it extends all the way to deep learning and neural networks and so on. So one of the things that Coleman also does, is it unifies a lot of this technology. Things that you would normally do for prediction or optimization, and optimization normally is the province of operations research guys right? Which again it's a completely different field. So it unifies all of that into one consistent platform that has all of that capability into it. And then it exposes it in a consistent way through our API architecture. So same thing with bots. People always think chat bots are separate. Well that too is unified inside Coleman. So it's a cohesive platform but again, industry focused. >> What's your point of view on developers? And how do you approach the development community and what's your strategy there? >> Yeah, I mean, it's critical right? So we've always, I mean, hired an incredible number of application engineers every year. I think the first 12 months we were here, we hired 1800 right? 'Cause you know, that's kind of what we do. So we believe hugely in smarts. And it sounds kind of obvious, but experience can be learned, smarts is portable. And we have a lot of programs in place with universities. We call it the Education Alliance Program. And I think we have up to 32 different universities around the world where we're actually influencing curriculum, and actually bringing students right out of there. Using internships during the year and then actually bringing them into our development organization. So we've got a whole pipeline there. 
I mean that's critical that we have access to those. >> And what about outside your four walls, or virtual walls as it were? Is there a strategy to specifically pursue external developers and open up a PaaS layer? >> Yeah we do. >> Or provide an SDK for Coleman for example, for developers. >> Yeah so we did, as part of our Infor Operating Service update. Which is, you know, the name for our unified technology platform. We did announce Mongoose platform as a service. Our Mongoose PaaS. >> Host: Oh Mongoose, sure. >> So that now is being delivered as a platform as a service for application development. And it's used in two ways. It's used for us to build new applications. It's a very mobile-first type development framework too. And obviously Hook and Loop had a huge influence in how that ships. The neat thing about it is that it ships with plumbing into ION API, plumbing into our security layer. So customers will use it because it leverages our security model. It's easy to access everything else. But it's also used by our Hook and Loop digital team. So those guys are going off and they're building completely differentiated curated apps for customers. And again, they're using Mongoose. So I think between ION APIs and between all the things you get in the Infor Operating Service, and Mongoose, we've got a pretty good story around extensibility and application development. As it relates to an SDK for Coleman, we're just working through that now. Again, our number one focus is to build those things into the applications. It's a feature. The way most companies have approached optimization and machine learning historically is it's a discrete app that you have to license. And it's off to the side and you integrate it in. We don't think that's the right way of doing it. Machine-learning and artificial intelligence is a platform. It's an enabler. And it fuses and changes every part of the CloudSuite.
And we've got a great example on how you can rethink demand forecasting, demand planning. Every, regardless of the industry we serve, everyone has to predict demand right? It's the basis for almost every other decision that happens in the enterprise. And, how much to make, how many nurses to put on staff, all of that, every industry, that prediction of demand. And the thinking there really hasn't changed in 20, 30 years. It really hasn't. And some of that's just because of the constraints with technology. Storage, compute, all of that. Well with the access we have to elastic super-computing now and the advancements in sort of machine-learning and AI, you can radically rethink all of that, and take what we call an "AI First" approach, which is what we've done with building our brand new demand prediction platform. So the example we gave is, you think about when early music players came along on the internet right? The focus was all around building a gorgeous experience for how to build a playlist. It was drag and drop, I could do it on a phone, I could share it with people and it showed pictures of the album art. But it was all around the usability of making that playlist better. Then guys like Spotify and Pandora came around and took an AI First approach to it. And the machine builds your playlist. There is no UI. The AI is the UI. And it can recommend music I never knew I would've liked. And the way it does that comes back to the data. Which is why I'm going to circle back to Infor here in a second. It breaks a song down into hundreds if not thousands of attributes about that song. Sometimes it's done by a human, sometimes it's even done by machine listening algorithms. Then you have something that crawls the web, finds music reviews online, and further augments it with more and more attributes. Then you layer on top of that user listening activity, thumbs up, thumbs down, play, pause, skip, share, purchase.
And you find, at that attribute level, the very lowest level, the true demand drivers of a song. And that's what's powering it right? Just like you see with Netflix for movies and so on. Imagine bringing that same thought process into how you predict demand for items that you've never promoted before. Never changed the price before. Never put in this store before. Never seen before. >> The cold-start problem in building recommendation engines. >> Exactly right, so, that's what we mean by AI First. It's not about just taking traditional demand planning approaches and making it look sexier and putting it on an iPad right? Rethink it. >> Well it's been awesome to watch. We are out of time. >> Yeah, we're out of time. >> Been awesome to watch the evolution, >> We could go on and on with this yeah. >> of Infor as it's really becoming a data company. And we love having executives like you on. >> Yeah >> You know, super articulate. You got technical chops. Congratulations on the last six years. >> Thanks. >> The sort of quasi-exit you guys had. >> Great show, amazing turnout. >> And look forward to watching the next six to 10. So thanks very much for coming out. >> Brilliant, thank you guys. Alright thank you. >> Alright keep it right there everybody, we'll be back with our next guest, this is Inforum 2017 and this is theCUBE. We'll be right back. (digital music)
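For readers who want the mechanics behind the interview's playlist analogy, the attribute-level approach to cold start — decomposing every item into low-level attributes and learning a user's taste over those attributes rather than over the items themselves — can be sketched as a minimal content-based recommender. Everything below (the attribute names, the toy catalogue, the +1/-1 feedback encoding) is illustrative only, not Infor's actual Coleman implementation:

```python
from collections import defaultdict

def preference_vector(feedback, item_attrs):
    """Aggregate per-item feedback (+1 like, -1 dislike/skip) into weights
    over the low-level attributes of the items the user interacted with."""
    prefs = defaultdict(float)
    for item, signal in feedback.items():
        for attr, value in item_attrs[item].items():
            prefs[attr] += signal * value
    return dict(prefs)

def score(prefs, attrs):
    """Dot product of the user's attribute preferences with a candidate
    item's attributes -- needs no interaction history for the item."""
    return sum(prefs.get(a, 0.0) * v for a, v in attrs.items())

# Toy catalogue: songs decomposed into hand-labelled attributes.
catalog = {
    "song_a": {"acoustic": 0.9, "tempo_fast": 0.1},
    "song_b": {"acoustic": 0.8, "tempo_fast": 0.2},
    "song_c": {"acoustic": 0.1, "tempo_fast": 0.9},
}
# Listening activity: plays/thumbs-up as +1, skips/thumbs-down as -1.
feedback = {"song_a": +1, "song_b": +1, "song_c": -1}

prefs = preference_vector(feedback, catalog)

# Brand-new items with zero plays can still be ranked by their attributes.
new_items = {
    "new_acoustic": {"acoustic": 0.85, "tempo_fast": 0.15},
    "new_dance":    {"acoustic": 0.05, "tempo_fast": 0.95},
}
ranked = sorted(new_items, key=lambda i: score(prefs, new_items[i]), reverse=True)
print(ranked[0])
```

Because scoring depends only on an item's attributes, a song (or SKU) that has never been played, promoted, or stocked before can still be ranked — which is exactly the cold-start case the interview calls out.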

Published Date : Jul 12 2017

