Priya Rajagopal | Supercloud22


 

(upbeat music)

>> Okay, we're now going to try and stretch our minds a little bit and stretch Supercloud to the edge. Supercloud, as we've been discussing today and reporting through various breaking analyses, is a term we use to describe a continuous experience across clouds, or even on-prem, that adds new value on top of hyperscale infrastructure. Priya Rajagopal is the director of product management at Couchbase. She's a developer, a software architect, and a co-creator on a number of patents, as well as being an expert on edge, IoT, and mobile computing technologies. And we're going to talk about edge requirements. Priya, you've been around software engineering and mobile and edge technologies your entire career, and now you're responsible for bringing enterprise-class database technology, with synchronization, to edge and IoT environments. So, when you think about the edge, the near edge, the far edge, what are the fundamental assumptions that you have to make with regards to things like connectivity, bandwidth, security, and any other technical considerations when you think about software architecture for these environments?

>> Sure, sure. First off, Dave, thanks for having me here. It's really exciting to be here again, my second time. And thank you for that kind introduction. So, quickly to get back to your question: when it comes to architecting for the edge, our principle is prepare for the worst and hope for the best, because when it comes to edge computing, it's the edge cases that come back to bite you. You mentioned connectivity, bandwidth, security; I have a few more. Starting with connectivity, you may be operating in environments with low network connectivity: think offshore oil rigs, cruise ships, or even retail settings. You want business continuity; most of the time you've got an internet connection, but when there is disruption, you lose business continuity. Then, when it comes to bandwidth, the approach we take is that bandwidth is always limited, or at a premium. Data plans can go up through the roof depending on the volume of data; think medical clinics in rural areas. When it comes to security, the edge poses unique challenges, because you're moving away from this walled-garden, central, cloud-based environment, and now everything is accessible over the internet. And the internet really is inherently untrustworthy. Every bit of data that is written or read by an application needs to be authenticated and authorized. The entire path needs to be secured end-to-end; it needs to be encrypted. That's confidentiality. Also the persistence of data itself: it needs to be encrypted on disk. Now, one of the advantages of edge computing, or distributing data, is that an impacted edge environment can be isolated without affecting the other edge locations. Looking at the classic retail architecture: if you've got a retail store where there's a security breach, you need a provision for isolating that store so that you don't bring down services for the other stores. When it comes to edge computing, you have to think about those aspects of security. Any of these locations could be breached, and if one of them is breached, how do you contain that? So, that answers the three key topics you brought up, but there are other considerations. One is data governance; that's a huge challenge. Because we are a database company at Couchbase, we think about data governance, compliance, and privacy.
All of that is paramount to our customers. And it's no longer about enforcing policies in a central location; you have to do it in a distributed fashion, because one of the benefits of edge computing, as you probably very well know, is what it brings for data privacy and governance policies. You can enforce them at a granular scale, because data never has to leave the edge. But again, as with security, there needs to be a way to control this data at the edge; you have to govern the data remotely while it is at the edge. Some of the other challenges when thinking about the edge are, of course, volume and scale: think IoT and mobile devices, classic far-edge scenarios. And I think the other criterion to keep in mind when architecting a platform for this kind of computing paradigm is the heterogeneity of the edge itself. It's no longer a uniform set of compute and storage resources at your disposal. You've got a variety of IoT devices. You've got mobile devices with different processing capabilities and different storage capabilities. When it comes to edge data centers, it's not uniform in terms of what services are available. Do they have a load balancer? Do they have a firewall? Can I deploy a firewall? These are all key architectural considerations when it comes to actually architecting a solution for the edge.

>> Great. Thank you for that awesome setup. Talking about stretching to the edge: this idea of Supercloud connotes that single logical layer that spans across multiple clouds. It can include on-prem, but a critical criterion is that the developer experience, and, of course, the user experience, is identical, or substantially similar; let's say identical, irrespective of physical location. Priya, is that vision technically achievable today in the world of databases? And if so, can you describe the architectural elements that make it possible to perform well, with low latency, and with the security and other criteria that you just mentioned? What are the technical enablers? Is it just good software? Is it architecture? Help us understand that.

>> Sure. You brought up two aspects: you mentioned user experience, and then what it takes from a developer standpoint. I'd like to address the two separately. They are very tightly related, but I'd like to address them separately. Focusing on the easier of the two, user experience: what are the factors that impact it? You're talking about reliability of service, always-on, always-available applications. It doesn't matter where the data is coming from: whether it's coming from my device, sourced from an on-prem data center, from the edge of the cloud, or from a central cloud data center, from an end-user perspective, all they care about is that their application is available. The next is, of course, responsiveness. Users are getting increasingly impatient. You want to reduce wait times for service; you want something which is extremely fast. They're looking for immersive applications or immersive experiences: AR, VR, mixed-reality use cases. Then something which is very critical, and which you just touched upon, is this sort of seamless experience, the omnichannel experience we talk about in the context of retail, or what I like to refer to as the "park and pick up" experience.
You start a transaction on one device, you park it, and you pick it up on another device. Or, in the case of retail, you walk into a store and pick it up from there. So, there's park and pick up; seamless mobility of data is extremely critical. In the context of a database, when we talk about responsiveness, the two key KPIs are latency and bandwidth. Latency is really the round-trip time from the moment a request for data is made until the response comes back. The factors that impact latency are, of course, the type of the network itself, but also the proximity of the data source to the point of consumption: the more hops the data packets have to take to get from the source to the destination, the more latency you're going to incur. When it comes to bandwidth, we are talking about the capacity of the network: how much data can be pushed through the pipe? And with edge computing you have a large number of clients; I talked about scale and the volume of devices. When all of them are concurrently connected, you're going to have network congestion, which impacts bandwidth, which, in turn, impacts performance. So, when it comes to architecting a solution for that: if you remove the reliance on the network to the extent possible, you get the highest guarantees of responsiveness, availability, and reliability, because your application is always going to be on. To do that, having the database and the data-processing components co-located with the application that needs them gives you the best experience. But a lot of times it's not possible to embed that data within your application itself, and that's where you have options: an on-prem data center, the edge of the cloud, and so on. The closer you bring the data, the better the experience. Now, that's all great, but then, to achieve a vision of Supercloud, where we said, "Hey, from a developer standpoint, I have one API to set up this connection to a server, but behind the scenes my data could be resident anywhere," how do you achieve something like that? A critical aspect of the solution is data synchronization. Data storage is a critical aspect of a database; it's really where the data is persisted, along with data processing and the APIs to access and query the data. But another really critical aspect of distributing a database is the data synchronization technology. Once all the islands of data, whether on the device, in an on-prem data center, at the edge of the cloud, or in a regional data center, are kept in sync, then it's a question of: when connectivity to one of those data centers goes down, there needs to be a seamless switch to another data center. And today, at least when it comes to Couchbase, a lot of our customers employ global load balancers which can detect that automatically. So, from the perspective of an application, it's just one URL endpoint, but when one of those services or data centers goes down, we have active failover and standby, and the load balancer automatically redirects all the traffic to the backup data center.
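That single-endpoint failover pattern can be sketched in a few lines. Here is a minimal Python illustration, assuming a simple HTTP health check; the URLs and function names are hypothetical, and this is not Couchbase or load-balancer code:

```python
import urllib.request

# Ordered list: primary data center first, then standby (hypothetical URLs).
ENDPOINTS = [
    "https://sync.dc-primary.example.com/health",
    "https://sync.dc-standby.example.com/health",
]

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint() -> str:
    """Route to the first healthy data center, which is the job a global
    load balancer does behind the single URL the application sees."""
    for url in ENDPOINTS:
        if healthy(url):
            return url
    raise RuntimeError("no healthy data center available")
```

In practice this routing lives in the global load balancer itself, so the application only ever sees the one endpoint.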
And of course, for that to happen, those two data centers need to be in sync. That's critical. Did that answer your question?

>> Yeah, let me jump in here. Thank you again for that. I want to unpack some of those, and I want to use the example of Couchbase Lite, which, as the name implies, is a mobile version of Couchbase. I'm interested in a number of things that you said. You talked about how, in some cases, you want to get data from the most proximate location. Is there some kind of metadata intelligence that you have access to? I'm interested in how you do the synchronization. How do you deal with conflict resolution, and recovery if something goes wrong? You're talking about distributed database challenges. How do you approach all that?

>> Wow, great question, and probably one that I could occupy the entire session with, but I'll try to keep it brief and answer most of the points that you touched upon. So, we talked about distributed databases and data sync. But here's the other challenge: a lot of these distributed locations can actually be disconnected, so we've just exacerbated this whole notion of data sync. And that's what we call offline first, or what is typically referred to as offline-first sync: the ability for an application to run in a completely disconnected mode, but then, when there is network connectivity, the data is synced back to the backend data servers. In order for this to happen, you need a sync protocol (indistinct). Since you asked in the context of Couchbase: our sync protocol is a WebSockets-based, extremely lightweight data synchronization protocol that's resilient to network disruption. What this means is I could have hundreds of thousands of clients connected to a data center, and they could be at various stages of disconnect. You have a field application, and you're veering in and out of pockets of network connectivity, so the network is disrupted and then connectivity is restored. Our sync protocol has a built-in checkpoint mechanism that allows the two replicating points to do a handshake on the previous sync point, and only data from that previous sync point onward is sent to that specific client. You mentioned Couchbase Lite, which is, of course, our embedded database for mobile, desktop, and any embedded platform, but the component that handles the data synchronization is our Sync Gateway. Sync Gateway sits with our Couchbase Server and is responsible for securely syncing the data and implementing this protocol with Couchbase Lite.
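To make the checkpoint idea concrete, here is a toy Python sketch of that handshake: the server keeps an ordered change log, each client remembers the last sequence it has replicated, and a sync sends only the delta. The structures and names are hypothetical, not the actual Sync Gateway replication protocol:

```python
# Server-side ordered change log: (sequence number, document id).
changes_log = []

def append_change(doc_id: str) -> int:
    """Record a document mutation and return its sequence number."""
    seq = len(changes_log) + 1
    changes_log.append((seq, doc_id))
    return seq

# Per-client checkpoints: the last sequence each client has replicated.
client_checkpoints = {}

def sync(client_id: str) -> list:
    """Handshake: look up the client's previous checkpoint, send only the
    changes made since then, then advance the checkpoint."""
    since = client_checkpoints.get(client_id, 0)
    delta = [change for change in changes_log if change[0] > since]
    if delta:
        client_checkpoints[client_id] = delta[-1][0]
    return delta

# A client that drops offline and later reconnects receives only the delta:
append_change("doc::1")
append_change("doc::2")
print(sync("field-app-7"))   # first sync: both changes
append_change("doc::3")
print(sync("field-app-7"))   # after reconnect: only doc::3
```

This is why a client veering in and out of connectivity never re-downloads everything; it resumes from the last agreed sync point.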
You talked about conflict resolution, and it's great that you mentioned that, because when it comes to data sync, a lot of times folks think, "Oh well, how hard can that be? You request some data, you pull down the data, and that's great." And that's the happy path: when all of the clients are connected and there is reliable network connectivity, that's great. But we are, of course, talking about unreliable network connectivity and resiliency to network disruptions, and also the fact that you have lots of concurrently connected clients, all of them potentially updating the same piece of data. That's when you have a conflict: when two or more clients, or writers, are updating the same piece of data. The writes could be coming in from the clients, or from the backend systems; either way, you have multiple writers to the same piece of data, and that's when you have conflicts. Now, to explain how conflict resolution is handled within our data sync protocol in Couchbase, it helps to understand what kind of database we are and how data itself is stored within our database. Couchbase Lite is a NoSQL JSON document store, which means everything is stored as JSON documents. Every time there is a write, an update to a document, you get a new revision of that document. You start with an initial version when the document is created, and as you make more writes, or mutations, to that document, you build out what's called a revision tree. So, when does a conflict happen? A conflict happens when there is a branch in the tree: you've got two writers writing to the same revision, you get a branch, and that is a conflict. We have a way of detecting those conflicts automatically; that's conflict detection. So, now we know there's a conflict, but we have to resolve it. Within Couchbase, you have two options. You don't have to do anything about it: the system has automatic conflict-resolution heuristics built in. It's going to pick a winning revision; we use a bunch of criteria, and we pick a winner. So, if two writers are updating the same revision of the document, we pick a winner. In our experience, that works for about 80% of the use cases. But for the remaining 20%, applications would like more control over how the winner of the conflict is picked, and for that, applications can implement a custom conflict resolver. We'll automatically detect the conflicting revisions and send them over to the application via a callback, and the application has access to the entire document body of the two revisions and can use whatever criteria it needs to merge them.

>> So, that's policy-based in that example?

>> Yes.

>> Yeah, yeah, okay.

>> So you can have user policy-based resolution, or you can have the automatic heuristics.
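The conflict flow described here, detecting a branch in the revision tree and then either applying a heuristic or handing both revisions to an application callback, can be sketched as follows. This is a hedged Python illustration of the concepts only; the names and structures are hypothetical and do not reflect Couchbase Lite's actual API:

```python
def detect_conflicts(children_by_revision: dict, rev_id: str) -> list:
    """A conflict is a branch in the revision tree: two or more child
    revisions hanging off the same parent revision."""
    children = children_by_revision.get(rev_id, [])
    return children if len(children) > 1 else []

def default_resolver(rev_a: dict, rev_b: dict) -> dict:
    """Automatic heuristic: prefer the longer revision history, with a
    deterministic tie-break so every replica picks the same winner."""
    return max(rev_a, rev_b, key=lambda r: (r["generation"], r["rev_id"]))

def resolve(rev_a: dict, rev_b: dict, custom_resolver=None) -> dict:
    """Use the application callback when one is registered (the ~20% case);
    otherwise fall back to the automatic heuristic (the ~80% case)."""
    return (custom_resolver or default_resolver)(rev_a, rev_b)

# Example custom resolver: merge both bodies, letting rev_b win on clashes.
def merge_resolver(rev_a: dict, rev_b: dict) -> dict:
    merged = dict(rev_a["body"])
    merged.update(rev_b["body"])
    return {"generation": max(rev_a["generation"], rev_b["generation"]) + 1,
            "rev_id": "merged",
            "body": merged}

local = {"generation": 3, "rev_id": "3-abc", "body": {"qty": 2}}
remote = {"generation": 3, "rev_id": "3-def", "body": {"price": 9.99}}
print(resolve(local, remote))                  # heuristic picks a winner
print(resolve(local, remote, merge_resolver))  # merged body keeps both edits
```

The callback shape mirrors what Priya describes: the application sees both document bodies and returns the resolved result.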
>> Okay, I've got to wrap because we're out of time, but I want to run this scenario by you. One of the risks to the Supercloud nirvana that we always talk about is this notion of a new architecture emerging at the edge, the far edge really, because they're highly distributed environments: low power, tons of data. And this idea of AI inferencing at the edge; a lot of the AI today is modeling done in the cloud. You think about ARM processors in these new low-cost devices, massive processing power eventually overwhelming the economics, and then that seeping back into the enterprise and disrupting it. Now, you still have the problem of federated governance and security, and that's probably going to be more centralized-slash-federated. But, in one minute, do you see real-time AI inferencing taking off at the edge? Where is that on the S-curve?

>> Oh, absolutely. When it comes to IoT applications, it's all about massive volumes of data generated at the edge. You talked about how the economics doesn't add up: the data needs to be actioned at some point, and if you have to transfer all of it over the internet for analysis, you're going to lose that real-time responsiveness and availability. The edge is the perfect location. And a lot of this data is temporal in nature, so you don't want it sent back to the cloud for long-term persistence; instead you want it actioned as close as possible to the source itself. There are, of course, the really small microcontrollers and so on; even there, you can have some local processing done, like tiny ML models. And mobile devices, as you're very well aware, are extremely capable; they have neural network processors, so they can do a lot of processing locally. But when you want an aggregated view within the edge, you process that data in an IoT gateway and only send the aggregated data back to the cloud for long-term analytics and persistence.

>> Yeah, this is something we're watching, and I think it could be highly disruptive, and it's hard to predict. Priya, I've got to go. Thanks so much for coming on theCube. Really appreciate your time.

>> Yeah, thank you.

>> All right, you're watching "Supercloud 22." We'll be right back after this short break. (upbeat music)

Published Date : Jul 25 2022


Neil Macdonald, HPE | HPE Discover 2022


 

>> theCube presents HPE Discover 2022. Brought to you by HPE.

>> Good morning. Live from the Venetian Expo Centre, Lisa Martin and Dave Vellante, day two of theCube's coverage of HPE Discover 22. We had some great conversations yesterday; today, a full day of content coming your way. We've got one of our alumni back with us. Neil Macdonald joins us, the executive vice president and general manager of Compute at HPE. Neil, great to have you back on theCube.

>> It's great to be back. And how cool is it to be able to do this face to face again, instead of on Zoom?

>> So great. The keynote yesterday was absolutely packed; so refreshing to see that many people eager to hear what HPE has been doing. It's been three years since we've all gotten together in person.

>> It is, and we've been busy. We've been busy. We got to share some great news yesterday about some of the work that we're doing with the HPE Green Lake cloud platform, really bringing together all the capabilities across the company in a very unified, cohesive way to enable our customers to embrace that as-a-service experience. We made a commitment three years ago; Antonio said we were going to deliver everything we do as a company as a service through Green Lake, and we've done it. It's fantastic to see the momentum that that's building, and how it's breaking down the silos between different types of infrastructure and offers to really create integrated solutions for our customers. So that's been a lot of fun.

>> Give us the scope of your role, your areas of responsibility. And then I'd love to hear some feedback; you've been a couple of days here around customers. What's some of the feedback? Help us understand that.

>> So at HPE, I lead the Compute business, which is our largest business. That includes our hardware and software and services in the compute space, both what flows through the Green Lake model and what flows through a traditional purchase model. That's about a $13 billion business for the company and the core of so much of what we do, and it's a real honour to be leading a business with such a legacy, a franchise with 30 years of innovation for our customers. And it's great to be able to start to share some of the next chapters in that with our customers this week.

>> Well, it's almost half the business of HPE, and as we've talked about, it's an awesome time to be in the compute business. What are you seeing in terms of the trends? Obviously you're all in on as a service, but some customers say, "I've got a lot of capital. I'm fine with Capex." What are you hearing from customers in that regard? And presumably you're happy to sell to them in a Capex model?

>> Absolutely. And in the current environment in particular, with some of the economic headwinds that we're starting to stare down, it's really important for organisations to continue to transform digitally, but to match their investments with the revenues as they're building new services and new capabilities. For some organisations, the challenge of investing all the Capex up front is a big lift, and there's quite a delay before they can really monetise all of that. So the power of HPE Green Lake is enabling them to match their investment in the infrastructure, on a pay-as-you-go basis, with the actual revenue they're going to generate from their new capability. For lots of people that works. But for many other customers, it's much more palatable to continue with a Capex purchase, and we're delighted to do that; a lot of my business is still in that mode. What's changing, whether you're in the Green Lake environment or the Capex environment, is the needs: increasingly, the edge has become a bigger and bigger part of all of our worlds. The edge is where we all live and work, and we've all seen over the last couple of years enormous change in the work experience and in the shape of businesses, and that creates some challenges for infrastructure. So one of the things that we've announced, and shared some more details on this week, is HPE Green Lake for Compute Ops Management, which is a location-agnostic, cloud-based management offering that enables you to automate and lifecycle-manage your physical compute infrastructure wherever it lies. That might be in a distributed environment, in colo locations, or out at the edge, where so much more data is now being gathered and has to be computed on. So we're really excited about that. And the great thing is, because it's fully integrated with the HPE Green Lake cloud platform, it sits alongside the storage, alongside the connectivity, alongside all the other capabilities, and we can bring those together in a very cohesive infrastructure view for our customers and then build workloads and services on top. That's really exciting.

>> How have your customer conversations evolved, especially over the last couple of years as the edge has exploded and we've been living in such uncertain times? Are you seeing a change in the stakeholders, rising up the C-suite stack, in terms of "how do we really fine-tune this?" Because we've got to be competitive; we've got to be a data company.

>> Well, that's so true, because everybody sees data as a currency and is desperately innovating and modernising their business model, and with it the underlying infrastructure and how they think about development. Nowhere is that truer than in enterprises that are really becoming digital-first organisations. More and more companies are doing their own in-house, full-stack, cloud-native development and pivoting hard from a more traditional view of in-house enterprise IT. In that regard, they start to look a lot like a SaaS company or a service provider in terms of the needs of the infrastructure: you want linear performance scaling, and you want to be very sensitive not just to the cost as you scale it out, but also to the environmental cost and the power efficiency. And so yesterday we were really thrilled to announce the HPE ProLiant RL300 Gen11, which is the first of our Gen11 platforms, in partnership with Ampere; it's the first of several things that we're going to go do together. We're looking forward to building out the rest of our Gen11 portfolio broadly, with all of our industry partners, in the coming quarters. But we're thrilled about the feedback that we're starting to get from some of our customers about the gains in power efficiency they're getting from this new server line that we've developed with Ampere.

>> You know, this is an area that I'm very interested in; I write about this a lot. So tell us the critical aspects of Gen11 and where Ampere fits. Is it being used primarily for offloads, or is it a core compute? Share with us.

>> If you look at the opportunity here, it's really as a core compute tool for organisations that are doing that in-house, full-stack, cloud-native development. In that environment, being able to do it with great power efficiency at a great cost point is the great combination. The maturity of the ecosystem is really, really improving, to the point where it's much, much more accessible for those workloads. And if you consider how the infrastructure evolves underneath it, the gains that you get from power efficiency multiply. It's a TCO benefit; it's obviously an environmental benefit, and we all have much, much more to do as an industry on that journey, but every little helps, and we're really excited about being able to bring that to market. The other thing that we've done is recognising the value that we bring in the ProLiant experience: everything with our integrated lights-out management, all of the security, the hardware root of trust, the secure boot chains, all of the ProLiant family values we brought to that platform, just as we do with our others. But we've also recognised that for some of our service provider customers, there's a lot of interest in leveraging OpenBMC, being able to integrate the management plane and control that in house, and tying it to whatever orchestration is being done in the service provider. So we have full support for OpenBMC out of the box, out of the gate, with Gen11. That's one of the ways that we're evolving our offering to meet our customers where they are, including not just the SaaS and service providers, but the enterprises who are starting to adopt more and more of those practises as they build out digital first.

>> Tell us more about the architecture, if you would, Neil. Where do Ampere and that partnership add value that's incremental to what you might think of as a traditional server architecture? How is that evolving?

>> Well, it's another alternative for certain workloads in that full-stack, in-house, cloud-native development model. It's another choice, another option, and something we're very excited about.

>> It's the right horse for the course, the course here being internal development, because it's just more efficient, lower power, more sustainable. All those things.

>> Exactly. And the wonderful thing for us at this juncture in the market is that there is so much architectural innovation. There are so many innovators out there in the industry creating different optimisations in technology, whether in silicon or other aspects of the system. That gives us a much broader palette to paint from as we meet our customers' needs; as their businesses evolve and the requirements evolve, we can be much more creative as we bring this all together. It's a real thrill to be able to bring some of these technologies into the HPE ProLiant space, because we've always felt that compute matters. We've always known that hardware matters, and we've been leading and innovating and meeting these needs as they've evolved over the decades, and it's really fun to be able to continue to do that. Hardware still matters.

>> It does matter. We know that here on theCube. Talk about the influence of the customer. With so much architectural innovation, there's a lot of choice for customers in every industry. When you're in customer conversations, how are you helping them make decisions? What are the key differentiators that you articulate that are going to really help them achieve the outcomes they have to achieve?

>> Well, it's exactly as you say: it's about the outcome. Too often, the conversation can get down into the lower-level details of componentry and technology, and our philosophy at HPE has always been to focus on what the customer is trying to achieve. How are they trying to serve their customers? What are their needs? Then we can bring an opinionated point of view on the best way to solve that problem, whether that's recommendations on the particular Capex infrastructure and architecture to build or, increasingly, the opportunity to serve that through HPE Green Lake, either as hardware as a service or as HPE Green Lake services further up the stack. Because when you start talking about the outcome you're trying to achieve, you have a much, much better opportunity to focus the technology on serving the business, and not get wrapped up in managing the infrastructure. And that's what we love to do.

>> So give us the telescope vision, or maybe not the telescope but the binocular vision, as to where compute is going. We're clearly seeing more diversity in silicon; it's not just an x86 CPU world anymore. There are all these other supporting components and new workloads coming in. You mentioned edge, a whole new ballgame, AI inferencing, new kinds of workloads and offloads. Where do you see it all going in the next three to five years?

>> I think it's going to be a really, really exciting time, because more and more of our data is getting captured at the edge, and because of the experiences that companies and organisations are trying to deliver, that requires more and more storage and more and more compute at the edge. The edge is not just about connectivity, and again, that's why, with the HPE Green Lake cloud platform, the power of bringing together the connectivity with the compute, with the storage, with the other capabilities in that integrated way gives us the ability to serve that combined need at the edge in a very, very compelling way. It removes a lot of friction and a lot of work for our customers. But as you see that happen, you're going to see more and more combining of functionalities. The silos are going to start to break down between different classes of building block in the data centre, and you've already seen shifts with more and more software-defined, more and more hybrid offerings running across a compute substrate, but perhaps delivering storage services or analytics services or other workloads. You're going to see that continue to evolve. So it's going to be very fun over the next few years to see that diversification, and a much more opinionated set of offers for particular use cases and workloads. And our job and value is going to be simplifying that complexity, because choice is great right up to the point where you're paralysed by too many choices. So the wonderful thing about the work that's been done here is that we're able to bring that opinionated point of view and help guide; and again, it's all about starting with what you're trying to achieve, what outcomes you're trying to deliver. If you start there, we'll have a great time helping our customers find the right path forward.

>> Wow, it sounds like a fun job. Talk to me about maybe one of your favourite examples that you really think articulates the value of that choice and the opportunities that HPE can deliver to customers; maybe a favourite customer example where you think, "We really nailed it here," and they're achieving some incredible outcomes.

>> Well, we're really excited about this week. I was chatting with the CEO of CloudSigma, which is a global IaaS and PaaS provider that has actually been using our new HPE ProLiant RL300 Gen11 server line. Their CEO was reporting to me yesterday that, based on his benchmarking, they're seeing a significant improvement in power efficiency, and that's cool to an engineer. But what's even better is the next thing he said: that's enabling them to deliver better cost to their customers and advance their sustainability goals, which is such a core part of what we as an industry, and we as a society, are going to have to keep making stepwise progress against over the next decade in order to confront the challenges in the environment. So that's really fulfilling: not just seeing the tech, which is always interesting to an engineer, but actually seeing the impact it's having in enabling that outcome for CloudSigma.

>> So many customers, including CloudSigma and customers in every industry; ESG is an incredibly important initiative. And so it's vital for companies that have a core focus on ESG to partner with companies like HPE who will help them facilitate that and actually demonstrate outcomes to their own users.

>> It's such an important journey, and it's going to be a journey of many steps together. But I think it's one of the most critical partnerships; as an industry and as an ecosystem, we still have a lot of work to do, and we have to stay focused on it every day, continuing to move the bar.

>> You know, to your point about ESG, you see these ESG reports now that are unbelievable: the data that is in them, and the responsibility that mid-size and large organisations have to actually publish them and be held accountable. It's actually kind of daunting, but there's a lot of investment going on there.

>> You're absolutely right. The accountability is key, and it's necessary to have an accountability partner and ecosystem that can facilitate that. Exactly.

>> We just published our Living Progress report last week, talking about some of the steps that we're making and the commitments that we've pulled forward in time. And we're looking forward to continuing to work on that with our customers and with the industry, because it's so critical that we make faster progress together on that.

>> Last question. What's your favourite comment that you've heard in the last couple of days, being back in person with about 8,000 customers, partners, and execs?

>> It's not a comment; it's the sparkle in the eyes. It's the energy. It is so great to be back together, face to face. I think we've soldiered through a couple of tough years. We've done a lot of things remotely together, but there's no substitute for being back together, and the energy is just palpable. It's fantastic to be able to share some of what we've been up to in the interim, and to see the excitement as it gets adopted by customers and partners.

>> I agree, the energy has been fantastic. We were talking about that yesterday, and you brought it today, Neil. Thank you so much for joining us. We're excited about Antonio coming up next to unpack all the announcements and give us the perspective from the top of HPE. For Neil and Dave Vellante, I'm Lisa Martin. Join us in just a few minutes as the CEO of HPE, Antonio Neri, joins us next.

Published Date : Jun 29 2022


Kacy Clarke & Elias Algna


 

>> Welcome to theCube's continuing coverage of Splunk .conf21. I'm Lisa Martin. I've got a couple of guests here with me. Next, talking about Splunk, HPE, and Deloitte, please welcome Kacy Clarke, managing director and chief architect at Deloitte, and Elias Algna, master technologist, Office of the North American CTO at HPE. Guys, welcome to the program. Great to have you.

>> Thank you, Lisa. It's great to be here.

>> Thanks, Lisa.

>> Here we still are in this virtual world. The last 18 months have brought so many challenges, some opportunities, some silver linings. But among the big challenges organizations are facing are this rapid shift to remote work and the rapid acceleration in digital transformation, with ransomware up nearly 11x in the first half of this year alone, plus SolarWinds. Talk to me about some of the challenges that organizations are facing and how you're helping them deal with that, Kacy. We'll start with you.

>> Most of our clients, as we moved to virtual, have accelerated their adoption of multiple cloud platforms: moving into AWS, into Azure, into Google. And one of the biggest challenges in this distributed environment is that they still have significant workloads on-prem. Part of the workloads are in Office 365, part of them are in Salesforce, part of them are moving into AWS, or big-data workloads into Google. How do you make this all manageable, from a security point of view, where accelerating threats make it that much worse, but also from an operational point of view? How do I do application performance management when I have workloads in the cloud calling APIs back on-prem, into the mainframe? How do I manage it operationally when I have tons of containers and virtual machines operating out there? So the importance of Splunk, of good log management and observability, along with all the security management and security logs, and being able to monitor your environment in this complex, distributed world, is absolutely critical. And it's just going to get more complex as we get more distributed.

>> Given the complexity, how can companies with these complicated IT landscapes get ahead of some of these issues?

>> One of the things that we really focus on, making sure you're getting ahead of those issues, working with organizations like Splunk and Deloitte, is how we collect all of the data, not just a little bit of it. Splunk and Deloitte are helping us look across all of those places. We want to make sure that we can really ingest everything that's out there and then let tools like Splunk use all of that data. We've found a lot of organizations really struggle with that, and with the retention of that data; it's been a challenge. So those are things that we've worked hard on figuring out with organizations: how to ingest, retain, and modernize how they do those things, all at the same time.

>> I was reading the Splunk State of Security report, in which they surveyed over 500 security leaders, I think across nine global economies, and 78% of security and IT leaders worry that they're going to be hit by something like SolarWinds, that style of attack. Splunk says security is a data problem. But with all this talk about being on the defensive and preventing attacks in this threat landscape, companies also have to plan for growth. They have to plan for agility. How do you both help them accomplish both at the same time? Kacy, we'll start with you.

>> Well, fundamentally, on the security front you start with security by design. You're designing the logging, the monitoring, and the defenses into the systems as they're being designed up front, as opposed to adding them when you get to a UAT or production environment. So security by design, much like DevOps and SecOps, is pushing that attitude toward security back earlier in the process, so that each of the systems we're developing has the defenses that are needed and the logging embedded in it, with standards for logging so that you don't just get a lot of different kinds of data; you get the data you actually need coming into the system. Then you set up the correlation of that data so you can identify threats early, through AI, through predictive analytics; you get to identify things more quickly. It's all about reducing cycle times and getting better information by designing it in from the beginning.

>> Designing it in from the beginning, that shifting left. Elias, what are your thoughts about this: enabling that defense, designing it in up front, and also enabling organizations to have the agility to grow and expand?

>> Yes, I'm reminded of something our friends with the blue oval used to say in manufacturing: quality isn't inspected, it's built in. And to Kacy's point, you have to build it in. We've definitely worked with Deloitte to do that, and we've set up systems so that clients have true agility. We've done things like containerize Splunk with Kubernetes and work with object storage, a lot of the new, modern technologies that maybe organizations aren't quite accustomed to yet or are still getting on board with. And so we wrap those up in our HPE Green Lake managed services, so that we can provide those things to organizations that maybe aren't ready to run them themselves yet. But the threat landscape is such that you have to be able to do these things: if you're not orchestrating these thousands and thousands of containers with something like Kubernetes, it becomes such a manual, labor-intensive process. And that labor-intensive, non-automated process is the thing we're trying to remove.

>> Well, that's an inhibitor to growth, right, number one there. Let's go ahead and dig into the HPE, Deloitte, and Splunk solution. Kacy, I'm going to go back over to you: talk to me about the catalyst for developing the solution, and then we'll dig into what it's delivering.

>> So Deloitte has had long-term partnerships with both HPE and Splunk, and we're very excited about working together with them on this solution. HPE Green Lake, which is hardware by subscription, offers the flexibility and cost-effectiveness to run workloads like Splunk that are constantly changing. You have peaks and valleys depending on how much work you're doing and how many logs are coming in, so being able to expand that environment quickly through the containerized architecture of Splunk, which is what we worked on with the HPE Green Lake team and with Splunk, means we can federate the workloads and everything going on on-prem with workloads that are in the cloud, and do it very flexibly with the HPE on-prem platform as well as Splunk on Google and Azure and Splunk Cloud. And then having one pane of glass that goes across all of it has been very exciting. We're getting lots of interest in the demo of what we've done on the Green Lake platform, and the partnership has been going great.

>> That single pane of glass is so critical. We talked about cloud complexity a few minutes ago: customers are dealing with so many different applications, they're in this hybrid multi-cloud world, and it's probably only going to proliferate. Talk to me about HPE's perspective and how you're going to help reduce the cloud complexity that customers in every industry are facing.

>> Yeah, so within the HPE Green Lake umbrella of our portfolio, we have our Ezmeral Container Platform, for example, and our Green Lake management services. We bring all these things together in a way that really can accelerate applications, that can make the magic that Deloitte does work underneath. When our friends at Deloitte go and build something, someone has to bring that to life and run it for our customers, and that's what HPE Green Lake does. And we do it in a way that fundamentally aligns to the business cycles that go on. We think of cloud as an operating model, not necessarily just a physical destination. So we work on-prem, colo, public, hybrid; Green Lake spans across all of those and brings them together in a way that really helps customers. We've seen so many times that they have these silos and islands of data: you've got data being generated in the cloud, well, you need Splunk in the cloud; you've got data generated in EMEA, well, you've got Splunk in EMEA. Deloitte has really done some great things to help us put that together, and then we underpin it with the Green Lake management services, with our software and our infrastructure, to make it all work.

>> Yeah. Elias, one of the areas you just mentioned is one of the hottest trends we've noticed out there. A lot of clients, with the competition for skilled resources on the engineering and operations side, are looking at managed services as an option to building their own technology, hiring their own team, and running it themselves. The work that we do, both on the security side and on operations, to provide managed services for our clients, in collaboration with companies like HPE, running on the Green Lake platform as well as one cloud, those combined services delivered as a managed service to our clients, is an exciting trend that is increasingly seen as very cost-effective for our clients.

>> Saving cost is key. Kacy, I want to get your perspective on what you think differentiates this solution, this technology alliance. What are the differentiators, from Deloitte's lens?

>> So, bringing the expertise of a company like HPE, the flexibility and expandability of the Green Lake platform, and the containerization that they've done with Ezmeral: it's bringing that cloud-like automation and flexibility to the on-prem and hybrid cloud solution. That's combined with Splunk, which is rapidly expanding not only what it does in the security space, with the constantly changing security landscape out there, but also in observability, application performance management, and AIOps: fully automated and integrated response to the operational events that are out there. So HPE is doing what they do really well and adapting to this new world; Splunk is constantly changing their products to make it easier for us to go after those operational issues; and Deloitte is coming in with both the industry and the technical experience to bring it all together. How do you log the right things? How do you identify the real signal versus the noise when you're collecting massive amounts of log data? How do you make it actionable? How can you automate those actions? By bringing all three of these firms together, we can deliver much better, much more effective solutions to our clients in much shorter time frames.

>> Shorter time frames are key, given that one of the things we've learned in the last 18 months is that real time is really business critical for companies in every industry. Elias, I want to get your perspective from a technology lens: talk to me about the differentiators here, what this solution, this three-way alliance, brings to your customers.

>> Yeah, sure thing. We've done a lot of work with Deloitte, and with Intel also, on performance optimization, which is key for any application, and that gets to what I mentioned earlier about bringing more data in. With some of the work that we've done with Intel, we've been able to accelerate the ingest rate of Splunk by about 17 times, which is pretty incredible. That allows us to do more, or do more with less, and that can help reduce the cost. We've also done a lot of work on the setup side. There are a lot of complexities in running a big enterprise application like Splunk; it does a lot of great things, but with that come some complications, for sure. So a lot of the work that we've done is to help make this production-ready at scale, with disaster tolerance, and bring all of those things together. And that requires a fair amount of work on the back end, to make sure we can do that at scale and run in a way that businesses of significant size can take advantage of, without having to worry about what happens if they lose a data center or a region, and to do those things with absolute assurance.

>> That's critical. Kacy, I have a question for you. How will this solution help facilitate one of the positives we've seen during the last 18 months, and that is the strengthening of the IT-security relationship?

>> I think one of the important things here is the standardization and automation of what we're bringing together, so that security can monitor all the different things being configured, because they can go in and look at the automation that's creating them. We have a very dynamic environment now, with these new cloud-based and virtualized environments; no one is going in and manually configuring anything anymore. It's just not possible, not when you're managing tens of thousands of servers out there. So security works together very closely with operations, collaborating on that automation so that the managed services are configured right from the beginning. As we talked about: security by design, operations by design. It's that early collaboration and that shift left that give us the very close collaboration that results in good telemetry, good visibility, and good reaction times on the other end.

>> That collaboration is a key theme that's emerged, I think, for all of us in every industry in the last 18 months. And I want to put the last question to you: where can customers go to learn more? How do they get started with this solution?

>> A great way to get started is to reach out to our partners like Deloitte; they can help you on that journey. HPE is there too, of course: hpe.com. We have a number of white papers, collateral, presentations, reference architectures, you name it, it's out there. But really, every organization is unique. Every challenge that we come up with requires a little bit of hard thinking, and that's why we have the partnership: to be able to work with customers and collaborate to really identify what their challenges are and how to help them in this very dynamic, and no doubt continuing to be dynamic, market.

>> Thank you both so much for joining me, talking about what Deloitte, Splunk, and HPE are doing and how you're helping customers address cloud complexity from the security lens and the operations lens. We appreciate your time.

>> Thanks, Lisa.

>> Thank you, Lisa.

>> For my guests, I'm Lisa Martin. You're watching theCube's coverage of Splunk .conf21.

Published Date : Oct 18 2021



John Gromala


 

>>Welcome back to HPE Discover 2021, the virtual version. My name is Dave Vellante. You're watching theCUBE's continuous coverage of the event. John Gromala is here. He's the senior director of product management for HPE Green Lake Lighthouse, a new offering from HPE. We're going to talk about that, and we're going to talk about cloud native. Hey John, welcome to theCUBE. Good to see you again.

>>Awesome. Great to be with you again.

>>All right. So what is Green Lake Lighthouse?

>>Yeah, it's very exciting: another new offering and innovation from HPE to support our broader Green Lake strategy and plans. It's really a brand new, purpose-built, cloud-native platform that we've developed and created, one that pulls together all of our infrastructure leadership with our platform software leadership into a single integrated system built to run Green Lake cloud services. So think of it as, you know, fully integrated: deploy it any place you want, on your premises, at a colocation provider, or at the edge, wherever you need. They'll interoperate and work together, sharing data, you know, running apps together. It's a great capability for people to bring the cloud where they want. As we talk about with Green Lake, it's the cloud that comes to you.

>>So should we think of this as a management platform? Is it also sort of a quasi-development platform? Kind of, where does it fit in that spectrum?

>>Well, it's really more of an integrated system, with all of the integrated control planes needed to run it in a distributed fashion. So it's a true distributed cloud, intended to run at any client location that's needed. It connects back to Green Lake Central and our Green Lake cloud operations teams to go ahead and run any cloud services that they want. So you get the benefit of running those workloads wherever you need, but with that, you know, centralized control that people want in terms of how they run their cloud.

>>Okay, so how is it different from, for instance, AWS Outposts, or things like, you know, Azure Stack or Azure Stack Hub?

>>Yeah, very simply: because it's a distributed cloud, it's intended to make it so you can run it wherever you need. You don't need to be tethered to any of the various public clouds out there, so people can now run their systems wherever they want, however they need, without that required tethering that many of those other vendors require. So you can really sort of own your own cloud, or have that cloud come to wherever you need it within your overall IT.

>>Can I tether to a public cloud if I want to?

>>Yes. The cloud services, like many other cloud services, can interconnect together. So no issue if you want to run, or even do failover, between public cloud and on premises; it's all how you want to set it up. But that connection to public cloud, again through Green Lake, is done at that cloud services level, you know, where you would connect one of these Green Lake Lighthouse systems to the public cloud through those services.

>>Okay, so maybe we'll talk a little bit about the use cases in a minute, but how flexible is this? How do I configure Lighthouse? You know, what comes standard? What are my options?

>>Yes, so we've designed it in a very modular fashion, so that people can really configure it to whatever their needs are at any given location.
So there's a basic set of modules that align to a lot of the compute and storage instances that people are familiar with from all of the cloud providers. You simply tell us which workloads you want to be running on it and how much capacity you want, and that will get configured and deployed to that given site. In terms of the different types, we have what we're calling two series, or a set of series, that are available for this to meet different sets of needs: one being more mainstream, for broad use cases where people need virtualized, containerized, or any other type of enterprise workloads, and another more technically focused, with higher performance networking for higher performance deployments. You can choose which of those fits your needs for those given areas.

>>So maybe you could talk a little bit more about the workloads: what specifically is supported, and how do they get deployed?

>>Again, all of it is managed and run through Green Lake Central. That's our one location where people can go to watch these things and manage them. You can run containers as a service or VMs as a service, as needed, on these different platforms. You can actually mix and match those as well: one of these platforms can run multiple of those, and you can vary the mix as your business needs change over time. So think of it as a very flexible way to manage this, which is really what cloud native is all about, having that flexibility to run those workloads wherever and however you need. In addition, we can build more advanced types of solutions on top of those foundational capabilities, with things like HPC as a service and other as-a-service offerings, to better enable clients to deploy any of their given enterprise workloads.

>>John, what about the security model for Lighthouse? That's obviously a big deal; everybody's talking about it these days. You can't open the news without seeing some kind of, you know, hack du jour. How does Lighthouse operate in a secure environment?

>>Well, you know, first of all, there's sort of a new standard that was established within these cloud operating models, and HPE was leading in terms of infrastructure innovation with our Silicon Root of Trust, where we came out with the world's most secure infrastructure a few years ago. And what we're doing now, since this is a full platform and integrated system, is extending that capability beyond just how we create a root of trust in our manufacturing facilities to ensure that it's secure, running it within the infrastructure itself. We'll be extending that vertically, up into the software stacks, into containers and VMs, using that root of trust to make sure everything's secure in that sense, and then eventually up to the workloads themselves. So by being able to go back to that root of trust, it really makes a big difference in how people can run things in an enterprise-secure way. Great innovations continue, and that's one of our big focus areas throughout this year.

>>So where does it fit in the portfolio, John? I mean, how is it a complement to, or how is it different from, you know, the typical HPE systems, the hardware and software that we're used to?

>>You might think of this as sort of a best-of, bringing together all the great innovations of HPE. You know, we've got awesome infrastructure that we've led with for many, many years. We've got more great cloud-native software being developed. We've got great partnerships with a lot of the leading vendors out there.
This allows us to bring all of those things together into an integrated platform that is really intended to run these cloud-native services. So it builds on top of that leadership, and fits, in that sense, within the portfolio. But it's ultimately about how it allows us to run and extend our Green Lake capabilities as we know them, to make them more consumable, if you want to call it that, for a lot of our enterprise clients, in whatever location.

>>So when would I use Lighthouse, and when would I use sort of a traditional HPE system?

>>Again, it's a matter of which level of integration people want. Cloud is really also, in terms of experience, about simplifying what people are purchasing, and making it easier for them to consume, easier for them to roll out a lot of these things. That's when you'd want to purchase a Lighthouse versus our other infrastructure products. We'll always have those leading infrastructure products, where people can put together everything exactly the way that they want and go through the qualification and certification of a lot of those workloads. Or they can go ahead and select Green Lake Lighthouse, where they have a lot of these things available in a catalog. We do validation of the workloads and platform systems, so that it's all sort of ready for people to roll out in a much more secure, tested, and agile fashion.

>>So if I have a cloud-first strategy, but I don't want to put it in the public cloud, and I want that cloud experience, and I want to go fast, it sounds like I'm the perfect customer for Lighthouse.

>>Precisely. You know, this is taking that cloud experience that people are wanting, the simplicity of those deployments, and making it where it can come to them in whichever location they want, running on a consumption basis, so that it's a lot easier for them to go ahead and manage and deploy those things without a lot of the internal qualifications and certifications that they had to do over the years.

>>Okay. And, on the other hand, if I want to customize it, maybe I'm a channel partner, I want to bring some of my own value, I've got a specific use case that's not covered by something like Lighthouse: that's where I would go with the more traditional infrastructure, correct?

>>Yes. If anyone wants to do customization, we've got a great set of products for that. We really want to use Lighthouse as a mechanism for us to standardize, and to focus on enabling these broader cloud capabilities for clients.

>>And Lighthouse: talk a little bit more about the automation that I get, you know, things like patching and software updates. That's sort of included in this integrated system, is that correct?

>>Absolutely. You know, when people think about managing workloads in the cloud, they don't worry about taking care of firmware updating and a lot of those things; that's all taken care of by the provider. So, in that same experience, Lighthouse comes with all of the firmware updating and all of the software updating included, all managed through our Green Lake managed services teams. So that's just part of how the system takes care of itself. You know, that's a new level of capability and experience that's consistent with all other cloud providers.

>>And that's okay, so that's something that is a managed service.
So let's say I have a Lighthouse on prem: that managed service is doing all the patching and the releases and the updates. And does that live in the cloud, live at HPE, or live on my prem?

>>Well, yeah, ultimately it all goes through Green Lake Central, and it's managed. You know, all of those deployments are automated in nature, so that people don't have to worry about them. There are multiple ways that can get delivered to them; we have some automation and control plane technology that brings that all together for them. It can vary based on the client, and their degree of how they want to manage some of that, but it's all taken care of for them.

>>And you've got Green Lake in the name. Am I to infer from that that it sort of dovetails in as one of the pieces in the Green Lake mosaic?

>>Yeah, exactly. So think of Green Lake as our broader initiative for everything cloud: how do we start enabling not only these cloud services, but make it easier for people to deploy them and consume them wherever they need. And this is the enablement piece. This is that portion of Green Lake that helps them enable that, connected to Green Lake Central, where they can manage everything centrally. And then we've got that broad catalog of services available.

>>And when can I get it? When do you go GA?

>>Yes, so July is when our first set of shipments and availability are there, just a very few days after you Discover here. And we'll expand the portfolio over time, with a more mainstream version early, and more technical or performance-oriented ones available soon thereafter. And we've got plans even for edge-type offerings further in the future as well. So it's a case where we'll continue to build and expand, targeting these platforms to folks' needs, whether they're enterprise, or maybe there are vertical offerings that they want in terms of how they bring all these things together. Think of telco as a great case where people want this. Healthcare is another area where we can add the value of these integrated systems in a very purpose-built way.

>>Can I ask you what's inside? You know, what can I get in terms of basic infrastructure: compute, storage, networking? What are my options?

>>All of the above. You know, what we'll do is go through the basic selection of all of the greatest hits within our complete portfolio, pulling them together to give you a few simple choices. You know, think about it as: you want general-purpose compute modules, or you might want compute-optimized or memory-optimized modules. Each of those are simple choices that you'll make. Coming together underlying all that are the great infrastructure pieces that you've known for years, but we take care of simplifying that for you, so you don't have to worry about those details.

>>Great. Well, John, congratulations on the new product, and thank you for sharing the update with theCUBE.

>>Thank you very much. Appreciate it.

>>All right, thank you for watching theCUBE's coverage of HPE Discover 2021. My name is Dave Vellante. Keep it right there; we'll be right back with more coverage, right after this short break.

Published Date : Jun 2 2021



Andrew Rafla & Ravi Dhaval, Deloitte & Touche LLP | AWS re:Invent 2020


 

>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners.
So organizations are now, you know, have the need to provide connectivity to some of these other types of devices. But how do you do so in a way that, you know limits the risk of the expanding threat surface that you might be exposing your organization to by supporting from these connected devices? So those are some three kind of macro level drivers that are really, you know, constituting the need to think about security in a different >>way. Right? Well, I love I downloaded. You guys have, ah zero trust point of view document that that I downloaded. And I like the way that you you put real specificity around those five pillars again users, workloads, data networks and devices. And as you said, you have to take this kind of approach that it's kind of on a need to know basis. The less, you know, at kind of the minimum they need to know. But then, to do that across all of those five pillars, how hard is that to put in place? I mean, there's a There's a lot of pieces of this puzzle. Um, and I'm sure you know, we talk all the time about baking security and throughout the entire stack. How hard is it to go into a large enterprise and get them started or get them down the road on this zero trust journey? >>Yeah. So you mentioned the five pillars. And one thing that we do in our framework because we put data at the center of our framework and we do that on purpose because at the end of the day, you know, data is the center of all things. It's important for an organization to understand. You know what data it has, what the criticality of that data is, how that data should be classified and the governance around who and what should access it from a no users workloads, uh, networks and devices perspective. Um, I think one misconception is that if an organization wants to go down the path of zero trust, there's a misconception that they have to rip out and replace everything that they have today. Um, it's likely that most organizations are already doing something that fundamentally aligned to the concept of these privilege as it relates to zero trust. So it's important to kind of step back, you know, set a vision and strategy as faras What it is you're trying to protect, why you're trying to protect it. And what capability do you have in place today and take more of an incremental and iterative approach towards adoption, starting with some of your kind of lower risk use cases or lower risk parts of your environment and then implementing lessons learned along the way along the journey? Um, before enforcing, you know more of those robust controls around your critical assets or your crown jewels, if you >>will. Right? So, Robbie, I want to follow up with you, you know? And you just talked about a lot of the kind of macro trends that are driving this and clearly covert and work from anywhere is a big one. But one of the ones that you didn't mention that's coming right around the pike is five g and I o t. Right, so five g and and I o. T. We're going to see, you know, the scale and the volume and the mass of machine generated data, which is really what five g is all about, grow again exponentially. We've seen enough curves up into the right on the data growth, but we've barely scratched the surface and what's coming on? Five G and I o t. How does that work into your plans? And how should people be thinking about security around this kind of new paradigm? >>Yeah, I think that's a great question, Jeff. 
And as you said, you know, IoT continues to accelerate, especially with the recent investments in 5G that are, you know, pushing more and more industries and companies to adopt IoT. Deloitte has been, you know, helping our customers leverage a combination of these technologies, cloud, IoT, ML, and AI, to solve their problems in the industry. For instance, we've been helping restaurants automate their operations. We've helped automate some of the food safety audit processes they have, which, especially given the COVID situation, has been helping them a lot. We are currently working with companies to connect smart wearable devices that send the patient's vital information back to the cloud, and once it's in the cloud, it goes through further processing upstream, through applications and data lakes, etcetera. The way we've been implementing these solutions is largely by leveraging a lot of the native services that AWS provides, like IoT Device Management, which helps you onboard hundreds of devices and group them into different categories. We leverage IoT Device Defender, a monitoring service for making sure that the devices are adhering to a particular security baseline. We've also implemented AWS Greengrass on the edge, where the device actually resides, so that it acts as a central, secure gateway: all the devices are able to connect to this gateway and then ultimately connect to the cloud. One common problem we run into is that a lot of the legacy IoT devices tend to communicate using insecure protocols, and in clear text, so we actually had to leverage AWS Lambda functions on the edge to convert these legacy protocols to the secure MQTT protocol, so that data is ultimately, you know, sent encrypted to the cloud. So the key thing to recognize, and the transformational shift here, is that cloud has the ability today to impact security of the device and the edge from the cloud, using cloud-native services, and that continues to grow. And that's one of the key reasons we're seeing accelerated growth and adoption of IoT devices. And you brought up a point about 5G, and that's really interesting given a recent set of investments that AWS, for example, has been making: they launched their AWS Wavelength Zones, which allow you to deploy compute and storage infrastructure at the 5G edge. So millions of devices can connect securely to the compute infrastructure without ever having to leave the 5G network, or go over the Internet insecurely to talk to the cloud infrastructure. That allows us to enable our customers to process large volumes of data in near real time, and it also increases the security of the architectures. And I think, truly, this combination of 5G with IoT, cloud, and AI/ML, these are the technologies of the future that are collectively pushing us towards a future where we're going to see more smart cities come into play, driverless connected cars, etcetera.
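(To make that edge pattern concrete, here is a minimal Python sketch of the kind of gateway-side protocol conversion Ravi describes: plaintext readings from a hypothetical legacy device are parsed and republished as JSON over TLS-encrypted MQTT. The endpoint, port, topic, payload format, and certificate paths are illustrative assumptions, not details from the interview.)

```python
# Minimal sketch (not Deloitte's implementation) of an edge gateway that
# converts a legacy clear-text protocol to TLS-encrypted MQTT.
# Assumptions: the legacy device sends plaintext "key=value" readings over
# a local TCP socket; endpoint, port, topic, and cert paths are hypothetical.
import json
import socket

import paho.mqtt.client as mqtt

IOT_ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"  # hypothetical endpoint
LEGACY_PORT = 9000                                        # hypothetical device port

def parse_legacy(payload: bytes) -> dict:
    """Convert a plaintext 'temp=98.6;pulse=72' reading into a dict."""
    pairs = (kv.split("=", 1) for kv in payload.decode().strip().split(";"))
    return {key: float(value) for key, value in pairs}

client = mqtt.Client()
# Mutual TLS: the gateway certificate authenticates it to the broker, and the
# payload travels encrypted instead of in clear text.
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="gateway.cert.pem",
               keyfile="gateway.private.key")
client.connect(IOT_ENDPOINT, 8883)
client.loop_start()

with socket.create_server(("0.0.0.0", LEGACY_PORT)) as server:
    while True:
        conn, _addr = server.accept()
        with conn:
            raw = conn.recv(1024)
            if raw:
                # Republish the legacy reading on a (hypothetical) secure topic.
                client.publish("clinic/vitals", json.dumps(parse_legacy(raw)), qos=1)
```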
>>That's great. Now I want to unpack that a little bit more, because we are here at AWS re:Invent, and I was just looking it up: we had Glenn Goran in 2015 introducing the AWS IoT Cloud, and it was a funny little demo. They had a little greenhouse, and you could turn on the water and open up the windows. But it's a huge suite of services that you guys have at your disposal, leveraging AWS. I wonder, I guess, Andrew, if you could speak a little bit more to the suite of tools that you can now bring to bear when you're helping your customers go on the zero trust journey.

>>Yeah, sure thing. So, obviously there's a significant partnership in place, and we work together pretty tremendously in the market. One of the solution offerings that we've built out, which we dub Deloitte Fortress, is a concept that plays very nicely into our zero trust framework, more along the kind of horizontal components of our framework, which are really the fabric that ties it all together. The two horizontals in our framework are around telemetry and analytics, as well as automation and orchestration. If I peel back the automation and orchestration capability just a little bit: we built this Fortress capability in order for organizations to kind of streamline some of the vulnerability management aspects of the enterprise. And so we're able, through integration with AWS Lambda and other functions, to quickly identify cloud configuration issues and drift, so that organizations can not only quickly identify some of those issues that open up risk to the enterprise, but also, in real time, take some action to close down those vulnerabilities and ultimately remediate them. So it's a way for them to have a more proactive approach to security, rather than a reactive approach. Everyone knows that cloud configuration issues are likely the number one kind of threat vector for attackers, and so we're able to not only help organizations identify those, but then close them down in real time.
So ah, good way of doing this and is leveraging automation and orchestration is just a capability that enhances your operational efficiency by streamlining summed Emmanuel in repetitive tasks, there's numerous examples off what automation and orchestration could do, but from a security context. Some of the key examples are automated security operations, automated identity provisioning, automated incident response, etcetera. One particular use case that Deloitte identified and built a solution around is the identification and also the automated remediation of Cloud security. Miss Consideration. This is a common occurrence and use case we see across all our customers. So the way in the context of a double as the way we did this is we built a event driven architectures that's leveraging eight of us contribute config service that monitors the baselines of these different services. Azzan. When it detects address from the baseline, it fires often alert. That's picked up by the Cloudwatch event service that's ultimately feeding it upstream into our workflow that leverages event bridge service. From there, the workflow goes into our policy engine, which is a database that has a collection off hundreds of rules that we put together uh, compliance activities. It also matched maps back to, ah, large set of controls frameworks so that this is applicable to any industry and customer, and then, based on the violation that has occurred, are based on the mis configuration and the service. The appropriate lambda function is deployed and that Lambda is actually, uh, performing the corrective actions or the remediation actions while, you know, it might seem like a lot. But all this is happening in near real time because it is leveraging native services. And some of the key benefits that our customers see is truly the ease of implementation because it's all native services on either worse and then it can scale and, uh, cover any additional eight of those accounts as the organization continues to scale on. One key benefit is we also provide a dashboard that provides visibility into one of the top violations that are occurring in your ecosystem. How many times a particular lambda function was set off to go correct that situation. Ultimately, that that kind of view is informing. Thea Outfront processes off developing secure infrastructure as code and then also, you know, correcting the security guard rails that that might have drifted over time. Eso That's how we've been helping our customers and this particular solution that we developed. It's called the Lloyd Fortress, and it provides coverage across all the major cloud service providers. >>Yeah, that's a great summary. And I'm sure you have huge demand for that because he's mis configuration things. We hear about him all the time and I want to give you the last word for we sign off. You know, it's easy to sit on the side of the desk and say, Yeah, we got a big security and everything and you got to be thinking about security from from the time you're in, in development all the way through, obviously deployment and production and all the minutes I wonder if you could share. You know, you're on that side of the glass and you're out there doing this every day. Just a couple of you know, kind of high level thoughts about how people need to make sure they're thinking about security not only in 2020 but but really looking down the like another road. >>Yeah, yeah, sure thing. So, you know, first and foremost, it's important to align. 
Uh, any transformation initiative, including your trust to business objectives. Right? Don't Don't let this come off as another I t. Security project, right? Make sure that, um, you're aligning to business priorities, whether it be, you know, pushing to the cloud, uh, for scalability and efficiency, whether it's digital transformation initiative, whether it be a new consumer identity, Uh uh, an authorization, um, capability of china built. Make sure that you're aligning to those business objectives and baking in and aligning to those guiding principles of zero trust from the start. Right, Because that will ultimately help drive consensus across the various stakeholder groups within the organization. Uh, and build trust, if you will, in the zero trust journey. Um, one other thing I would say is focus on the fundamentals. Very often, organizations struggle with some. You know what we call general cyber hygiene capabilities. That being, you know, I t asset management and data classifications, data governance. Um, to really fully appreciate the benefits of zero trust. It's important to kind of get some of those table six, right? Right. So you have to understand, you know what assets you have, what the criticality of those assets are? What business processes air driven by those assets. Um, what your data criticality is how it should be classified intact throughout the ecosystem so that you could really enforce, you know, tag based policy, uh, decisions within, within the control stack. Right. And then finally, in order to really push the needle on automation orchestration, make sure that you're using technology that integrate with each other, right? So taken a p I driven approach so that you have the ability to integrate some of these heterogeneous, um, security controls and drive some level of automation and orchestration in order to enhance your your efficiency along the journey. Right. So those were just some kind of lessons learned about some of the things that we would, uh, you know, tell our clients to keep in mind as they go down the adoption journey. >>That's a great That's a great summary s So we're gonna have to leave it there. But Andrew Robbie, thank you very much for sharing your insight and and again, you know, supporting this This move to zero trust because that's really the way it's got to be as we continue to go forward. So thanks again and enjoy the rest of your reinvent. >>Yeah, absolutely. Thanks for your time. >>All right. He's Andrew. He's Robbie. I'm Jeff. You're watching the Cube from AWS reinvent 2020. Thanks for watching. See you next time.

Published Date : Dec 8 2020



HPE Spotlight Segment v2


 

>>From around the globe, it's theCUBE, with digital coverage of HPE Green Lake Day, made possible by Hewlett Packard Enterprise.

>>Okay, we're now going to dive right into some of the news and get into the Green Lake announcement details. And with me to do that is Keith White. He's the senior vice president and general manager for Green Lake Cloud Services at Hewlett Packard Enterprise. Keith, thanks for your time. Great to see you.

>>Hey, thanks so much for having me. I'm really excited to be here.

>>You're welcome. And so listen, before we get into the hard news, can you give us an update on just Green Lake and the business? How's it going?

>>You bet. No, it's fantastic. And thanks, you know, for the opportunity again. And hey, I hope everyone's at home staying safe and healthy. It's been a great year for HPE Green Lake. There's a ton of momentum that we're seeing in the marketplace. We've booked over $4 billion of total contract value to date, and that's over 1,000 customers worldwide; and frankly, it is worldwide, in 50 different countries, and this is a variety of solutions, a variety of workloads. So really just tons of momentum. But it's not just about accelerating the current momentum. It's really about listening to our customers, staying ahead of their demands, delivering more value to them, and really executing on the HPE Green Lake promise.

>>Great. Thanks for that, and really great detail. Congratulations on the progress, but I know you're not done. So let's get to the news. What do people need to know?

>>Awesome. Yeah, you know, there's three things that we want to share with you today. So first is all about HPC; I'll go into some details on that. Second, we're delivering new industry workloads, which I think will be exciting for a lot of the major industries that are out there. And then third, we're expanding HPE capabilities, just to make things easier and more effective. So first off, you know, we're excited to announce today an acceleration of mainstream adoption of high performance computing through HPE Green Lake. And, you know, in essence, what we're really excited about is this unique opportunity to provide customers with the power of an agile, elastic, pay-per-use cloud experience with HPE's market-leading HPC systems. So pretty soon, any enterprise will be able to tackle their most demanding compute- and data-intensive workloads, and power artificial intelligence and machine learning initiatives, to provide better business insights and outcomes: again, providing things like faster time to insight and accelerated innovation. So today's news is really, really going to help speed up deployment of HPC projects by 75%, and reduce TCO by up to 40%, for customers.

>>That's awesome. I'm excited to learn more about the HPC piece especially. So tell us, what's really different about the news today, from your perspective?

>>No, that's the great thing. The idea is to really help customers with their business outcomes: from building safer cars, to improving their manufacturing lines with sustainable materials, to advancing discovery for drug treatments, especially in this time of COVID, or making critical millisecond decisions for those finance markets. So you'll see a lot of benefits and a lot of differentiation for customers in a variety of different scenarios and industries.

>>Yeah, so I wonder if you could talk a little bit more about, specifically, you know, exactly what's new. Can you unpack some of that for us?

>>You bet.
Well, what's key is that any enterprise will be able to run their modeling and simulation workloads in a fully managed, pre-bundled fashion, because we manage everything for them. So we'll give folks this idea of small, medium, and large HPE HPC services, to operate in any data center or in a colocation facility. These workloads are almost impossible to move to the public cloud, because the data is so large, or it needs to be close by for latency issues. Oftentimes, people have concerns about IP protection, or about applications and how they run within that local environment. So if customers are betting their business on this insight and analytics, which many of them are, they need business-critical performance, and experts to help them with implementation and migration, as well as resiliency.

>>So is this a do-it-yourself model? In other words, you know, do the customers have to manage it on their own? Or how are you helping there?

>>No, it's a great question. So the fantastic thing about HPE Green Lake is that we manage it all for the customer. And so, in essence, they don't have to worry about anything on the back end. We handle that: we manage capacity, we manage performance, we manage updates, and all of those types of things. So we really make it super simple. And, you know, we're offering these bundled solutions featuring our HPE Apollo systems, which are purpose-built for running things like modeling and simulation workloads. And again, because it's Green Lake, and because it's cloud services, this provides self-service and automation. And, you know, customers can manage it however they want to: we can do it all for them, or they can do some on their own. It's really super easy, and it's really up to them how they want to manage that system.

>>What about analytics? You know, a lot of people want to dig deeper into the data. How are you supporting that?

>>Yeah, analytics is key. And one of the best things about this HPC implementation is that we provide an open platform, so customers have the ability to leverage whatever tools they want for analytics. They can manage whatever systems they want to pull data from, so they really have a ton of flexibility. But the key is, because it's HPE Green Lake, and because it's HPE's market-leading HPC systems, they get the fastest systems, they get it all managed for them, and they only pay for what they use, so they don't need to write a huge check up front. And frankly, they get the best of all those worlds together, in order to come up with the things that matter to them, which is that true business outcome, true analytics, so that they can make the decisions they need to run their business.

>>Yeah, that's awesome. You guys are clearly making some good progress here, and actually, I see it really is a game changer for the types of customers that you described: I mean, particularly those folks that, like you said, think they can't move stuff into the cloud. They've got to stay on prem, but they want that cloud experience. I mean, that's really exciting. We're going to have you back in a few minutes to talk about the Green Lake cloud services and some of the new industry platforms that you see evolving.

>>Awesome. Thanks so much. I look forward to it.

>>Yeah, us too. So, okay: right now we're going to check out the conversation that I had earlier with Pete Ungaro and Addison Snell on HPC. Let's watch.
>>Welcome, everybody, to the spotlight session here at Green Lake Day. We're going to dig into high performance computing. Let me first bring in Pete Ungaro, who's the GM for HPC and mission-critical solutions at Hewlett Packard Enterprise, and then we're going to pivot to Addison Snell, who is the CEO of the research firm Intersect360. So, Pete, starting with you: welcome, and it's really a pleasure to have you here. I want to first start off by asking you, what are the key trends that you see in the HPC and supercomputing space? And I'd really appreciate it if you could talk about how customer consumption patterns are changing.

>>Yeah, I appreciate that, David, and thanks for having me. You know, I think the biggest thing that we're seeing is just the massive growth of data. And as we get larger and larger data sets, larger and larger models happen, and we're having more and more new ways to compute on that data, so new algorithms; AI would be a great example of that. And as people are starting to see this, especially as they're going through digital transformations, you know, more and more people, I believe, can take advantage of HPC, but maybe don't know how, and don't know how to get started. And so they're looking for how to get going in this environment. And many customers that are long-time HPC customers, you know, just consume it in their own data centers; they have that capability. But many don't, and so they're looking at: how can I do this? Do I need to build up that capability myself? Do I go to the cloud? What about my data, and where does that reside? So there's a lot of things that go into thinking through how to start to take advantage of this new infrastructure.

>>Excellent. I mean, we all know HPC workloads: you're talking about supporting research and discovery for some of the toughest and most complex problems, particularly those affecting society. So I'm interested in your thoughts on how you see Green Lake helping in these endeavors specifically.

>>Yeah, one of the most exciting things about HPC is just the impact that it has, you know, everywhere: from, you know, building safer cars and airplanes, to looking at climate change, to, you know, finding new vaccines for things like COVID that we're all dealing with right now. So one of the biggest things is: how do we take advantage of that, and use it to, you know, benefit society overall? And as we think about implementing HPC, you know, how do we get started, and then how do we grow and scale as we get more and more capability? Those are the biggest things that we're seeing on that front.

>>Yes. Okay. So just about a year ago, you guys launched the Green Lake initiative, and the whole, you know, complete focus on as-a-service. So I'm curious as to how the new Green Lake services, the HPC services specifically as they relate to Green Lake, fit into HPE's overall high performance computing portfolio and strategy.

>>Yeah, great question. You know, Green Lake is a new consumption model for us, so it's very exciting. We keep our entire HPC portfolio that we have today, but extend it with Green Lake and offer customers, you know, expanded consumption choices. So, you know, customers that potentially are dealing with the growth of their data, or are moving to digital transformation applications, can use Green Lake to easily scale up from workstations, to manage their system costs or operational costs; or, if they don't have staff to expand their environment, Green Lake provides all of that in a managed infrastructure for them.
So if they're going from, like, a pilot environment up into a production environment over time, Green Lake enables them to do that very simply and easily, without having to have all that internal infrastructure: people, compute, data centers, etcetera. Green Lake provides all that for them, so they can have a turnkey solution for HPC.

>>So a lot easier entry strategy. A key word that you used there was choice, though. So basically you're providing optionality; you're not necessarily forcing them into a particular model. Is that correct?

>>Yeah, 100%, Dave. What we want to do is just expand the choices, so customers can acquire and use that technology to their advantage, whether they're large or small, whether they're, you know, a startup or a Fortune 500 company, whether they have their own data centers or they want to, you know, use a colo facility, whether they have their own staff or not. We want to just provide them the opportunity to take advantage of this leading-edge resource.

>>Very interesting, Pete. I really appreciate the perspective that you bring to the market. I mean, it seems to me it's going to really accelerate broader adoption of high performance computing to the masses, really giving them an easier entry point. I want to bring in now Addison Snell to the discussion. Addison, he's the CEO, as I said, of Intersect360, which, in my view, is the world's leading market research company focused on HPC. Addison, you've been following the space for a while, you're an expert, and you've seen a lot of changes over the years. What do you see as the critical aspect in the market, specifically as it relates to this as-a-service delivery that we were just discussing with Pete? And I wonder if you could sort of work in the benefits, in terms of, in your view, how it's going to affect HPC usage broadly.

>>Yeah, good morning, David. Thanks very much for having me. Pete, it's great to see you again. So we've been tracking a lot of these utility computing models in high performance computing for years, particularly as most of the usage, by revenue, is actually by commercial endeavors, using high performance computing for their R&D and engineering projects and the like. And cloud computing has been a major portion of that, and it has the highest growth rate in the market right now: we're seeing this double-digit growth that accounted for about $1.4 billion of the high performance computing industry last year. But the bigger trend, which makes Green Lake really interesting, is that we saw an additional, roughly a billion dollars' worth of spending outside what was directly measured in the cloud portion of the market, in areas that we deemed to be cloud-like: as-a-service types of contracts that were still utility computing, but that might sit under a software-as-a-service portion of the budget, under software, or under some other managed-services type of contract, so the user didn't report it directly as cloud, but it was certainly influenced by utility computing. And I think that's going to be a really dominant portion of the market going forward, when we look at growth rates and where the market's been evolving.

>>So that's interesting. I mean, basically, you're saying the utility model is not brand new; we've seen that for years. Cloud was obviously a catalyst that gave that a boost. What is new, you're saying... and I'll say it this way:
and I'd love to get your independent perspective on this: the definition of cloud is expanding, where, you know, people always say it's not a place, it's an experience, and I couldn't agree more. But I wonder if you could give us your independent perspective on that, both on what I just said, but also: how would you rate HPE's position in this market?

>> Well, you're right, absolutely, that the definition of cloud is expanding, and that's a challenge when we run our surveys; we try to be pedantic, in a sense, and define exactly what we're talking about. That's how we're able to measure both the direct usage of a typical public cloud, and also a more flexible notion of as-a-service. Now, you asked about HPE in particular, and that's extremely relevant, not only with Green Lake but with their broader presence in high performance computing. HPE is the number one provider of systems for high performance computing worldwide, and that's largely based on the breadth of HPE's offerings, in addition to their performance in various segments. So they pick up a lot of the commercial market with their HPE Apollo Gen10 Plus systems, they hit a lot of big-memory configurations with Superdome Flex, and they scale up to some of the most powerful supercomputers in the world with the HPE Cray EX platforms that go into some of the leading national labs. Now, Green Lake gives them an opportunity to offer this kind of flexibility to customers: rather than committing all at once to a particular purchase price, you can position those systems on a utility computing basis and pay for them as a service, without committing to a particular public cloud. I think that's an interesting role for Green Lake to play in the market.

>> Yeah, it's interesting. I mean, earlier this year we celebrated Exascale Day with support from HPE, and it really is all about a community and an ecosystem; there's a lot of camaraderie going on in this space that you guys are deep into. Addison, as we wrap, what should observers expect in this HPC market, in this space, over the next few years?

>> Yeah, that's a great question: what to expect, because if 2020 has taught us anything, it's the hazards of forecasting where we think the market is going. When we put out a market forecast, we tend not to factor in huge things like unexpected pandemics or wars. But it's relevant to the topic here because, as I said, we were already forecasting cloud and as-a-service models growing. Any time you get into uncertainty, where it becomes less easy to plan for where you want to be in two years, three years, five years, that speaks well to models that are cloud or as-a-service, which handle change flexibly. And therefore, when we look at the market and plan out where we think it is in 2020 and 2021, anything that accelerates uncertainty is actually going to increase the need for something like Green Lake, or an as-a-service or cloud type of environment. So we're expecting those sorts of deployments to come in over and above where we had previously expected them in 2020 and 2021, because as-a-service deals well with uncertainty, and that's just the world we've been in recently.

>> I think those are great comments and a really good framework. And we've seen this with the pandemic: the pace at which the technology industry in particular, and of course HPE specifically, have responded to support that; your point about agility and flexibility being crucial.
And I'll go back to something earlier that Pete said around the data: the sooner we can get to the data to analyze things, whether it's compressing the time to a vaccine or pivoting our businesses, the better off we are. So I want to thank Pete and Addison for your perspectives today. Really great stuff, guys. Thank you.

>> Yeah, thank you.

>> All right, keep it right there for more great insights and content. You're watching Green Lake Day.

All right, great discussion on HPC. Now we're going to get into some of the new industry examples, some of the case studies, and new platforms. Keith, HPE Green Lake is moving forward, that's clear; you're picking up momentum with customers. But can you give us some examples of platforms for industry use cases, and some specifics around that?

>> You know, you bet. And actually, you'll hear more details from Arwa Qadoura, who leads our Green Lake go-to-market efforts, in just a little bit. But specifically, I want to highlight some examples where we provide cloud services to help solve some of the most demanding workloads on the planet. So, first off, in financial services, for example: traditional banks are facing increased competition and evolving customer expectations. They need to transform so that they can reduce risk, manage costs, and provide a differentiated customer experience. We'll talk about a platform for Splunk that does just that. Second, health care institutions face a growing list of challenges, some due to the COVID-19 pandemic and others years in the making, like our aging population; the rise in chronic disease is really driving up demand and straining capital budgets. These global trends create a critical need for transformation, to improve the patient experience and their business outcomes. Another example is manufacturing: they're facing many challenges in order to remain competitive, right? They need to be able to identify new revenue streams, run more efficiently from an operations standpoint, and scale their resources. So you'll hear more about how we're optimizing delivery for manufacturing with SAP HANA. I'm also going to highlight, in a little more detail, today's news on how we're delivering supercomputing through HPE Green Lake at scale, and, finally, how we have a robust ecosystem of partners to help enterprises easily deploy these solutions. For example, I think today you're going to be talking to Skip Bacon from Splunk.

>> Yeah, absolutely, we sure are. And some really great examples there, especially a couple of industries that stood out. I mean, financial services and health care: they're ripe for transformation, and maybe disruption, if they don't move fast enough. So, Keith, we'll be coming back to you a little later today to wrap things up. So thank you. Now we're going to take a look at how HPE is partnering with Splunk and how Green Lake complements data-rich workloads. Let's watch.

We're now going to dig deeper into a data-oriented workload and how HPE Green Lake fits into this use case. And with me is Skip Bacon, vice president of product management at Splunk. Skip, good to see you.

>> Good to see you as well, Dave.

>> So let's talk a little bit about Splunk. I mean, you guys are a dominant player in security and analytics. And you know, it's funny, Skip: I used to comment during the rise of big data that Splunk really never positioned themselves as this big data player, with all that hype, but you became kind of the leader in big data without really even, you know, promoting it.
It just happened overnight. And you're really now rapidly moving toward a subscription model; you're making some strategic moves on the M&A front. Give us your perspective on what's happening at the company, and why customers are so passionate about your software.

>> Sure, a great setup, Dave, thanks. So, yeah, let's start with the data that's underneath big data, right? I think, as usual, the industry seizes on a term and never stops to think about what it really means. Sure, one big part of big data is your transactional stuff, right: the things that get generated by all of your Oracle- and SAP-type systems, the records that reflect how the business actually occurred. But a much bigger part is all of your digital artifacts: all of the machine-generated data that tells you the whole story about what led up to the things that actually happened, right, within the systems, within the interactions between those systems. That's where Splunk is focused. And I think what the market as a whole is really validating is that that machine-generated data, those digital artifacts, are at least as important, if not more so, than the transactional artifacts to this whole digital transformation problem. They're critical to showing IT how to get better at developing and deploying and operating software, how to get better at securing these systems, and then how to take this real-time view of what the business looks like as it's executing in the software right now, hold that up to and inform the business, and close that feedback loop: right, what is it we want to do differently digitally in order to do different and better on the transformation side of the house? So I think a lot of Splunk's general growth is proof of the value prop and the need here, for sure, as we're seeing play out specifically in the domains of IT operations, DevOps, and cybersecurity, as well as, more broadly, in closing that business loop. Splunk's been on a tear, growing our footprint overall with our existing customers and across many new customers, and we've been on a tear with moving parts of that footprint to an as-a-service offering in Splunk Cloud. But a lot of that overall growth is really fueled by just making it simpler, quicker, faster, cheaper, easier to operate Splunk at scale, because the data is certainly not slowing down, right? There's more and more and more of it every day, more latent potential value locked up in it. So anything that we can do, and that our partners can do, to improve the cost economics, to improve the agility, to improve the responsiveness of these systems is huge to that customer value prop. And that's where we get so excited about what's going on with Green Lake.

>> Yeah, so that makes sense. I mean, a digital business is a data business, and that means putting data at the core, and Splunk is obviously a key part of that. So, as I said earlier, Splunk is a leader in this space: what's the deal with your HPE relationship? You touched on that; what should we know about your partnership? What's the solution with HPE, and what's the customer sweet spot?

>> Yep, all good questions. So we've been working with HPE for quite a while on a number of different fronts. This Green Lake piece is the most interesting, sort of the purest intersection of both of these threads, these trajectories, if you will. So we've been working to take our core data platform, deployed via an enterprise operator for Kubernetes,
and stick that atop HPE Green Lake, which is really a Kubernetes-as-a-service platform, and go prove performance, scalability, agility, flexibility, and cost economics, starting with some of Splunk's biggest customers. And we've proven, you know, a lot of those things in great measure. I think the ability to vertically scale Splunk in containers atop beefy boxes, and really streamline the automation, the orchestration, the operations, all of that yields what one of our mutual customers literally described as "a transformational platform for deploying and operating Splunk" for them. So we're hard at work on the engineering side, hard at work on the architectural reference, sizing, and capacity planning sides, and then increasingly rolling up our sleeves and taking this stuff to market together.

>> Yeah. I mean, we're seeing just the idea of cloud, the definition of cloud, expanding: hybrid brings in on-prem, and we talked about the edge. And we've seen Splunk rapidly transitioning its pricing model to a subscription platform, if you will. And of course, that's what Green Lake is all about. What makes Splunk a good fit for Green Lake, and vice versa? What does it mean for customers?

>> Sure. So a couple of different parts, I think, make this a perfect marriage. Splunk at its core, if you're using it well, you're using it in a very iterative, discovery-driven, kind of follow-the-path-to-value basis, and that makes it a little hard to plan the infrastructure and size these things, right? We really want customers to be focused on how to get more data in and how to get more value out. And if you're doing it well, those things are going to go up and up and up over time. You don't want to be constrained by sizing and capacity planning and procurement cycles for infrastructure. So in the Green Lake model, you know, the customer's got already-deployed systems, already-deployed capacity, available on an as-a-service basis, very fast, very agile. If they need the next tranche of capacity to bring in that next data set or run that next set of analytics, right, it's available immediately as a service, not "hey, we've got to kick off the procurement cycle for a whole bunch more hardware boxes." So that flexibility, that agility, are key to the general pattern for using Splunk. And again, that ability to vertically scale, to stick multiple Splunk instances into containers and load more and more of those up on these physical boxes, right, gives you great cost economics. You know, Splunk has a voracious appetite for data, for doing analytics against that data; the less expensive we can make that processing, the better. And the ability to really fully sweat, to fully utilize, those assets, that kind of vertical scale, is the other great element of the Green Lake solution.
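(To make Skip's scaling point concrete: on a Kubernetes-as-a-service platform like the one he describes, adding indexer capacity or resizing containers reduces to an API call. The sketch below is purely illustrative; the namespace, StatefulSet, and container names are hypothetical, and it uses the generic Kubernetes Python client rather than any actual Green Lake or Splunk Operator tooling.)

    # Hypothetical sketch: scaling a Splunk-style indexer tier on a
    # Kubernetes-as-a-service platform. All names are illustrative.
    from kubernetes import client, config

    config.load_kube_config()          # or load_incluster_config() inside the cluster
    apps = client.AppsV1Api()

    NAMESPACE = "splunk"               # assumed namespace
    STATEFULSET = "splunk-indexer"     # assumed StatefulSet name

    # Horizontal scale: bring on the "next tranche of capacity" by
    # adding indexer replicas, with no hardware procurement cycle.
    apps.patch_namespaced_stateful_set(
        name=STATEFULSET,
        namespace=NAMESPACE,
        body={"spec": {"replicas": 6}},
    )

    # Vertical scale: give each container more CPU and memory, packing
    # multiple bigger Splunk instances onto the same physical boxes.
    apps.patch_namespaced_stateful_set(
        name=STATEFULSET,
        namespace=NAMESPACE,
        body={"spec": {"template": {"spec": {"containers": [{
            "name": "splunk",          # assumed container name
            "resources": {
                "requests": {"cpu": "8", "memory": "32Gi"},
                "limits": {"cpu": "16", "memory": "64Gi"},
            },
        }]}}}},
    )

(Either patch takes effect in seconds on already-deployed capacity, which is the contrast Skip draws with hardware procurement cycles.)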
>> What do you see as the differentiators with Splunk and HPE, maybe versus the way we used to do things, but also versus, you know, modern-day competition?

>> Yeah, all good questions. So I think the general attributes of Splunk are differentiated, and Green Lake is differentiated, and when you put them together, you get this classic one-plus-one-equals-three story. What I hear from a lot of our target customers, big enterprises and big public sector customers, is that they can see the path to these benefits. They understand in theory how these different technologies would work together, but they're concerned about their own skills and abilities to go build and run those. And the real beauty of Green Lake and Splunk is that this all comes sort of pre-designed, pre-integrated, right? HPE is then there providing these running containers as a service. So it's taking a lot of the skills and the concerns off the customer's plate, right, allowing them to fast-forward to cutting-edge technology without any of the risk, and then, most importantly, allowing customers to focus their very finite resources, their people, their time, their money, their cycles, on the things that are going to drive differentiated value back to the business. You know, let's face facts: buying and provisioning hardware is not a differentiating activity. Running containers successfully: not differentiating. Running the core of Splunk: not that differentiating. You can take all of those cycles and focus them instead on the simple mechanics: how do we get more data in, run more analytics on it, and get more value out? Right, then you're on the path to really delivering differentiated, sustainable-competitive-advantage-type stuff back to the business, back to that digital transformation effort. So taking the skills concerns out, taking the worries about new tech out, taking the procurement cycles out, improving scalability: again, quicker, faster, cheaper, better, for sure.

>> It's kind of interesting when you look at how the parlance has evolved, from cloud, and then you had private cloud, and we talk a lot about hybrid. But I'm interested in your thoughts on why Splunk and HPE Green Lake now. I mean, what's happening in the market that makes this the right place and the right time, so to speak?

>> Yeah, again, I put cloud right up there with big data as one of those really overloaded terms that we keep redefining as we go. One way to define it is as an experience, a set of outcomes that customers are looking for, right? What does any one of our mutual customers really want? Well, they want capabilities that are quick to get up and running, that are fast to get to value, that are aligned, price-wise, with how they deliver value to the business, and that they can quickly change, right, as the needs of the business and the operation shift. I think that's the outcome set that people are looking to. Certainly in the early days of cloud, we thought it was synonymous with public cloud: hey, the way that you get those outcomes is you push things out to the public cloud providers. You know, what we saw is a lot of that motion in cases where there wasn't the best of alignment, right? You didn't get all those outcomes that you were hoping for; the cost savings weren't there. Or, again, these big enterprises, these big organizations, have a whole bunch of other workloads that aren't necessarily public-cloud amenable. But what they want is that same cloud experience.
And this is where you see the evolution into hybrid clouds and into private clouds. Any one of our customers is looking across the entirety of this landscape: things that are on-prem that are probably going to be on-prem forever; things that they're moving into private cloud environments; things that they're moving into, or growing, or expanding, or landing net-new in, public cloud. They want those same outcomes, the same characteristics, across all of that. That's a lot of Splunk's value prop as a provider, right? We can go monitor and help you operate and develop and secure exactly all of that, no matter where it's located. Splunk on Green Lake is all about that stack, you know, working in that very cloud-native way even where it made sense for customers to deploy and operate their own software, so that even the Splunk they're running over here themselves operates like the modern, secure workloads that they put into their public cloud environments.

>> Well, it's another key proof point that we're seeing throughout the day here: a software leader, you know, and HPE bringing together its ecosystem partners to actually deliver tangible value to customers. Skip, great to hear your perspective today. Really appreciate you coming on the program.

>> My pleasure. And thanks so much for having us. Take care, stay well.

>> Yeah, cheers, you too. Okay, keep it right there; we're going to go back to Keith now and have him close out this segment of the program. You're watching HPE Green Lake Day on theCUBE.

All right, so we're seeing some great examples of how Green Lake is supporting a lot of different industries, a lot of different workloads. We just heard from Splunk, really as part of the ecosystem, really a data-heavy workload, and we're seeing the progress: the HPC example, manufacturing, and we talked about healthcare and financial services, critical industries that are really driving toward the subscription model. So, Keith, thanks again for joining us. Is there anything else that we haven't hit that you feel our audience should know about?

>> Yeah, you bet. You know, we didn't cover some of the new capabilities that are really providing customers with a holistic experience to address their most demanding workloads with HPE Green Lake. So first is our Green Lake managed security services. This provides customers with an enterprise-grade managed security solution that delivers lower costs and frees up a lot of their resources. The second is our HPE Advisory and Professional Services group. They help provide customers with tools and resources to explore their needs for their digital transformation: think workshops, and trials, and proofs of concept, and all of that implementation. So you get the strategy piece, you get the advisory piece, and then you get the implementation piece that's required to help them get started really quickly. And then third would be our HPE Ezmeral software portfolio. This provides customers with the ability to modernize their apps and data, unify hybrid cloud and edge computing, and operationalize artificial intelligence, machine learning, and analytics.

>> You know, I'm glad that you brought in the sort of machine intelligence piece, the machine learning, because a lot of times that's the reason why people want to go to the cloud. At the same time, you bring in the security piece, a lot of reasons why people want to keep things on-prem. And, of course, the use cases here:
we're talking about really bringing that cloud experience, that consumption model, on-prem. I think it's critical for companies, because they're expanding their notion of cloud computing, really extending into hybrid and the edge with that similar experience, or substantially the same experience. So I think folks are going to look at today's news as real progress. We're pushing you guys on some milestones and some proof points toward this vision, at a critical juncture for organizations, especially those that are looking for comprehensive offerings to drive their digital transformations. Your thoughts, Keith?

>> Yeah. You know, we know as many as 70% of current and future apps and data are going to remain on-prem. They're going to be in data centers, they're going to be in colos, they're going to be at the edge, and, you know, really for critical reasons. And so hybrid is key. As you mentioned a number of times, we want to help customers transform their businesses and really drive business outcomes in this hybrid, multi-cloud world with HPE Green Lake and our targeted solutions.

>> Excellent. Keith, thanks again for coming on the program. Really appreciate your time.

>> Always, always. Thanks so much for having me, and take care. Stay healthy, please.

>> All right, keep it right there, everybody. You're watching HPE Green Lake Day on theCUBE.

Making AI Real – A practitioner’s view | Exascale Day


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise.

>> Hey, welcome back. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios for the ongoing coverage and celebration of Exascale Day: 10 to the 18th, on October 18th, 10 with 18 zeros. It's all about big, powerful, giant computing and computing resources and computing power. And we're excited to invite back our next guest; she's been on before. She's Dr. Arti Garg, head of advanced AI solutions and technologies for HPE. Arti, great to see you again.

>> Great to see you.

>> Absolutely. So let's jump in. Before we get into Exascale Day, I was just looking at your LinkedIn profile: such a very interesting career. You've done time at Lawrence Livermore, you've done time in the federal government, you've done time at GE in industry. I'd just love it if you could share a little bit of your perspective going from hardcore academia, to some government positions, then into industry as a data scientist, and now with, originally, Cray, and now HPE, looking at it really from more of a vendor side.

>> Yeah. So I think in some ways I'm like a lot of people who've had the title of data scientist somewhere in their history, in that there's no single path to working in this industry. I come from a scientific background; I have a PhD in physics, so that's where I started working with large data sets. I think of myself as a data scientist from before the term data scientist was a term. And I think it's an advantage to have seen this explosion of interest in leveraging data to gain insights, whether that be into the structure of the galaxy, which is what I used to look at, or into maybe new types of materials that could advance our ability to build lightweight cars or safety gear. It allows you to take a perspective where you not only understand what the technical challenges are, but also what the implementation challenges are, and why it can be hard to use data to solve problems.

>> Well, I'd just love to get your perspective, because you are into data, you chose it as your profession, and you probably run with a whole lot of people that are also like-minded in terms of data. As an industry and as a society, we're trying to get people to do a better job of making data-based decisions and getting away from their gut and actually using data. I wonder if you can talk about the challenges of working with people who don't come from such an intense data background: getting them to understand the value of a data-driven decision-making process, or even just that it's worth the effort, because it's not easy to get the data, and cleanse the data, and trust the data, and get the right context. Working with people that don't come from that background and aren't so entrenched in that point of view, what surprises you? How do you help them? What can you share in terms of helping everybody get to be a more data-centric decision maker?

>> So I would actually rephrase the question a little bit, Jeff, and say that actually, I think people have always made data-driven decisions. It's just that in the past we maybe had less data available to us, or the quality of it was not as good.
And so, as a result, most organizations have organized themselves to make decisions and to run their processes based on a much smaller and more refined set of information than is currently available, given our ability to generate lots of data through software and sensors, our ability to store that data, and then our ability to run a lot of computing cycles and a lot of advanced math against that data, to learn things that maybe in the past took hundreds of years of experiments and scientists to understand. And so, before I jump into how you overcome that barrier, I'll use an example, because you mentioned I used to work in industry, at GE. One of the things that I often joked about is the number of times I discovered Bernoulli's principle in data coming off of GE jet engines. You could do that overnight, processing these large data sets, but of course, historically, it took hundreds of years to really understand these physical principles. And so, when it comes to how we bridge the gap between people who are adept at processing large amounts of data and running algorithms to pull insights out, and everyone else, I think it's both sides. I think it's those of us coming from the technical background really understanding the way decisions are currently made, the way processes and operations currently work at an organization, and understanding why those things are the way they are: maybe there are security or compliance or accountability concerns that a new algorithm can't just replace. So I think it's on our end to really try to understand those concerns, and to make sure that whatever new approaches we're bringing address them. And I think for folks who aren't necessarily coming from a large-data-set and analytical background, and when I say analytical, I mean in the data science sense, not in the sense of thinking about things in an abstract way, it's to really recognize that these are just tools that can enhance what they're doing, and they don't necessarily need to be frightening. Because the people who have been, say, operating electric grids for a long time, or fixing aircraft engines, have a lot of expertise and a lot of understanding, and that's really important to making any kind of AI-driven solution work.
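(Arti's jet-engine anecdote is easy to reproduce in miniature. The sketch below is illustrative only: it uses synthetic data standing in for engine sensor logs, and shows a plain least-squares fit "rediscovering" Bernoulli's relation p + 0.5*rho*v^2 = constant from nothing but noisy pressure and velocity samples.)

    # Illustrative only: synthetic data in place of real engine telemetry.
    # A least-squares fit "rediscovers" Bernoulli: p + 0.5*rho*v^2 = C.
    import numpy as np

    rng = np.random.default_rng(0)
    rho = 1.2            # assumed air density, kg/m^3
    C = 101_325.0        # assumed total-pressure constant, Pa

    v = rng.uniform(20, 120, size=5_000)   # flow velocity samples, m/s
    p = C - 0.5 * rho * v**2               # Bernoulli's principle
    p += rng.normal(0, 50, size=v.size)    # sensor noise

    # Fit p = a*v^2 + b; expect a ~ -0.5*rho and b ~ C.
    a, b = np.polyfit(v**2, p, deg=1)
    print(f"recovered 0.5*rho ~ {-a:.3f}  (true {0.5 * rho:.3f})")
    print(f"recovered constant ~ {b:.1f} Pa  (true {C:.1f} Pa)")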
>> That's great insight, but I do think one thing that's changed is that you come from a world where you had big data sets, so you have a big-data-set point of view, where I think a lot of decision makers didn't have that data before. So we won't go through all the up-and-to-the-right explosions of data, and obviously we're talking about Exascale Day, but I think for a lot of processes now, the amount of data that they can bring to bear so dwarfs what they had in the past that, before they even consider how to use it, they still have to contextualize it, and they have to manage it, and they have to organize it, and there's data silos. So there's all this kind of nasty process stuff that's in the way, which some would argue has been kind of a real problem with the promise of BI and decision support tools. So as you look at this new stuff and these new data sets, what are some of the people and process challenges, beyond the obvious things that we can think about, which are the technical challenges?

>> So I think that you've really hit on something I talk about sometimes: the kind of data deluge that we experience these days, and the notion of feeling like you're drowning in information but really lacking any kind of insight. And one of the things that I like to do is to actually step back from the data questions, the infrastructure questions, all of these technical questions that can seem very challenging to navigate, and first ask ourselves: what problems am I trying to solve? It's really no different than any other type of decision you might make in an organization: what are my biggest pain points? What keeps me up at night? What would just transform the way my business works? Those are the problems worth solving. And then the next question becomes: if I had more data, if I had a better understanding of something about my business, or about my customers, or about the world in which we all operate, would that really move the needle for me? If the answer is yes, then that starts to give you a picture of what you might be able to do with AI, and it starts to tell you which of those data management challenges, whether it be cleaning the data, organizing the data, or building models on the data, are worth solving. Because you're right: those are going to be time-intensive, labor-intensive, highly iterative efforts. But if you know why you're doing it, then you will have a better understanding of why it's worth the effort, and also which shortcuts you can take and which ones you can't, because often, in order to see the end state, you might want to do a really quick experiment or prototype. And so you want to know what matters and what doesn't, at least for that "is this going to work at all" stage.

>> So you're not buying the age-old adage that you just throw a bunch of data in a data lake and the answers will just spring up, just come right back out of the wall. I mean, you bring up such a good point: it's all about asking the right questions, and thinking about asking questions. So again, when you talk to people about helping them think about the questions, because then you've got to shape the data to the question, and then you've got to start to build the algorithm to answer that question: how should people think when they're actually building and training algorithms? What are some of the typical pitfalls that people who haven't really thought about it before fall into, and how should people frame this process? Because it's not simple and it's not easy, and you really don't know that you have the answer until you run multiple iterations and compare it against some other type of reference.

>> Well, one of the things that I like to do, just so that you're thinking about all the challenges you're going to face up front (you don't necessarily need to solve all of these problems at the outset, but I think it's important to identify them), is to think about AI solutions, as they get deployed, as being part of a kind of workflow, and the workflow has multiple stages associated with it. The first stage is generating your data, and then starting to prepare and explore your data, and then building models on your data. But where I think we don't always think it through is the next two phases. One is deploying whatever model or AI solution you've developed, and what that will really take, especially in the ecosystem where it's going to live?
Is it going to live in a secure and compliant ecosystem? Is it actually going to live in an outdoor ecosystem, since we're seeing more applications on the edge? And then, finally, who's going to use it, and how are they going to drive value from it? Because it could be that your AI solution doesn't work simply because you don't have the right dashboard, the one that highlights and visualizes the data for the decision maker who will benefit from it. So I think it's important to think through all of these stages up front, and to think through what some of the biggest challenges are that you might encounter along the way, so that you're prepared when you meet them, and you can refine and iterate along the way, and even, up front, tweak the question you're asking.
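(The stages Arti enumerates, generate, prepare and explore, model, deploy, and consume, map onto even a toy pipeline. The sketch below is a generic, hypothetical illustration using synthetic data and scikit-learn, with a saved artifact standing in for a real deployment; it is not any specific HPE or Cray tooling.)

    # Toy end-to-end pass through the workflow stages described above.
    import joblib
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # 1. Generate: synthetic "sensor readings" with a known signal.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2_000, 8))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

    # 2. Prepare / explore: hold data out for an honest evaluation.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

    # 3. Model: fit a simple baseline before anything fancier.
    model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
    print("holdout accuracy:", accuracy_score(y_te, model.predict(X_te)))

    # 4. Deploy: persist the artifact for whatever serving layer exists.
    joblib.dump(model, "model.joblib")

    # 5. Consume: the stage most often skipped in planning -- a decision
    #    maker needs surfaced, interpretable output, not a file on disk.
    scores = joblib.load("model.joblib").predict_proba(X_te[:5])[:, 1]
    print("example scores for a dashboard:", np.round(scores, 3))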
>> That's great. So I want to get your take on Exascale Day, which we're celebrating, something very specific, on 10/18. Share your thoughts on Exascale Day specifically, but also more generally, just in terms of being a data scientist and suddenly having all this massive compute power at your disposal. You've been around for a while, so you've seen the development of the cloud, these huge data sets, and really the ability to put so much compute horsepower against the problems, as the costs of networking and storage and compute asymptotically approach zero. I mean, as a data scientist, you've got to be pretty excited about new mysteries, new adventures, new places to go, things you just couldn't do 10 years ago, five years ago, 15 years ago.

>> Yeah, I think only time will tell exactly all of the things that we'll be able to unlock with these new, massive computing capabilities that we're going to have. But a couple of things I'm very excited about: in addition to these very large investments in Exascale supercomputers, we're also seeing investment in other types of scientific instruments. And when I say scientific, it's not just academic research; it's driving pharmaceutical drug discovery, because we're talking about these so-called light sources, which shoot x-rays at molecules and allow you to really understand the structure of the molecules. Historically, you would go take your molecule to one of these light sources, you'd shoot your x-rays at it, and you would generate just masses and masses of data, terabytes of data with each shot. And being able to then understand what you were looking at was a long process: getting computing time and analyzing the data. What Exascale allows you to do is that analysis, if not in real time, then much closer to real time; we're on the precipice of that. And I don't really know what happens if, instead of coming up with a few molecules, taking them, studying them, and then saying maybe I need to do something different, I can do it while I'm still running my instrument. I think that's very exciting from the perspective of someone who's got a scientific background and likes using large data sets. There's just a lot of possibility in what Exascale computing allows us to do, from the standpoint that I don't have to wait to get results, and I can either simulate much bigger things, say galaxies, and really compare that to my data on galaxies or universes, if you're an astrophysicist, or I can simulate much smaller, finer details of a hypothetical molecule and use that to predict what might be possible from a materials or drug perspective, just to name two applications that I think Exascale could really drive.

>> That's really great feedback, just to shorten that compute loop. We had an interview earlier where someone was talking about when the biggest workload you had to worry about was the end of the month, when you're running your financials. And I was like, wouldn't it be nice for that to be the biggest job we have to worry about? But now, I think we saw some of this in animation, in the movie business: the rendering, whether it's a full animation movie or just something with heavy-duty 3D effects. When you can get those dailies back to the artist, as you said, while you're still working, or closer to when you're working, versus having this huge compute delay, it just changes the workflow dramatically, and the pace of change and the pace of output, because you're not context-switching as much and you can really get back into it. That's a super point. I want to shift gears a little bit and talk about explainable AI. So this is a concept that a lot of people, hopefully, are familiar with. So AI: you build the algorithm, it's in a box, it runs, and it kicks out an answer. And one of the things that people talk about is that we should be able to go in and pull that algorithm apart to know why it came out with the answer that it did. To me, this just sounds really, really hard, because it's smart people like you that are writing the algorithms; the inputs and the data that feed that thing are super complex; the math behind it is very complex. And we know that the AI trains and can change over time: as you train the algorithm, it gets more data, it adjusts itself. So is explainable AI even possible? Is it possible to some degree? Because I do think it's important, and my next question is going to be about ethics, to know why something came out. And the other piece that becomes so much more important is that we use that output not only to drive a human-based decision that needs some more information, but increasingly we're moving it over to automation. So now you really want to know why it did what it did. Explainable AI: share your thoughts.

>> It's a great question, and it's obviously a question that's on a lot of people's minds these days. I'm actually going to revert back to what I said earlier, when I talked about Bernoulli's principle: sometimes when you throw an algorithm at data, the first thing it will find is probably some known law of physics. And so I think that really thinking about what we mean by explainable AI also requires us to think about what we mean by AI. These days, AI is often used synonymously with deep learning, which is a particular type of algorithm that is not very analytical at its core. And what I mean by that is that other types of statistical machine learning models have some underlying theory of the population of data that you're studying, whereas deep learning doesn't; it kind of just learns whatever pattern is sitting in front of it.
And so there is a sense in which, if you look at other types of algorithms, they are inherently explainable, because you're choosing your algorithm based on what you think the ground truth is about the population you're studying. Are we going to get to explainable deep learning? I think that's challenging, because you're always going to be in a position where deep learning is designed to be as flexible as possible, to sort of throw more math at the problem, because there may be things that your simpler model doesn't account for. However, deep learning could be part of an explainable AI solution if, for example, it helps you identify the important, so-called features to look at: the important aspects of your data. So I don't know; it depends on what you mean by AI. But are you ever going to get to the point where you don't need humans interpreting outputs, and making some set of judgments about what a set of computer algorithms processing the data concluded? I don't want to say I know what's going to happen 50 years from now, but I think it'll take a little while to get to the point where you don't have to apply some subject matter understanding and some human judgment to what an algorithm is putting out.

>> It's really interesting. We had Dr. Robert Gates on a few years ago at another show, and he talked about how the only guns in the U.S. military, if I'm getting this right, that are automatic, that will go based on what the computer tells them to do and start shooting, are on the Korean border. But short of that, there's always a person involved before anybody hits a button. Which begs a question, because we've seen this on the big data kind of curve, I think Gartner has talked about it, as we move up from descriptive analytics, to diagnostic analytics, predictive, then prescriptive, and then hopefully autonomous. So I wonder, you're saying we're still a little ways out, and that last little bump is going to be tough to overcome to get to true autonomy?

>> I think so, and, you know, it's going to be very application-dependent as well. So it's an interesting example to use, the DMZ, because that is obviously also a very mission-critical example, I would say. But in general, I think that you'll see autonomy, and you already do see autonomy, in certain places where, I would say, the stakes are lower. If I'm going to have some kind of recommendation engine that suggests, if you liked that sweater, maybe you'll like this one, the risk of getting that wrong, and so of fully automating it, is a little bit lower, because the risk is that you don't buy the sweater: I lose a little bit of income, a little bit of revenue, as a retailer. But the risk of "do I make that turn," because I'm in an autonomous vehicle, is much higher. So I think that you will see the progression up that curve being highly dependent on what's at stake, with different degrees of automation. That being said, in certain places where it's either really expensive or humans aren't doing a great job, you may actually start to see some mission-critical automation; but those would be the places where you're seeing it. And actually, I think that's one of the reasons why you see a lot more autonomy in the agriculture space than you do in the passenger vehicle space: because there's a lot at stake, and it's very difficult for human beings to drive large combines.
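(The stakes-dependent progression Arti describes often shows up in systems as a simple gate: low-stakes, high-confidence recommendations execute automatically, and everything else escalates to a person. A hypothetical sketch of that human-in-the-loop pattern; the thresholds and categories here are invented for illustration.)

    # Hypothetical human-in-the-loop gate: automate only where both the
    # model's confidence is high and the cost of being wrong is low.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        action: str
        confidence: float   # model confidence, 0..1
        stakes: str         # "low" (suggest a sweater) or "high" (make the turn)

    def dispatch(rec: Recommendation) -> str:
        if rec.stakes == "low" and rec.confidence >= 0.80:
            return f"AUTO: executing '{rec.action}'"
        return (f"HUMAN: review '{rec.action}' "
                f"(stakes={rec.stakes}, confidence={rec.confidence:.2f})")

    print(dispatch(Recommendation("suggest similar sweater", 0.91, "low")))
    print(dispatch(Recommendation("unprotected left turn", 0.97, "high")))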
>> Plus, they have a controlled environment. So I've interviewed Caterpillar; they're doing a ton of stuff with autonomy, because they control the field where those things are operating, and whether it's a field or a mine, it's actually fascinating how far they've come with autonomy. But let me switch to a different industry that I know is closer to your heart, from looking at some of your other interviews, and let's talk about diagnosing disease. Take something specific, like reviewing x-rays, where the computer, and it also brings in the whole computer vision piece, bringing in computer vision algorithms, can see things probably faster, or do a lot more comparisons, than a human doctor potentially can; and hopefully, in this whole signal-to-noise conversation, elevate the signal for the doctor to review and suppress the noise that's really not worth their time. It can also review a lot of literature, and hopefully bring a broader perspective of potential diagnoses to a set of symptoms. You said before that both of your folks are physicians, and there's a certain kind of magic, a nuance, almost a more childlike exploration, that you try to get out of the algorithm, if you will, to think outside the box. I wonder if you can share that synergy between using computers and AI and machine learning to do really arduous, nasty things, like going through lots and lots and lots of x-rays, and how that helps a doctor who's got a whole different kind of experience, a whole different kind of empathy, a whole different type of relationship with that patient than just a bunch of pictures of their heart or their lungs.

>> I think that one of the things, and this goes back to the question of AI for decision support versus automation, is that what AI can do, and what we're pretty good at these days with computer vision, is picking up on subtle patterns, especially if you have a very large data set. So if I can train on lots of pictures of lungs, it's a lot easier for me to identify the pictures that, somehow, are not like the other ones. And that can be helpful. But then to really interpret what you're seeing and understand it: is it actually a bad-quality image? Is it some kind of medical issue? And what is the medical issue? That's where you need to bring in a lot of different types of knowledge and a lot of different pieces of information, and right now, I think humans are a little bit better at doing that. Some of that's because I don't think we have great ways to train on sparse data sets, I guess. And the second part is that a human being might be 40 or 50 years of training a model, as opposed to six months or so with sparse information. That's another thing human beings have: their lived experience. The data that they bring to bear on any type of prediction or classification is actually more than just what they saw in their medical training. It might be the people they've met, the places they've lived, what have you. And I think it's that part, that broader set of learning, and how things that might not seem related might actually be related to your understanding of what you're looking at, where I think we've got a ways to go from an artificial intelligence perspective.
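(Flagging the images that are "not like the other ones" is, at heart, anomaly detection for human review. A minimal, hypothetical sketch follows; the random vectors stand in for feature embeddings that would come from real imaging studies.)

    # Illustrative anomaly detection: surface atypical studies for a
    # physician to review. Features are synthetic stand-ins for
    # embeddings extracted from real x-rays by a vision model.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(2)
    typical = rng.normal(0.0, 1.0, size=(990, 64))
    atypical = rng.normal(4.0, 1.0, size=(10, 64))
    features = np.vstack([typical, atypical])

    detector = IsolationForest(contamination=0.01, random_state=2).fit(features)
    flags = detector.predict(features)          # -1 marks an anomaly

    # The model narrows the haystack; the human supplies the judgment.
    print("flagged for review:", np.where(flags == -1)[0])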
But let's shift gears a little bit. I know you're interested in emerging technology to support this effort, and there's so much going on in terms of, kind of the atomization of compute store and networking to be able to break it down into smaller, smaller pieces, so that you can really scale the amount of horsepower that you need to apply to a problem, to very big or to very small. Obviously the stuff that you work is more big than small. Work on GPU a lot of activity there. So I wonder if you could share, some of the emerging technologies that you're excited about to bring again more tools to the task. >> I mean, one of the areas I personally spend a lot of my time exploring are, I guess this word gets used a lot, the Cambrian  explosion of new AI accelerators. New types of chips that are really designed for different types of AI workloads. And as you sort of talked about going down, and it's almost in a way where we were sort of going back and looking at these large systems, but then exploring each little component on them, and trying to really optimize that or understand how that component contributes to the overall performance of the whole. And I think one of the things that just, I don't even know there's probably close to a hundred active vendors in the space of developing new processors, and new types of computer chips. I think one of the things that that points to is, we're moving in the direction of generally infrastructure heterogeneity. So it used to be when you built a system you probably had one type of processor, or you probably had a pretty uniform fabric across your system you usually had, I think maybe storage we started to get tearing a little bit earlier. But now I think that what we're going to see, and we're already starting to see it with Exascale systems where you've got GPUs and CPUs on the same blades, is we're starting to see as the workloads that are running at large scales are becoming more complicated. Maybe I'm doing some simulation and then I'm running I'm training some kind of AI model, and then I'm inferring it on some other type, some other output of the simulation. I need to have the ability to do a lot of different things, and do them in at a very advanced level. Which means I need very specialized technology to do it. And I think it's an exciting time. And I think we're going to test, we're going to break a lot of things. I probably shouldn't say that in this interview, but I'm hopeful that we're going to break some stuff. We're going to push all these systems to the limit, and find out where we actually need to push a little harder. And I some of the areas I think that we're going to see that, is there We're going to want to move data, and move data off of scientific instruments, into computing, into memory, into a lot of different places. And I'm really excited to see how it plays out, and what you can do and where the limits are of what you can do with the new systems. >> Arti I could talk to you all day. I love the experience and the perspective, cause you've been doing this for a long time. So I'm going to give you the final word before we sign out and really bring it back, to a more human thing which is ethics. So one of the conversations we hear all the time, is that if you are going to do something, if you're going to put together a project and you justify that project, and then you go and you collect the data and you run that algorithm and you do that project. 
That's great, but there's an inherent problem with data collection that may be used for something else down the road, something that maybe you don't even anticipate. So I just wonder if you can share a top-level ethical take on how data scientists specifically, and then, ultimately, business practitioners and other people who don't carry that title, need to be thinking about ethics, and not just forget about it. I had a great interview with Paul Doherty: everybody's data is not just their data. It represents a person; it's a representation of what they do and how they live. So when you think about entering into a project and getting started, what do you think about in terms of the ethical considerations, and how should people be cautious that they don't go places they probably shouldn't go?

>> I think that's a great question without a short answer. I honestly don't know that we have great solutions right now, but I think the best we can do is take a very multifaceted, and also vigilant, approach to it. So when you're collecting data, and we should remember that often a lot of the data that gets used wasn't necessarily collected for the purpose it's being used for, because we might be looking at old medical records, or old transactional records of any kind, whether from a government or a business: as you start to collect data or build solutions, try to think through who all the people are who might use it, and what the possible ways are in which it could be misused. And I also encourage people to think backwards: what were the biases in place when the data were collected? You see this a lot in the criminal justice space, where the historical records reflect historical biases in our systems. There are limits to how much you can correct for previous biases, and there are some ways to do it, but you can't do it if you're not thinking about it. So at the outset of developing solutions, that's important. But I think equally important is putting in the systems to maintain the vigilance around it. So, one, don't move to autonomy before you know what potential new errors or new biases you might introduce into the world. And also have systems in place to constantly ask these questions: am I perpetuating things I don't want to perpetuate? How can I correct for them? And be willing to scrap your system and start from scratch if you need to.
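(Arti's "systems in place to constantly ask these questions" can be made concrete with even a crude monitoring check. The sketch below is one hypothetical vigilance test: comparing false-positive rates across groups in an audit log and failing loudly when they diverge. The data and tolerance are invented for illustration.)

    # Hypothetical bias vigilance check: compare false-positive rates
    # across groups and alert when the gap exceeds a tolerance.
    import numpy as np

    def false_positive_rate(y_true, y_pred):
        negatives = y_true == 0
        return (y_pred[negatives] == 1).mean()

    def check_disparity(y_true, y_pred, groups, tolerance):
        rates = {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
                 for g in np.unique(groups)}
        gap = max(rates.values()) - min(rates.values())
        assert gap <= tolerance, f"FPR gap {gap:.3f} exceeds tolerance: {rates}"
        return rates

    # Synthetic stand-in for a deployed model's audit log.
    rng = np.random.default_rng(3)
    y_true = rng.integers(0, 2, 1_000)
    y_pred = rng.integers(0, 2, 1_000)
    groups = rng.choice(["a", "b"], size=1_000)
    print(check_disparity(y_true, y_pred, groups, tolerance=0.10))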

Published Date : Oct 16 2020

Redefining Healthcare in the Post COVID 19 Era, New Operating Models


 

>>Hi, everyone. Good afternoon. Thank you for joining this session. I feel honored to be invited to speak here today, and I also appreciate the research summit members for organizing it and giving me this great opportunity. Let me give a quick introduction. First, I'm Takashi from Marubeni America Corporation, and I'm leading technology scouting and global collaboration with digital health companies through business alliances and strategic investments in North America. Since we started to focus on this space in 2016, our team has been growing, and in order to bring more new technologies and services to the Japan market, this year we founded a new service for the digital health business, especially in the medical diagnosis space in Japan. Today I would like to talk about how health care has been transformed, from my own perspective, and I hope you enjoy listening. So, what's happened since the US identified the first case in the middle of January? As everyone knows, unfortunately, the damage from this pandemic was unequal amongst the people in the US. It had a more detrimental impact on those who are socially and economically vulnerable, because of long-lasting structural problems of U.S. society, as the chart of daily case rates by county shows. Even in the same community, the infection rate of the low-income population was 4.5 times higher than that of the high-income population, and due to the restraints of COVID, about 14 million people are unemployed. The unique point of the U.S. is that more than 60% of insurance is tied to employment, so losing a job can mean losing access to health care. And the point here is that COVID did not create the healthcare disparity, but merely highlighted the underlying problem and the necessity of affordable care for all. When the country needed to increase testing capacity and geographic outreach, the pharmacies and retailers joined forces with existing stakeholders. More than 90% of the U.S. population lives within five miles of a community pharmacy such as CVS and Walgreens, so they can technically provide the test to everyone in every community. They also have a huge workforce of pharmacists who are eligible to perform testing at scale, and this really made their potential in community-based health care stand out. And virtual health has provided an alternative way for people to access health care at affordable prices under the unusual setting where social distancing was required and people had a fear of infection, so they were afraid to take public transportation and visit the doctor. The same thing applied to doctors, and the chart here is the number of total visit claims by service type after the stay-at-home orders were issued across the U.S. By early April, patient physical visits to doctors' offices or clinics had declined by around 70%. On the other hand, the share of telehealth accounted for 25% of the total doctor's visits in April. While many states have started to reopen and face-to-face visits are gradually recovering, overall telehealth services did not offset the decline in physical doctor's visits, and telehealth can never fully replace in-person care. However, telehealth has established a new way to provide affordable care, especially to vulnerable people, and I won't explain each player today.
But as an example, the chart shows the significant growth of Teladoc, which is one of the largest virtual care and telehealth providers. I believe there are three factors in Teladoc's success under the pandemic. First, obviously, Teladoc could bridge the gap between those patients and doctors; the majority of the patients who needed to see doctors were those who have underlying health conditions and are at high risk for coronavirus. Second, they showed their business model is highly scalable: in the first quarter of this year, they moved quickly to expand their physician network to increase their capacity and catch up with growing demand. To some extent, they also contributed to creating flexible jobs for the doctors who suffered from fewer appointments and surgeries. They utilized their algorithms to maximize efficiency for the doctors, and in doing so they have maintained high-quality care at affordable prices. And at the same time, the government recognized the value of virtual care and deregulated traditional rules. To sum up: CMS temporarily authorized payment for a wide range of telehealth services, including hospital visits; HHS temporarily waived HIPAA penalties for telehealth cases; and the changes allowed providers to use communication tools such as FaceTime and Messenger during their appointments. At the start of August, the government issued a new executive order to expand telehealth services beyond the pandemic. So the government is also moving to support virtual health care. That was a quick review of the health care challenges and some advancements during the pandemic. But as you understand, since those challenges were not caused by the pandemic, the problems will remain, and the events of this year will continuously catalyze the transformation. So how is healthcare being reshaped, and where will we go? The topics from here can also be applied to the Japan market. I believe democratization and decentralization of healthcare are more important than ever. So what does that mean? Traditional healthcare was defined in a framework of a patient and a doctor. But in the new normal, the range of beneficiaries will be expanded from patients to all citizens, including the country's uninsured people, thanks to the technology evolution; you can download health management apps for free on the iTunes store, and the range of digital health services enables everyone to participate in the new health system. In this slide, I put three essential elements to fully realize democratization and decentralization of health care: health literacy; data sharing; and security, privacy, and safety. In addition, technology is put at the bottom as a foundation of the three points. First, health literacy is obviously important, because if people don't understand how the system works, what options are available to them, or what the pros and cons of each option are, they cannot navigate themselves and utilize the service; it can even cause a different disparity issue. Second, data must be technically free to transfer while it keeps interoperability. More options are becoming available to patients, but if data cannot be shared among stakeholders, including patients, hospitals, insurers, and virtual care providers, patient data will be fragmented, and people cannot continue to get the care they benefited from under the current centralized care system. And this is the most challenging part.
But the last one is the security aspect. More players will be involved in decentralized health care outside of the conventional healthcare system, so obviously both the number of healthcare channels and the frequency of data sharing will increase. This creates a higher data vulnerability, so under the new health care framework we need to ensure patient privacy and safety, and also re-examine the guidelines for sharing patient data. And of course, COVID was a strong catalyst of this shift. But what are the drivers, from macro and micro perspectives? From the macro side, the challenges in the healthcare system have been widely recognized for decades, and now they are a big pain. The pandemic reminded us of the key values behind our current pain points, as the left chart shows. Those are: increasing population health, sustainability for doctors and other social systems, and value-based care for better and more affordable care. All the elements are co-dependent on each other. The right chart explains that providing preventive care and early intervention is the best way to meet the key values here. Similarly, the direction of community-based care and virtual care is in line with these three values, and they act to maximize the number of beneficiaries. From a micro perspective, initiatives by nonconventional players are a big driver, and both CVS and Walmart have been actively engaged in healthcare businesses for many years. CVS has the largest walk-in clinic network, called MinuteClinic, at over 1,100 locations, and Walmart also has 20 primary clinics. I won't go into detail on them, but the most interesting thing about their recent innovations, I believe, is that they have adjusted and expanded their focus from primary care to community health centers, to address every customer's needs. CVS plans to provide affordable preventive health and chronic health monitoring services at 1,500 CVS HealthHUBs, which they are now setting up. And along a similar line, Walmart is deploying Walmart Health Centers where, utilizing tech-driven solutions, they provide an affordable one-stop service for core healthcare regardless of insurance status. For example, more than 40% of the people in the U.S. visit Walmart every week, so by leveraging the huge customer base and physical locations, both companies are leading the decentralization of health care. Consumer device companies such as Apple and Fitbit have also helped in transforming healthcare, in two ways. First, they are blurring the boundaries between traditional healthcare and consumer products through their long development of wearable healthcare devices. And second, they have acted as the best healthcare educators to consumers and increased people's healthcare awareness, taking an important role in the enhancement of health literacy and healthcare democratization. Based on the story so far, I'd like to touch on a business concept that can be applied to both Japan and the U.S., and one expected change: the emergence of the data integration platform for telehealth. The healthcare data volume has increased 15 times over the last seven years and will continuously increase, so we have a chance to improve health care by harnessing the data. Meaning, a new system which unifies each patient's data from multiple data sources and creates a 360-degree longitudinal view of each individual; it then synthesizes the unified data to gain additional insights from structured data, enabling personalized care. Finally, it aggregates each individual's data and reanalyzes it to provide insight for population health. This is one specific model I envision.
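A hedged sketch of the unification step this model implies (toy data and invented column names, not the speaker's actual system): merge per-patient records from several sources into one longitudinal timeline, then re-aggregate the individual data for a population-level view.

```python
# Hedged sketch (toy data, invented column names): unify multi-source patient
# records into one longitudinal view, then re-aggregate across individuals.
import pandas as pd

clinic = pd.DataFrame({"patient_id": [1, 2], "date": ["2020-03-01", "2020-04-12"],
                       "blood_pressure": [128, 141]})
pharmacy = pd.DataFrame({"patient_id": [1, 2], "date": ["2020-03-05", "2020-04-15"],
                         "medication": ["lisinopril", "metformin"]})
wearable = pd.DataFrame({"patient_id": [1, 1, 2], "date": ["2020-03-02", "2020-03-03", "2020-04-13"],
                         "steps": [4200, 6100, 2800]})

# 360-degree longitudinal view: one merged timeline of events per individual.
events = pd.concat([clinic.assign(source="clinic"),
                    pharmacy.assign(source="pharmacy"),
                    wearable.assign(source="wearable")],
                   ignore_index=True).sort_values(["patient_id", "date"])

# Population health: re-aggregate the unified individual data.
print(events)
print(events.groupby("patient_id")["steps"].mean())  # e.g. average activity per patient
```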
Health care will be provided online or offline, at the hospital or the retail store. In order to amplify the impact of health care, the role of the mediator between hospital and citizen will become more important. They can be pharmacies, telehealth providers, or virtual care providers. They provide a wide range of fundamental care and medication instruction and management, and they also help individuals manage their health care data. I will not explain the details today, but Japan has similar challenges in health care, such as increasing healthcare expenditure and a lack of doctors and caregivers. For example, people in Japan have physical physician visits more than 20 times a year on average, while those in the U.S. only do four times. It sounds like a joke, but people say that because the elderly are healthy, they visit hospitals to see friends. So we need to utilize these mediators to reduce cost while maintaining a social place for citizens. In Japan, the government has promoted the use of family pharmacists and primary doctors, and views the community-based medical system as a policy. There was a revision of dispensing fees in Japan this year to shift the core role of pharmacists to the new role of health management service providers. And so I believe we will see change in those spaces, not only in the U.S. but also in Japan. We went through such unprecedented times, but I believe the result has been accelerating our healthcare transformation and creating new business innovation. And this brings me to the end of my presentation. Thank you for your attention, and I hope you could find something useful for your business. If you have any questions or comments, please feel free to contact me.

Published Date : Sep 24 2020

SEAGATE AI FINAL


 

>>Seagate Technology is focused on data, where we have long believed that data is in our DNA. We help maximize humanity's potential by delivering world-class, precision-engineered data solutions developed through sustainable and profitable partnerships. Included in our offerings are hard disk drives. As I'm sure many of you know, a hard drive consists of a slider, also known as a drive head or transducer, attached to a head gimbal assembly; a head stack assembly made up of multiple head gimbal assemblies; and a drive enclosure with one or more platters that the head stack assembles into. And while the concept hasn't changed, hard drive technology has progressed well beyond the initial five-megabyte, five-and-a-quarter-inch drives that Seagate first produced in, I think, 1983. We have just announced an 18-terabyte 3.5-inch drive with nine platters and a single head stack assembly, with dual head stack assemblies coming this calendar year. The complexity of these drives furthers the need to incorporate edge analytics at operation sites. W. Edwards Deming established the concept of continual improvement in everything that we do, especially in product development and operations. At the end of World War Two, he embarked on a mission, with support from the US government, to help Japan recover from its wartime losses. He taught the concept of continual improvement and statistical process control to the leaders of prominent organizations within Japan, and because of this he was honored by the Japanese emperor with the Second Order of the Sacred Treasure for his teachings, the only non-Japanese to receive this honor in hundreds of years. Japan's quality control is now world famous, as many of you may know, and based on my own experience in product development, it is clear that he made a major impact on Japan's recovery after the war. At Seagate, adopting new technologies with continual improvement has been our mantra. As part of this effort, we embarked on the adoption of new technologies in our global operations, which includes establishing machine learning and artificial intelligence at the edge, and in doing so, we continue to advance our technical capabilities within data science and data engineering. >>So, I'm a principal engineer and a member of the Operations and Technology Advanced Analytics Group. We are a service organization for those organizations who need to make sense of the data that they have, and in doing so, perhaps introduce a different way to create and analyze new data. Making sense of the data that organizations have is a key aspect of the work that data scientists and engineers do. I'm the project manager for an initiative adopting artificial intelligence methodologies for Seagate manufacturing, which is the reason why I'm talking to you today. I thought I'd start by first talking about what we do at Seagate and follow that with a brief on artificial intelligence and its role in manufacturing. I'd then like to discuss how AI and machine learning are being used at Seagate in developing edge analytics, where Docker Enterprise and Kubernetes automate deployment, scaling, and management of containerized applications. Finally, I'd like to discuss where we are headed with this initiative and where Mirantis has a major role. In case some of you are not conversant in machine learning and artificial intelligence, let me offer some definitions.
To cite one source, machine learning is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead; it is thus seen as a subset of narrow artificial intelligence, where analytics and decision making take place. The intent of machine learning is to use basic algorithms to perform different functions, such as classifying images by type, classifying emails into spam and not spam, and predicting the weather. The idea, and this is where the concept of narrow artificial intelligence comes in, is to make decisions of a preset type: basically, let a machine learn from itself. The types of machine learning include supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the system learns from previous examples that are provided, such as images of dogs that are labeled by type. In unsupervised learning, the algorithms are left to themselves to find answers; for example, a series of images of dogs can be used to group them into categories by association: coat color, length of coat, length of snout, and so on.
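A minimal sketch of those two modes, using scikit-learn on toy feature vectors (the data and feature meanings are invented for illustration and are not Seagate's code):

```python
# Minimal sketch (toy data): supervised learning fits labeled examples,
# while unsupervised learning groups unlabeled ones by association.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 3))        # e.g. coat color, coat length, snout length
labels = (features[:, 0] > 0).astype(int)   # known dog types, the supervised targets

# Supervised: learn from previously labeled examples, then classify new ones.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
print("predicted type:", clf.predict(features[:1]))

# Unsupervised: no labels at all; group similar items into clusters.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster assignments:", groups[:10])
```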
In the last slide, I mentioned narrow AI a few times, and to explain it, it is common to describe AI in terms of two categories: general, and narrow or weak. Many of us were first exposed to general AI in popular science fiction movies like 2001: A Space Odyssey and Terminator. General AI is AI that can successfully perform any intellectual task that a human can, and if you ask Elon Musk or Stephen Hawking, this is how they view the future with general AI if we're not careful about how it is implemented. So most of us hope that it is friendly and helpful, like WALL-E. The reality is that machines today are only capable of weak or narrow AI: AI that is focused on a narrow, specific task like understanding speech or finding objects in images. Alexa and Google Home are becoming very popular, and they can be found in many homes. Their narrow task is to recognize human speech and answer limited questions or perform simple tasks, like raising the temperature in your home or ordering a pizza, as long as you have already defined the order. Narrow AI is also very useful for recognizing objects in images, and even counting people as they go in and out of stores, as you can see in this example. So artificial intelligence applies machine learning, analytics, inference, and other techniques, which can be used to solve actual problems. The two examples here, particle detection and image anomaly detection, have the potential to adopt edge analytics during the manufacturing process. A common problem in clean rooms is spikes in particle counts from particle detectors. With this application, we can provide context to particle events by monitoring the area around the machine and detecting when foreign objects like gloves enter areas where they should not. Image anomaly detection historically has been accomplished at Seagate by operators in clean rooms viewing each image, one at a time, for anomalies. Creating models of various anomalies through machine learning methodologies can instead be used to run comparative analyses in a production environment, where outliers can be detected through inference in an automated, real-time analytics scenario. Anomaly detection is also frequently used in machine learning to find patterns or unusual events in our data. How do you know what you don't know? It's really about what you ask, and the first step in anomaly detection is to use an algorithm to find patterns or relationships in your data. In this case, we're looking at hundreds of variables and finding relationships between them. We can then look at a subset of variables and determine how they are behaving in relation to each other. We use this baseline to define normal behavior and generate a model of it; in this case, we're building a model with three variables. We can then run this model against new data. Observations that do not fit the model are defined as anomalies, and anomalies can be good or bad. It takes a subject matter expert to determine how to classify the anomalies, and the classification could be scrap or okay-to-use, for example; the subject matter expert is assisting the machine in learning the rules. We then update the model with the classified anomalies and start running again, and there are a few ways to generate these models.
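As an illustration of that workflow, here is a hedged sketch with synthetic data (Seagate's production models are not shown): fit a baseline model of normal behavior on three related variables, then flag new observations that do not fit it for subject-matter-expert review.

```python
# Hedged sketch (synthetic data): baseline "normal" model on three variables,
# then inference on new data; -1 means the observation does not fit the model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
baseline = rng.normal(size=(5000, 3))       # historical, in-spec readings

model = IsolationForest(contamination=0.01, random_state=1).fit(baseline)

new_data = np.vstack([rng.normal(size=(98, 3)),
                      [[6.0, -5.0, 7.0], [8.0, 8.0, -9.0]]])  # two planted outliers

flags = model.predict(new_data)             # -1 = anomaly, 1 = fits the baseline
anomalies = new_data[flags == -1]
print(f"{len(anomalies)} observations flagged for subject-matter-expert review")
```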
Now, Seagate factories generate hundreds of thousands of images every day. Many of these require a human to look at them and make a decision. This is dull and mistake-prone work that is ideal for artificial intelligence. The initiative that I am project-managing is intended to offer a solution that matches the continually increasing complexity of the products we manufacture and minimizes the need for manual inspection. The Edge RX smart manufacturing reference architecture is the initiative both Hamid and I are working on, and sorry to say that Hamid isn't here today. But as you may have guessed, our goal is to introduce early defect detection in every stage of our manufacturing process through machine learning and real-time analytics through inference. In doing so, we will improve overall product quality, enjoy higher yields with fewer defects, and produce higher margins. Because this was entirely new, we established partnerships with HPE, with NVIDIA, and with Docker and Mirantis two years ago to develop the capability that we now have, as we deploy Edge RX to our operation sites on four continents. On the hardware side, HPE and NVIDIA have been able partners in helping us develop an architecture that we have standardized on, and on the software stack side, Docker has been instrumental in helping us manage a very complex project with a steep learning curve for all concerned. To further clarify our efforts to enable more AI and ML in factories: the objective was to determine an economical edge compute that would access the latest AI/ML technology using a standardized platform across all factories. This objective included providing an upgrade path that scales while minimizing disruption to existing factory systems and the burden on factory information systems resources. The two parts of the compute solution are shown in the diagram. The gateway device connects to Seagate's existing factory information systems architecture and does inference calculations. The second part is a training device for creating and updating models. All factories will need the gateway device and the compute cluster on site, and to this day it remains to be seen whether the training device is needed in other locations; we do know that one device is capable of supporting multiple factories simultaneously, and there are also options for training on cloud-based resources. The streaming and storage appliance consists of a Kubernetes cluster with GPU and CPU worker nodes, as well as master nodes and Docker Trusted Registries. The GPU nodes are hardware-based, using HPE Edgeline EL4000s; the balance are virtual machines. For machine learning, we've standardized on both the HPE Apollo 6500 and the NVIDIA DGX-1, each with eight NVIDIA V100 GPUs. Incidentally, the same technology enables augmented and virtual reality. Hardware is only one part of the equation; our software stack consists of Docker Enterprise and Kubernetes. As I mentioned previously, we've deployed these clusters at all of our operations sites, with specific use cases planned for each site. Mirantis has had a major impact on our ability to develop this capability by offering a stable platform and a Universal Control Plane that provides us with the necessary metrics to determine the health of the Kubernetes cluster, plus the use of Docker Trusted Registry to maintain a secure repository for containers. They have been an exceptional partner in our efforts to deploy clusters at multiple sites. At this point in our deployment efforts we are on-prem, but we are exploring cloud service options that include Mirantis' next-generation Docker Enterprise offering, which includes StackLight in conjunction with multi-cluster management. And to me, the concept of federation, of multi-cluster management, is a requirement in our case because of the global nature of our business, where our operation sites are on four continents; StackLight provides the hooks into each cluster that make multi-cluster management an effective solution. Open source has been a major part of Project Athena, and there was a debate about using Docker CE versus Docker Enterprise. That decision was actually easy, given the advantages that Docker Enterprise would offer, especially during an early phase of development. Kubernetes was a natural addition to the software stack and has been widely accepted, but we have also been at work adopting such open source as RabbitMQ for messaging, TensorFlow, and TensorRT, to name three, and GitLab for development, and a number of others, as you see here. Most of our programming has been in Python. The results of our efforts so far have been excellent. We are seeing a six-month return on investment from just one of seven clusters, where the hardware and software cost approached close to $1 million; the performance on this cluster is now over three million images processed per day. Further adoption has been growing, but the biggest challenge we've seen has been handling a steep learning curve: installing and maintaining complex Kubernetes clusters in data centers that are not used to managing the unique aspects of clusters like this. Because of this, we have been considering adopting a control plane in the cloud, with Kubernetes-as-a-service supported by Mirantis. Even without considering Kubernetes-as-a-service, the concept of federation or multi-cluster management has to be on our road map, especially considering the global nature of our company. Thank you.
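To picture the per-image inference work those GPU worker nodes do, here is a hedged sketch in Python (the model is an untrained stand-in and the batch is fake; the talk's real models and image pipeline are not shown):

```python
# Illustrative sketch only: the shape of a containerized TensorFlow inference
# loop on a GPU worker node. The model here is an untrained stand-in; in
# production a trained model would be pulled from a registry or mounted volume.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # defect probability
])

def score_batch(images: np.ndarray) -> np.ndarray:
    """Return a defect probability per image for a normalized batch."""
    return model.predict(images, verbose=0)

batch = np.zeros((32, 224, 224, 3), dtype=np.float32)  # stand-in for factory images
print("defect probabilities:", score_batch(batch)[:5].ravel())
```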

Published Date : Sep 15 2020


ON DEMAND MIRANTIS OPENSTACK ON K8S FINAL


 

>> Hi, I'm Adrienne Davis, Customer Success Manager on the CFO side of the house at Mirantis. With me today is Artem Andreev, Product Manager and expert, who's going to enlighten us today. >> Hello, everyone. It's great to have all of you listening to our discussion today. My name is Artem Andreev. I'm a Product Manager for the Mirantis OpenStack line of products; that includes the current product line and the next-generation product line that we're about to launch quite soon. And actually, this is going to be the topic of our presentation today. The new product that we are very, very, very excited about, and that is going to be launched in a matter of several weeks, is called Mirantis OpenStack on Kubernetes. For those of you who have been with Mirantis quite a while already, Mirantis OpenStack on Kubernetes is essentially a reincarnation of our Mirantis Cloud Platform version one, as we call it these days. The theme has reincarnated into something more advanced, more robust, and altogether modern that provides the same, if not more, value to our customers, but packaged in a different shape. We're very excited about this new launch, and we would like to share this excitement with you, of course. As you might know, a few months ago Mirantis acquired Docker Enterprise, together with the advanced Kubernetes technology that Docker Enterprise provides, and we made this technology a piece and parcel of our product suite. This naturally includes Mirantis OpenStack on Kubernetes as well, since it is a part of our product suite. The Kubernetes technology in question we call Docker Enterprise Container Cloud these days; I'm going to refer to this name a lot over the course of the presentation. I would like to split today's discussion into several major parts. For those of you who do not know what OpenStack is in general, a quick recap might be helpful to understand the value that it provides. I will discuss why someone still needs OpenStack in 2020. We will talk about what a modern OpenStack distribution is supposed to do, given the expectations that are out there. And of course, we will go into a bit of detail on how exactly Mirantis OpenStack on Kubernetes works and how it helps to deploy and manage OpenStack clouds. >> So set the stage for me here. What's the base environment we're trying to get to? >> So, what is OpenStack? One can think of OpenStack as a free and open source alternative to VMware, and it's a fair comparison. OpenStack, just like VMware, operates primarily on virtual machines. It gives you, as a user, a clean and crisp interface to launch a VM, to configure the virtual networking to plug this VM into, to configure and provision virtual storage to attach to your VM, and to do a lot of other things that a modern application requires to run. The idea behind OpenStack is that you have a clean and crisp API exposed to you as a user, while all the little details and nuances of the physical infrastructure configuration and provisioning that need to happen just for the virtual application to work are hidden, spread across the multiple components that comprise OpenStack. As compared, again, to VMware, the functionality is pretty much similar, but actually OpenStack can do much more than just VMs, and it does that at, frankly speaking, a much lower price, if we do the comparison.
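A hedged sketch of that user experience with the openstacksdk Python client (the cloud name, image, flavor, and network names are placeholders, not anything from this talk): one clean sequence of API calls launches a VM, with the physical plumbing hidden behind the API.

```python
# Hedged sketch (placeholder names): launching a VM through OpenStack's API
# with the openstacksdk client; physical-infrastructure details stay hidden.
import openstack

conn = openstack.connect(cloud="my-private-cloud")   # credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-net")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)        # block until the VM is ACTIVE
print(server.name, server.status)
```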
So, what does OpenStack have to offer? Naturally, virtualization, networking, and storage are all there; that's just the basic, entry-level functionality. But of course, what comes with it are the identity and access management features, a graphical user interface together with the CLI tools to manage the cloud, orchestration functionality to deploy your application in the form of templates, the ability to manage bare metal machines, and some nice and fancy extras like DNS-as-a-Service, Metering, Secret Management, and Load Balancing. And frankly speaking, OpenStack can actually do even more, depending on the needs that you have. >> We hear so much about containers today. Do applications even need VMs anymore? Can't Kubernetes provide all these services? And even if IaaS is still needed, why would one bother with building their own private platform if there's a wide choice of public solutions for virtualization, like Amazon Web Services, Microsoft Azure, and Google Cloud Platform? >> Well, that's a very fair question, and you're absolutely correct. The whole trend (audio blurs): everybody's talking about containers, everybody's doing containers. But to be realistic, yes, the market still needs VMs. There are certain use cases in the modern world, and actually these use cases are quite new, like 5G, where you require high performance in the networking, for example. You might need high-performance computing as well. All of this takes quite special hardware and configuration to be provided within your infrastructure, and that is much more easily solved with VMs, not containers. Not to mention that there are still legacy applications that you need to deal with; they have just switched from server-based provisioning to VM-based provisioning, and they need to run somewhere. They're simply not ready for containers. And if we think, okay, VMs are still needed, then why don't I just go to a public infrastructure-as-a-service provider and run my workloads there? You can do that, but you have to be prepared to pay a lot of money once you start running your workloads at scale; public IaaSes actually tend to hit your pockets heavily. And of course, if you're working in a highly regulated area, like enterprises, government (audio blurs), et cetera, you have to comply with a lot of security regulations and data placement regulations. Public IaaSes, let's be frank, are not good at providing you with this transparency. You need to have full control over your whole stack, starting from the hardware to the very, very top. And this is why private infrastructure-as-a-service is still a theme these days, and I believe that it's going to be a theme for at least five years more, if not longer. >> So if private IaaSes are useful and in demand, why doesn't Mirantis just stick to the OpenStack that we already have? Why did we decide to build a new product rather than keep selling the current one? >> Well, to answer this question, first we need to see what our customers believe a modern infrastructure-as-a-service platform should be able to provide, and we've compiled this list into five criteria.
Naturally, a private IaaS needs to be reliable and robust, meaning that whatever happens underneath the API should not impact the business-generating workloads (this is a must), or should impact them as little as possible. The platform needs to be secure and transparent, going back to the idea of working in highly regulated areas; this is, again, a table stake to enter the enterprise market. The platform needs to be simple to deploy and operate (audio blurs), because you as an operator should not be thinking about the internals, but focusing on enabling your users with the best possible experience. Updates: updates are very important. The platform needs to keep up with the latest software patches, bug fixes, and of course features, and upgrading to a new version must not take weeks or months, and must have as little impact on the running workloads as possible. And of course, to be able to run modern applications, the platform needs to provide a comparable set of services, just as a public cloud does, so that you can move your application across environments, private or public cloud, without having to change it severely; the so-called feature parity needs to be there. Now, if we look at the architecture of OpenStack: we know OpenStack is powerful, it can do a lot, we've just discussed that, right? But the architecture of OpenStack is known to be complex. And well, tell me, how would you enable robustness and reliability in such a complex system? It's not easy, right? And actually, this diagram shows only about a third of a modern OpenStack cloud; it's just a little illustration, not the whole picture. So imagine how hard it is to make a very solid platform out of this architecture. Naturally, this also imposes challenges in providing transparency and security, because the more complex the system is, the harder it is to manage, and the harder it is to see what's on the inside. And upgrades, yeah: one of the biggest challenges that we learned from our previous history is that many of our customers preferred to stay on an older version of OpenStack just because they were afraid of upgrades, seeing them as time-consuming and risky endeavors. Instead of switching to the latest and greatest software, they preferred reliability by sticking to the old stuff. Why? Because an upgrade implied a potential impact on their workloads, and it required thorough planning and execution just to be as riskless as possible. And we are solving all of these challenges of managing a system as complex as OpenStack with Kubernetes. >> So how does Kubernetes solve these problems? >> Well, we look at OpenStack as a typical microservice-architecture application that is organized into multiple little moving parts: daemons that are connected to each other and that talk to each other through standard APIs. Altogether, that feels like a very good fit to run on top of a Kubernetes cluster, because many modern applications follow exactly the same pattern. >> How exactly did you put OpenStack on Kubernetes? >> Well, that's not easy, I'm going to be frank with you. If you look at the architectural diagram, this is the stack of Mirantis products, represented with a focus, of course, on Mirantis OpenStack as the central part. What you see in the middle, shown in pink, is Mirantis OpenStack on Kubernetes itself.
And of course, around that are the supporting components that need to be there to run OpenStack on Kubernetes successfully. At the very bottom there is hardware: networking, storage, and computing hardware that somebody needs to configure, provision, and manage in order to deploy the operating system on top of it. The operating system is just another layer that abstracts Mirantis OpenStack on Kubernetes from the underlay. Once we have the operating system there, a Kubernetes cluster needs to be deployed and managed. And as I mentioned previously, we are using the capabilities that this Kubernetes cluster provides to run the OpenStack control plane itself, because everything in Mirantis OpenStack on Kubernetes is a container, whatever piece you can think of. Naturally, it doesn't sound like an easy task to manage this multi-layered pie, and this is where Docker Enterprise Container Cloud comes into play, because this is our single pane of glass into day-one and day-two operations for the hardware itself, for the operating system, and for Docker Enterprise Kubernetes. It solves the need to have this underlay ready and prepared. And once the underlay is there, you go ahead and deploy Mirantis OpenStack on Kubernetes just as another Kubernetes application, following the same practices and tools as you use with any other application. Naturally, once you have OpenStack up and running, you can use it to give your users the ability to create their own private little Kubernetes clusters inside OpenStack projects. This is one of the major use cases for OpenStack these days: being an underlay for containers. So if you look at the operator experience, what does it look like for a human operator who is responsible for the deployment and management of the cloud to deal with Mirantis OpenStack on Kubernetes? First, you deploy Docker Enterprise Container Cloud, and you use the built-in capabilities that it provides to provision your physical infrastructure: you discover the hardware nodes, you deploy the operating system there, you configure the network interfaces and storage devices, and then you deploy a Kubernetes cluster on top of that. This Kubernetes cluster is going to be dedicated to Mirantis OpenStack on Kubernetes itself. So it's a special-purpose (indistinct), not a general-purpose thing; it is dedicated to OpenStack. Inside this cluster there are a bunch of lifecycle management modules running as Kubernetes operators. OpenStack itself has its own LCM module, or operator. There is a dedicated operator for Ceph, because Ceph is our major storage solution these days that we integrate with. Naturally, there is a dedicated lifecycle management module for StackLight; StackLight is our operations logging, monitoring, and alerting solution for OpenStack on Kubernetes, which we bundle together with the whole product suite. You talk to these Kubernetes operators, directly through the kubectl command or through the graphical interfaces provided by Docker Enterprise Container Cloud, to deploy the OpenStack, Ceph, and StackLight clusters one by one and connect them together. So instead of dealing with hundreds of YAML files, it's five definitions, five specifications, that you're supposed to provide these days, and that's it. And all the day-two management is performed through these same APIs, just as easily as the deployment.
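As a very rough illustration of that declarative workflow, here is a minimal Python sketch of submitting one such specification as a Kubernetes custom resource. The group, version, kind, and spec fields below are invented for illustration; they are not Mirantis' actual CRD schema.

```python
# Hypothetical sketch: applying a declarative OpenStack definition as a
# Kubernetes custom resource. Names are illustrative, not the real product's.
from kubernetes import client, config

config.load_kube_config()  # credentials for the dedicated underlay cluster

# One of the handful of specifications the operator consumes, instead of
# hundreds of hand-maintained YAML files.
openstack_deployment = {
    "apiVersion": "example.openstack.org/v1alpha1",   # assumed group/version
    "kind": "OpenStackDeployment",                    # assumed kind
    "metadata": {"name": "osdpl", "namespace": "openstack"},
    "spec": {
        "openstack_version": "ussuri",
        "size": "small",
        "features": {"neutron": {"tunnel_interface": "ens3"}},
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="example.openstack.org",
    version="v1alpha1",
    namespace="openstack",
    plural="openstackdeployments",
    body=openstack_deployment,
)
# The lifecycle-management operator watches objects of this kind and
# reconciles the OpenStack control plane toward the declared state.
```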
>> All of this assumes that OpenStack is in containers. Now, Mirantis was containerizing back long before Kubernetes even came along. Why did we think this would be important? >> That is true. We've been containerizing OpenStack for quite a while already; it's not a new thing at all. However, it is the way that we deploy OpenStack, as a Kubernetes application, that matters, because Kubernetes solves a whole bunch of challenges that we used to deal with in MCP1, when deploying OpenStack on top of bare operating systems as packages. Naturally, Kubernetes allows us to achieve reliability through the self-healing (audio blurs) and auto-scaling mechanisms. You define a bunch of policies that describe the behavior of the OpenStack control plane, and Kubernetes follows these policies when things happen, without any need for human interaction. The isolation of the dependencies of OpenStack services within Docker images is a good thing, because previously we had to deal with packages and conflicts between the versions of different libraries; now we just ship everything together as a Docker image. And rolling updates are an advanced feature that Kubernetes provides natively, so updating OpenStack has never been as easy as with Kubernetes. Kubernetes also provides some fancy building blocks for networking, like load balancing, and of course Calico tunnels and service meshes. They're also quite helpful when dealing with such a complex application as OpenStack, where things need to talk to each other without any problem in the configuration. Helm also plays a great role here; it is our tool for Kubernetes. We're using the Helm charts that are provided for OpenStack upstream as our low-level layer of logic to deploy OpenStack services and connect them to each other. And naturally, automatic scale-up of the control plane: adding a new node is easy, you just add a new Kubernetes worker with a bunch of labels there, and it handles the distribution of the necessary services automatically. Naturally, there are certain drawbacks; these fancy features come at a cost. Human operators need to understand Kubernetes and how it works. But this is also a good thing, because everything is moving towards Kubernetes these days, so you would have to learn it at some point anyway; you can use this as a chance to bring yourself to the next level of knowledge. OpenStack is not a 100% cloud-native application by itself. Unfortunately, there are certain components that are stateful, like databases, or nova-compute services, or Open vSwitch daemons, that have to be dealt with very carefully when doing upgrades, updates, and the whole deployment. So there's extra lifecycle management logic built in that handles these components carefully for you; a bit of complexity we had to add. And naturally, Kubernetes itself requires resources to run, so you need to have these resources available and dedicated to the Kubernetes control plane to be able to control your application, that is, OpenStack itself. So a bit of investment is required. >> Can anybody just containerize OpenStack services and get these benefits? >> Well, yes, the idea is not new; there's a bunch of upstream, sorry, community projects doing pretty much the same thing. So we are not inventing a rocket here, let's be fair.
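To make those self-healing and rolling-update policies concrete, here is a minimal sketch of what such a declarative control-plane service might look like, assuming a hypothetical containerized keystone-api image; the actual Mirantis Helm charts and images differ.

```python
# A minimal sketch of declarative control-plane policies; image name,
# labels, and probe details are assumptions, not Mirantis' real charts.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="keystone-api", namespace="openstack"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes restores this count if a pod or node dies
        selector=client.V1LabelSelector(match_labels={"app": "keystone-api"}),
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_unavailable=1,  # upgrade one replica at a time
                max_surge=1,
            ),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "keystone-api"}),
            spec=client.V1PodSpec(
                # Only suitably labeled workers receive control-plane services
                node_selector={"openstack-control-plane": "enabled"},
                containers=[
                    client.V1Container(
                        name="keystone-api",
                        image="example.registry/keystone:ussuri",  # assumed
                        liveness_probe=client.V1Probe(
                            http_get=client.V1HTTPGetAction(path="/v3", port=5000),
                        ),
                    )
                ],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment("openstack", deployment)
```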
However, it's the way that Kubernetes cooks OpenStack that gives you the robustness and reliability that enterprise and big customers actually need. And we're doing a great deal of work automating all the possible day-two workflows and all the caveats and complexities of OpenStack management inside our products. Okay, at this point I believe we shall wrap this discussion up, so let me conclude for you. OpenStack is an open-source infrastructure-as-a-service platform that still has its niche in the 2020s, and it's going to keep that niche for at least five years. OpenStack is a powerful but very complex tool. And the complexities of OpenStack and OpenStack lifecycle management are successfully solved by Mirantis through the capabilities of our Kubernetes distribution, which provides us with all the necessary primitives to run OpenStack as just another containerized application these days.

Published Date : Sep 14 2020


IBM DataOps in Action Panel | IBM DataOps 2020


 

>> From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Hi everybody, welcome to this special digital event where we're focusing in on DataOps, DataOps in action, with generous support from our friends at IBM. Let me set up the situation here. There's a real problem going on in the industry, and that's that people are not getting the most out of their data. Data is plentiful, but insights perhaps aren't. What's the reason for that? Well, it's really a pretty complicated situation for a lot of organizations. There are data silos; there are challenges with skill sets and a lack of skills; there are tons of tools out there, sort of a tool sprawl; the data pipeline is not automated; and the business lines oftentimes don't feel as though they own the data. That creates some real concerns around data quality and a lot of finger-pointing. The opportunity here is to really operationalize the data pipeline and infuse AI into that equation, and really attack the cost-cutting and revenue-generation opportunities in front of you. Think about this: virtually every application this decade is going to be infused with AI; if it's not, it's not going to be competitive. And so we have organized a panel of great practitioners to really dig into these issues. First I want to introduce Victoria Stassi, who's an industry expert and a top data executive at Northwestern Mutual. Victoria, great to see you again, thanks for coming on. >> Excellent, nice to see you as well. >> And Caitlin Alfre is the director of the AI Accelerator and also part of the Chief Data Officer organization at IBM, which has actually eaten some of its own cooking, let me say it that way. Caitlin, great to see you again. And Steve Lewis, good to see you again; senior vice president and director of data management at Associated Bank. Thanks for coming on. >> Thanks, Dave, glad to be here. >> All right guys, so you heard my narrative. In terms of operationalizing and getting the most insight: data is wonderful, insights aren't easy, but getting insight in real time is critical in this decade. Give us each a sense as to where you are on that journey. Victoria, you start, because you're brand new to Northwestern Mutual, but you have a lot of deep expertise in health care, manufacturing, and financial services. Where do you see the general industry climate, and then we'll talk about the journeys that you're on, both personally and professionally, if that's fair. >> Sure. I think right now the key is you need to have speed to insight. As I've experienced going through many organizations, they're all facing the same challenges today, and a lot of those challenges are: where does my data live? Is my data trusted, meaning has it been curated, has it been cleansed, is it qualified? Is the data ready? What we often see happen is that businesses know their KPIs, they know their business metrics, but they can't find where that data lives. There's abundant data, spread all over the place, and it's replicated because it's not well managed. A lot of it is what governance, and the platforms and tools that enable that governance, so to speak, offer organizations: just that piece of it, I can tell you where the data is, I can tell you what's trusted. When you can quickly access information and bring back answers to business questions, that is one answer, not many answers leaving the business to question what's the right path, which is the correct answer, which way do I go at the executive level. That's the biggest challenge. Where we want the industry to go moving forward is, one, breaking that down, allowing that information to be published quickly, and two, enabling data virtualization. A lot of what you see today is that for most businesses, it takes time to build out large warehouses at an enterprise level. We need to pivot quicker, so a lot of what businesses are doing is leaning towards taking advantage of data virtualization, allowing them to connect to these data sources and bring that information back quickly, so they don't have to replicate that information across different systems or different applications, and then to be able to provide those answers back quickly, also allowing seamless access for the analysts that are running at full speed, trying to find the answers as quickly as they can. >> Great, okay. And I want to get into the how a bit later. Steve, let me go to you. One of the things that we talked about earlier was infusing this mindset of a data culture, and thinking about data as a service. So talk a little bit about how you got started, what the starting point was, and take us through that. >> Sure. I think the biggest thing for us was to change that mindset from data being just for reporting, for insights on things that have happened in the past with data that already existed. We've tried to shift the mentality to start to use data within our actual applications, so that we're providing those insights in real time through the applications as they're consumed, helping with customer experience, helping with personalization and optimization of our applications. The way we started down that path, or kind of the journey that we're still on, was to get the foundation laid first. Part of that has been making sure we have access to all that data, whether it's through virtualization like Vic talked about, or whether it's through having more of the data collected in a data lake, where we have all of that foundational data available as opposed to waiting for people to ask for it. That's been the biggest culture shift for us: having that availability of data, ready to provide those insights, as opposed to making the business or the application ask for that data. >> Caitlin, when I first met Inderpal Bhandari, I was asking him, okay, what's the role of the CDO? And he mentioned a number of things, but two of the things that stood out were: you've got to understand how data affects the monetization of your company. That doesn't mean, you know, selling the data; what role does it play in helping cut costs, or increase revenue, or productivity, or customer service, et cetera. The other thing he said was you've got to align with the lines of business. It all sounded good, and this was several years ago, and IBM took it upon itself to drink its own champagne, I was going to say, you know, dogfooding, whatever. But it's not easy to just flip a switch and infuse AI and automate the data pipeline. You guys had to go through some real pain to get there, and you did; you were early on, you took some arrows, and now you're helping your customers benefit from that. But talk about some of the use cases where you guys have applied this, obviously in one of the biggest organizations in the world; the real challenges there. >> Sure, happy to. You know, we've been on this journey for about four years now. We stood up our first Chief Data Office in 2016, and you're
right, it was all about getting that data strategy crafted and executed internally, and we wanted to be very transparent, because as you mentioned, there were a lot of challenges; you have to think differently about the value. So we wrote that data strategy at that time, brought it to the enterprise, and then we quickly pivoted to see the real opportunity and value of infusing AI across all of our needs. To your question on a couple of specific use cases: I'd say we invested that time getting that platform built and implemented, and then we were able to take advantage of it. One particular example that I've been really excited about: I have a practitioner on my team who's a supply chain expert, and a couple of years ago he started building out a supply chain solution so that we can better mitigate our risk in the event of a natural disaster, like an earthquake or hurricane, anywhere around the world. And because we invested at the time in getting the data pipelines right, getting all of that data curated and cleansed, and the quality of it, we were able in recent weeks to add the really critical COVID-19 data and deliver that out to our employees internally for their preparation purposes, make it available to our nonprofit partners, and now we're starting to see our first customers take advantage too, with the health and well-being of their employees in mind. So that's an example where, and I'm seeing a lot of the clients I work with do this, they invest in the data and AI readiness, and then they're able to take advantage of all of that work very quickly, in an agile fashion, and spin those solutions up. >> Well, I think one of the keys there, Caitlin, is that we can talk about that in a COVID-19 context, but that's going to carry through. That notion of business resiliency is going to live on in this post-pivot world, isn't it? >> Absolutely. I think for all of us, the importance of investing in business continuity and resiliency-type work, so that we know what to do in the event of either a natural disaster or something beyond, will be grounded in that. And I think it'll only become more important for us to be able to act quickly, and so the investment in those platforms, and the approach that we're taking and that I see many of us taking, will really be grounded in that resiliency. >> So Vic and Steve, I want to dig into this a little bit, because we use this concept of DataOps; we're stealing from DevOps, and there are similarities, but there are also differences. Let's talk about the data pipeline. If you think about the data pipeline as a sort of quasi-linear process where you're ingesting data, and you might be using tools, whether it's Kafka or whatever your favorite tool is, and then you're transforming that data, and then you've got discovery, you've got to do some exploration, you've got to figure out your metadata catalog, and then you're trying to analyze that data to get some insights, and then ultimately you want to operationalize it. You could come up with your own data pipeline, but generally that concept is, I think, well accepted. There are different roles, and unlike DevOps, where it might be the same developer who's implementing security policies and handling the operations, in DataOps there might be different roles, and in fact very often there are: there's data science, there's maybe an IT role, there's data engineering, there's analysts, et cetera. So Vic, I wonder if you could talk about the challenges in managing and automating that data pipeline, applying DataOps, and how practitioners can overcome them. >> Yeah, I would say a perfect example would be a client that I was recently working with, where we took a team and built it up using agile methodologies, that framework, rapidly ingesting data and then proving out that the data is fit for purpose. We talk a lot now about big data, and that is really where a lot of industries are going: they're trying to add enrichment to their own data sources, so what they're doing is purchasing third-party data sets. In doing so, you make that initial purchase, but what many companies are doing today is they have no real way to vet that. They'll purchase the information, they won't vet it up front, they'll bring it into an environment, and it's going to take them time to understand if the data is of quality or not; by the time they do, typically the sale is done, and they're not going to get anything back. With the most recent client, we were able to take an unstructured data source, bring it in, and ingest it with modelers using this agile team, and within two weeks we were able to bring the data in from the third-party vendor, do what we consider rapid prototyping: profile the data, understand if the data is of quality or not, and quickly figure out that, you know what, the data is not. In doing that, we were able to contact the vendor, tell them, sorry, the data set isn't up to snuff, we'd like our money back, we're not going forward with it. That's enabling businesses to be smarter with their data purchases today, because as much as many businesses want to rely on their own data, they actually want to enrich it with data from third-party sources, and that's really what DataOps is allowing us to do. It's allowing us to think at a broader, higher level: what structures can we store the information in, so that it doesn't necessarily have to be modeled first? Because a modeler is great, but if we have to take time to model all the information before we even know we want to use it, that's going to slow the process down, and that's slowing the business down. The business is looking for us to speed up all of our processes. A lot of what we heard in the past is that IT tends to slow us down, and that's where we're trying to change that perception in the industry: no, we're actually here to speed you up, we have all the tools and technologies to do so, and they're only getting better. I would also say, on data scientists: that's another piece of the pie for us. If we can bring the information in, quickly catalog it with the metadata, bring in the back-end data assets, and then supply that information to the scientists, gone are the days where scientists are asking for connections to all these different data sources, waiting days for access requests to be approved, just to find out, once they figure out the relationship diagram, what the design looks like in that back-end database, how to get to it, and write the code to get to it, that this is not the information they need, that Sally next to them pointed them to the wrong information. That's where the catalog comes in; that's where DataOps and data governance, having that catalog, that metadata management platform, available to you, come in. They can go into a catalog without having to request access to anything, and within five minutes they can see the structures: what do the tables look like, what do the fields look like, are these the metrics I need to bring back answers to the business? That's DataOps. It's allowing us to speed up all of that information: taking stuff that took months down to two weeks, down to two days, down to two hours.
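As a rough sketch of the fit-for-purpose vetting Victoria describes, the following assumes a hypothetical vendor CSV sample and illustrative quality thresholds; it is not any specific product's profiler.

```python
# Rapid-prototyping sketch: profile a candidate third-party data set and
# flag basic quality problems before committing to the purchase.
import pandas as pd

def profile(df: pd.DataFrame, max_null_rate=0.05, max_dup_rate=0.01):
    """Return a simple fit-for-purpose verdict for a data set."""
    null_rate = df.isna().mean()          # per-column share of missing values
    dup_rate = df.duplicated().mean()     # share of fully duplicated rows
    failing = null_rate[null_rate > max_null_rate].index.tolist()
    verdict = not failing and dup_rate <= max_dup_rate
    return {"columns_failing_null_check": failing,
            "duplicate_row_rate": round(float(dup_rate), 4),
            "fit_for_purpose": bool(verdict)}

vendor_df = pd.read_csv("vendor_sample.csv")  # hypothetical sample extract
print(profile(vendor_df))                     # decide: keep it or return it
```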
>> So Steve, I wonder if you could pick up on that and just help us understand what DataOps means to you. We talked about it earlier; in our previous conversation, I mentioned that the demand for data access was through the roof, and you've gone from that to more of a self-service environment, where it's not IT owning the data, it's really the businesses owning the data. What does all this DataOps stuff mean in your world? >> Sure, I think it's very similar. It's how do we enable and get access to that data quicker, showing the right controls, showing the right processes, and building that scalability and agility into all of it so that we're doing this at scale. It's much more rapidly available; we can discover new data, separately determine if it's right, or, more importantly, if it's wrong, similar to what Vic described. It's how do we enable the business to make the right decisions on whether or not they're going down the right path. The catalog is a big part of that. We've also introduced a lot of frameworks around scale, so the ability to rapidly ingest data and make it available has been key for us. We've also focused on a prototyping environment, that sandbox mentality: how do we rapidly stand those up for users, still provide some controls, but give people that ability to do that exploration. What we're finding is that by providing the platform and the foundational layers, the use cases start to evolve and come out of that, as opposed to having the use cases first and then going and building things from them. We're shifting the mentality within the organization to say, we don't know what we need yet, so let's start to explore. That's kind of that data scientist mentality and culture; it's more a way of thinking, as opposed to an actual project or implementation. >> Well, I think that cultural aspect is important. Caitlin, you guys are an AI company, or at least that's part of what you do, but for decades, maybe a century, you've been organized around different things: by manufacturing plant, or sales channel, or whatever it is. How has the Chief Data Officer organization within IBM been able to transform itself and really infuse a data culture across the entire company? >> One of the approaches we've taken, and we talk about this as a blueprint to drive AI transformation so that we can achieve and deliver these really high-value use cases: we talked about the data and the technology, which we've just pressed on, but the organizational piece of it is so important, the change management, enabling and equipping our data stewards. I'll give one specific example that I've been really excited about. When we were building our platform and starting to pull in data, structured and unstructured, our data stewards were spending a lot of time manually tagging and creating business metadata about that data, and we identified that as a real pain point, costing us a lot of money and valuable resources. So we started to automate the metadata generation, doing that in partnership with our deep learning practitioners and some of the models they were able to build, and we pushed that capability out into our product last year. One of the really exciting things for me to see is that our data stewards, who bring so much value and expertise and skills, have reported that it's really changed the way they're able to work; it's really sped up their process, and it's enabled them to move on to higher-value responsibilities and business benefits, so they're very happy from an organizational point of view. I think there are ways to identify those use cases where, for us, we drove some significant productivity savings, but we also really empowered our data stewards, whom we really value, making their jobs easier and more efficient and helping them move on to things that they're more excited about doing. So that's another example of the approach we've taken.
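As a toy stand-in for the automated metadata tagging Caitlin describes, the sketch below suggests tags for a column from sampled values. IBM's production capability uses trained deep learning models; the regex rules here exist purely to illustrate the workflow that replaced manual tagging.

```python
# Toy automated business-metadata tagger; pattern rules stand in for the
# deep learning models used in the real system.
import re

TAG_PATTERNS = {
    "email":  re.compile(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", re.I),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone":  re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def tag_column(sample_values):
    """Suggest business-metadata tags for a column from sampled values."""
    tags = set()
    for value in sample_values:
        for tag, pattern in TAG_PATTERNS.items():
            if pattern.search(str(value)):
                tags.add(tag)
    # The steward reviews suggested tags instead of typing them by hand.
    return sorted(tags) or ["untagged"]

column_sample = ["jane@example.com", "bob@example.org"]  # hypothetical data
print(tag_column(column_sample))                          # ['email']
```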
>> So the cultural piece, the people piece, is key, and we talked a little bit about the process. I want to get a little bit into the tech. Steve, I wonder if you could tell us, what's the tech? We have this bevy of tools; I mentioned a number of them up front. You've got different data stores, you've got open-source tooling, you've got IBM tooling. What are the critical components of the technology that people should be thinking about? >> From an ingestion perspective, in our architecture we're trying to do a lot in a Python framework, with scalable ingestion pipeline frameworks. On the catalog side, we've gone with IBM Cloud Pak for Data, which provides a platform for a lot of these tools to stay integrated together: from the discovery of data sources, the cataloging, the documentation of those data sources, all the way through the actual advanced analytics, the Python and R models, and the open-source IDEs, combined with the ability to do some data prep and refinery work. Having that all in an integrated platform was key for us for the rollout of more of these tools in bulk, as opposed to having point solutions; that's been a big focus area for us. And then on the analytics side, the web versus the IDE, there are a lot of different components you can go into, whether it's MuleSoft, whether it's AWS and some of the native functionality out there. You mentioned Kafka before, and Kinesis streams, and different streaming technologies; those are all ones that are in our toolkit that we're starting to look at. And one of the keys here is we're trying to make decisions in as close to real time as possible, as opposed to the business having to wait weeks or months, and then by the time they get insights, it's late and really rearview-mirror.
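A minimal sketch of the near-real-time ingestion Steve alludes to, using the kafka-python package; the topic and broker names are assumptions, not Associated Bank's actual configuration.

```python
# Consume events as they arrive rather than in a nightly batch, so that
# insights reach the application while they are still fresh.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "customer-events",                       # hypothetical topic
    bootstrap_servers=["broker1:9092"],      # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

for event in consumer:
    # Score or route each event immediately; downstream models and
    # applications receive the insight in close to real time.
    record = event.value
    print(record.get("customer_id"), record.get("action"))
```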
>> So Vic, your focus in your career has been a lot on data: data quality, governance, master data management. From a data quality standpoint, what are some of the key tools that you're familiar with, that you've used, that have really enabled you to operationalize that data pipeline? >> You know, I would say I definitely have the most experience with the IBM tools, but also Informatica; those are, to me, the two top players. IBM has definitely come to the table with a suite. Like Steve said, Cloud Pak for Data is really a one-stop shop, allowing that quick, seamless access for the business user, versus them having to go into some of the previous versions that IBM had rolled out, where you're going into different user interfaces to find your information. That can become clunky, it can slow the process, and it can also leave almost a bad taste in most people's mouths, because they don't want to navigate from system to system to system just to get their information. So Cloud Pak, to me, definitely brings everything to the table in a one-stop-shop type of environment. Informatica is working on the same thing, but I would tell you that they haven't come up with a solution that really comes close to what IBM has done with Cloud Pak for Data; I'd be interested to see if they can bring that to the horizon. But really, IBM's suite of tools allows for profiling, analytics, metadata management, access to Db2 Warehouse on Cloud; those are the tools that I've worked with in my past to implement, as well as Cloud Object Storage, bringing all of that together to provide that one stop. At Northwestern, we're working right now with Collibra. I think Collibra is a great tool, a great governance catalog, but that's really what it's truly made for: it's a governance catalog. You have to bring some other pieces to the table in order for it to serve up all that Cloud Pak does today, which is the advanced profiling, the data virtualization that Cloud Pak enables today, the machine learning at the level where you can actually work with R and Python code and put your notebooks inside of the Pak. That's some of the pieces that are missing in some of the other vendors' tools today. >> So one of the things that you're hearing here is the theme of openness. We've talked about a lot of tools, and not all IBM tools; there are many, but people want to use what they want to use. So Caitlin, from an IBM perspective, what's your commitment to openness, number one, but also, we talked a lot about Cloud Paks, to simplifying the experience for your clients? >> Well, I thank Steve and Victoria for speaking to their experience; I really appreciate the feedback. Part of our approach has been to really take on the challenges that we've had ourselves. I mentioned some of the capabilities that we brought forward in our Cloud Pak for Data product, one being automating metadata generation, and that was something we had to solve for our own data challenges and needs. So we will continue to source our use cases from, and ground them in, a practitioner perspective of what we're trying to do, solve, and build. And the approach we've really been taking is co-creation, in that we roll these capabilities out in the product and work with our customers, like Steve and Victoria, to really solicit feedback, have our dev teams push that out, and just be very open and transparent. We want to deliver a seamless experience, we want to do it in partnership, and we'll continue to solicit feedback and improve and roll out. That has been our approach and will continue to be, and I really appreciate the partnerships that we've been able to foster.
>> So we don't have a ton of time, but I want to go to the practitioners on the panel and ask you about key performance indicators. When I think about DevOps, one of the things we're measuring is the elapsed time to deploy applications, start to finish; we're measuring the amount of rework that has to be done, the quality of the deliverable. What are the KPIs, Victoria, that are indicators of success in operationalizing the data pipeline? >> Well, I would definitely say your ability to deliver quickly. How fast can you deliver; is that quicker than what you've been able to do in the past? What is the user experience like? Have you been able to measure the amount of time users spent bringing information to the table in the past, versus being able to reduce that time to delivery of information, of business answers to business questions? Those are the key performance indicators to me that tell you that the suite we've put in place today is providing information quickly; I can get my business answers quicker than I could before, and the information is accurate. So being able to measure: is what I've been giving back quality, or is it the wrong information, so that I've got to go back to the table and find where I need to gather it from somewhere else? That, to me, tells us: you know what, with the tools we've put in place today, my teams are working quicker, they're answering the questions they need to accurately, and that's when we know we're on the right path. >> Steve, anything you'd add to that? >> I think she covered a lot of the key components. There's the data quality scoring: for all the different data attributes, coming up with a metric around how to measure that, and then showing that trend over time to show that it's getting better. The other one that we're tracking is just around overall data availability: how much data are we providing to our users, and showing that trend. When I first started, we had somewhere in the neighborhood of 500 files that had been brought into the warehouse and published, with maybe a couple of thousand fields available. We've grown that to where we have thousands of tables now available, so it's been hundreds of percent in scale, as far as just the availability of that data: how much is out there, how much is ready and available for people to just dig in, put into their analytics and their models, and get those back into the applications. That's another key metric that we're starting to track as well.
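A simple sketch of the quality-scoring KPI Steve mentions: a composite score per snapshot of a table, tracked as a trend over time. The completeness and uniqueness weighting below is illustrative, not a standard.

```python
# Track a composite data-quality score over time; weights are illustrative.
import pandas as pd

def quality_score(df: pd.DataFrame) -> float:
    """Blend completeness and uniqueness into one 0-100 score."""
    completeness = 1.0 - df.isna().mean().mean()   # average non-null rate
    uniqueness = 1.0 - df.duplicated().mean()      # non-duplicate row rate
    return round(100 * (0.7 * completeness + 0.3 * uniqueness), 1)

# Hypothetical monthly snapshots of the same curated table
snapshots = {"2020-03": pd.DataFrame({"id": [1, 2, 3], "x": [5, None, 7]}),
             "2020-04": pd.DataFrame({"id": [1, 2, 3], "x": [5, 6, 7]})}
trend = {month: quality_score(df) for month, df in snapshots.items()}
print(trend)   # rising scores show the pipeline is getting healthier
```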
>> So, last question. I said at the top that every application is going to need to be infused with AI this decade; otherwise that application is not going to be as competitive as it could be. For those that are maybe stuck in their journey and don't really know where to get started, I'll start with Caitlin, then go to Victoria, and then Steve, you bring us home: what advice would you give the people that need to get going on this? >> My advice is: poll the folks that are either producing or accessing your data and figure out where the pain is. I mentioned some of the data management challenges we were seeing; those processes were taking weeks, prone to error, highly manual, so that part was ripe for an AI project. By identifying those use cases that are really causing the most rework and manual effort, you can move really quickly, and as you build this platform out, you're able to spin those up in an accelerated fashion. Identifying that, and figuring out the business impact you're able to drive very early on, you can get going and start really seeing the value. >> Great. >> Yeah, I would say Caitlin hit it on the head, but I would probably add to that: first and foremost, in my opinion, the important thing is data governance. You need to implement data governance at an enterprise level. Many organizations will do it, but they'll have silos of governance. You really need an enterprise data governance platform that consists of a true framework with an operational model and charters: you have data domain owners, data domain stewards, data custodians, and all of that needs to be defined. And while that may take some work in the beginning, the payoff down the line is that much more. It's allowing your business to truly own the data. Once they own the data, and they take part in classifying the data assets for technologists and for analysts, you can start to eliminate some of the technical debt that most organizations have acquired today. They can start to look at: what are some of the systems that we can turn off, and what are some of the systems where we see value? Truly build out a capability matrix: we can start mapping systems to capabilities and start to say, where do we have redundancy, and what can we get rid of? That's the first piece of it. The second piece is really leveraging the tools that are out there today, the IBM tools and some of the other tools as well, that enable some of the newer, next-generation capabilities, like AI, for example, allowing automation on top of automation, which, for all of us, means that a lot of the analysts in place today can access the information quicker and deliver the information accurately, like we've been talking about, because it's been classified; that pre-work has been done. It's never too late to start, but once you start, it really acts as a domino effect to everything else, where you start to see everything else fall into place. >> All right, thank you. Steve, bring us home; advice for your peers that want to get started? >> Sure. I think the key for me, like those guys have talked about, is that everything they said is valid and accurate. The thing I would add, from a starting perspective, is: if you haven't started, start. Don't try to overthink it or over-plan it. Start, just do something, and start to show that progress and value. The use cases will come, even if you think you're not there yet; it's amazing, once you have the foundational components there, how some of these things start to come out of the woodwork. So get it started, have that iterative approach, and keep an open mindset. Encourage exploration and enablement. Look your organization in the eye and ask: why are there silos, why do these things work like this, what are our problems, what are the things getting in our way? Focus on and tackle those areas, as opposed to trying to put up more rails and more boundaries and encouraging that silo mentality. Really look at how you focus on that enablement. And the last comment would just be on scale: everything should be focused on scale. What you think is a one-time process today, you're going to do again; we've all been there. You're going to do it a thousand times again, so prepare for that, prepare for everything you're doing to be done a thousand times, and start to instill that culture within your organization. >> Great advice, guys. Data, bringing machine intelligence and AI to really drive insights, and scaling with a cloud operating model no matter where the data lives. It's really great to have three such knowledgeable practitioners. Caitlin, Victoria, and Steve, thanks so much for coming on the Cube and helping support this panel. All right, and thank you for watching, everybody. Now, remember, this panel was part of the raw material that went into a crowd chat that we hosted on May 27th, at crowdchat.net/dataops, so go check that out. This is Dave Vellante for the Cube; thanks for watching. (upbeat music)

Published Date : May 28 2020

Matt Carroll, Immuta | CUBEConversation, November 2019


 

>> From the Silicon Angle Media office in Boston, Massachusetts, it's the Cube. Now, here's your host, Dave Vellante. >> Hi everybody, welcome to this Cube Conversation here in our studios, outside of Boston. My name is Dave Vellante. I'm here with Matt Carroll, who's the CEO of Immuta. Matt, good to see ya. >> Good, nice to be on. >> So we're going to talk about governance, how to automate governance, data privacy, but let me start with Immuta. What is Immuta, and why did you guys start this company? >> Yeah, Immuta is an automated data governance platform. We started this company back in 2014 because we saw a gap in the market to be able to control data. What's happened in the market is that every enterprise wants to leverage their data; data's the new app. But governments want to regulate it, and consumers want to protect it. These were at odds with one another, so we saw a need to create a platform that could meet the needs of everyone: to democratize access to data in the enterprise, but at the same time, provide the necessary controls on the data to enforce any regulation, and ensure that there was transparency as to who is using it and why. >> So let's unpack that a little bit, and just try to dig into the problem here. We all know about the data explosion, of course, and I often say data used to be a liability; now it's turned into an asset. People used to say get rid of the data; now everybody wants to mine it and take advantage of it. But that causes privacy concerns for individuals; we've seen this with Facebook and many others. Regulations now come into play: GDPR, different states applying different regulations. So you have all these competing forces: the business guys just want to go and get out to the market, but then there are the lawyers, the compliance officers, and others. So are you attacking that problem? Maybe you could describe that problem a little further and talk about how you guys... >> Yeah, absolutely. As you described, there are over 150 privacy regulations being proposed across over 25 states, just in 2019 alone. GDPR has opened the floodgates, if you will, for people to start thinking about how we want to insert our values into data: how should people use it? And so the challenge now is, you're right, your most sensitive data in an enterprise is most likely going to give you the most insight into driving your business forward, creating new revenue channels, and being able to optimize your operational expenses. But the challenge is that consumers have awoken to: we're not exactly sure we're okay with that, right? We signed a EULA with you to just use our data for marketing, but now you're using it for other revenue channels? Why? And so where Immuta is trying to play in there is: how do we give the line of business the ability to access that instantaneously, but also give the CISO, the Chief Information Security Officer, and the governance teams the ability to take control back? It's a delicate balance between speed and safety. And I think what's really happening in the market is that we used to think about security as building firewalls; we invested in physical security controls around preventing external adversaries from stealing our data. But now it's not necessarily someone trying to steal it; it's just potentially misusing it by accident in the enterprise, and the CISO is having to step in and provide that level of control. And it's also the collision of the cloud and these privacy regulations.
Cause now we have data everywhere; it's not just inside our firewalls. And that's the big challenge, and the opportunity at hand: the democratization of data in the enterprise. The problem is that data's not all in the enterprise. Data's in the cloud, data's in SaaS, data's in the infrastructure. >> It's distributed by its very nature. All right, so there's a lot I want to follow up on. So first, there's GDPR. When GDPR went into effect, it was May of 2018, I think; it actually came out earlier, but the penalties didn't take effect until then. And I thought, okay, maybe this can be a framework for governments around the world and for states. It sounds like, yeah, sort of, but not really. Maybe there are elements of GDPR that people are adopting, but then it sounds like they're putting in their own twists, which is going to be a nightmare for companies. So are you not seeing GDPR becoming this global standard? It sounds like no. >> I don't think it's going to be a global standard necessarily, but I do think the spirit of GDPR is at the core of it: why are you using my data? What was the purpose? Traditionally, when we think about using data, we think about, all right, who's the user, and what authorizations do they have, right? But now there's a third question: sure, you're authorized to see this data, depending on your role or organization, but why are you using it? Are you using it for a certain business use? Are you using it for personal use? Why are you using this? That's the spirit of GDPR that everyone is adopting across the board. And then, of course, each state or each federal organization is thinking about their unique lens on it, right? So you're right, this is going to be incredibly complex, with the number of policies being enforced at query time. Let's just say I'm in Tableau or Looker, right? I'm just some simple analyst, I'm a young kid, it's my first job, and I'm running these queries; I don't know where the data is, I don't know what I'm combining. And what we found is that, on average in these large enterprises, any query at any moment in time might have over 500 thousand policies that need to be enforced in real time. >> Wow. >> And it's only getting worse. We have to automate it; no human can handle all those edge cases. We have to automate. >> So I want to get into how you guys actually do that. Before I do, there seems to be a lot of confusion in the marketplace. Take the words data management, data protection: all the backup guys are using those terms, the database guys use those terms, the GRC folks use those terms, so there's a lot of confusion there. You have all these adjacent markets coming together. You've got the whole governance, risk, and compliance space; you've got cyber security; there are privacy concerns, which are kind of two sides of the same coin. How do you see these adjacencies coming together? It seems like you sit in the middle of all that. >> Yeah, welcome to why my marketing budget is getting bigger and bigger. The challenge we're facing now is, I think, who owns the problem, right? The Chief Data Officer is taking on a much larger role in these organizations, and the CISO is taking a much larger role in reporting up to the board. You have the line of business, which is now almost self-sustaining; they don't have to depend on IT as much any longer, because of the cloud and because of the new compute layers that make it easier. So who owns it?
At the end of the day, where we see it is that there's a next generation of cyber tools coming out, and we think the CISO has to own this. And the reason is that the CISO's job is to protect the enterprise from cyber risk, and at the core of cyber risk is data. They must own the data problem. The CDO must find the data, explain what that data is, and make sure it's quality, but it is the CISO that must protect the enterprise from these threats. And so I see us as part of this next wave of cyber tools that are coming out. There are other companies that are equally in our stratosphere, like BigID; we're seeing AWS with Macie doing sensitive data discovery; Google has their data loss prevention service. So the cloud players are starting to see, hey, we've got to identify sensitive data. There are other startups that are saying, hey, we've got to identify and catalog sensitive data. And for us, we're saying, hey, we need to be able to consume all that cataloging, understand what's sensitive, and automatically apply policies to ensure that any regulation in that environment is met. >> I want to ask you about the cloud too; so much to talk to you about here, Matt. I also wanted to get your perspective on variances within industries. So you mentioned Chief Data Officers. The ascendancy of the Chief Data Officer started in financial services, healthcare, and government, where we had highly regulated industries, and now it's sort of seeped into more commercial sectors. In terms of those regulated industries, take healthcare, for example: there are specific nuances. Can you talk about what you're seeing in terms of industry variance? >> Yeah, it's a great point. Starting with healthcare: what does it mean to be HIPAA compliant anymore? There are different types of devices now where I can point one at your heartbeat from a distance away and have 99 percent accuracy in identifying you, right? It takes three data points in any data set to identify 87 percent of US citizens: if I have your age, sex, and location, I can identify you. So what does it mean anymore to be HIPAA compliant? The challenge is, how do we build guarantees of trust that we've de-identified these data sets? Because we have to use them, right? No one's going to go into a hospital and say, "You know what, I don't want you to save my life, cause I want my data protected," right? No one's ever going to say that. So the challenge we face now across these regulated industries is that the most sensitive data sets are critical for those businesses to operate, so there has to be a compromise. What we're trying to do in these organizations is help them leverage their data and build levels of proportionality for accessing it, right? The key isn't to stop people from using data. The key is to build the controls necessary to leverage a small bit of the data. Let's just say we've made it indistinguishable: you can only ask aggregate statistical questions. Well, you know what, we actually found some really interesting things there, but we need it to be a little bit more useful. It's this trade-off between privacy and utility; it's a pendulum that swings back and forth. As someone proves "I need more of this," you can swing it, or just mask it. I need more of it? All right, we'll just redact certain things. Nope, this is really important, it's going to save someone's life? Okay, completely unmasked, you have the raw data. But it's that control that's necessary in these environments; that's what's missing.
You know, we came out of the US Intelligence community. We understood this better than anyone, because it was highly regulated, very sensitive data, but we knew we needed the ability to rapidly control it. Is this just a hunch, or is this a 9/11 event? And you need the ability to switch like that. That's the difference. And so healthcare is going through a change: we have all these new algorithms. Like Facebook the other day said, hey, we have machine learning algorithms that can look at MRI scans, and we're going to be better than anyone in the world at identifying these. Do you feel good about giving your data to Facebook? I don't know, but maybe we can provide guaranteed anonymization to them, to prove to the world they're going to do right. That's where we have to get to. >> Well, this is huge, especially for the consumer, cause you just gave several examples. Facebook's going to know a lot about me: a mobile device, a Fitbit. And yet, if I want to get access to my own medical records, it's like Fort Knox to try to get them; "please give this to my insurance company," and you've got to go through all these forms. So you've got those diverging objectives, and as a consumer, I want to be able to trust that when I say yes, you can use it, go; and that I can get access to it, and others can get access to it. I want to understand exactly what it is that you guys do, what you sell. Is it software, is it SaaS? And then let's get into how it works. So what is it? >> Yeah, so we're a software platform. We deploy into any infrastructure, but it is not multi-tenant, so we can deploy on any cloud, or on premises, for any customer, and we do that with customers across the world. If you think about the core of what Immuta is, think of Immuta as a system of record for the CISO or the line of business, where I can connect to any data, on any infrastructure, on any compute layer; we connect into over 61 different storage platforms. We then have built a UI where lawyers, and we actually have three lawyers as employees that act as product managers, can help any lawyer of any stature take what's on paper, these regulations, these rules and policies, and digitize it, essentially, into active code. So they can build any policy they want on any data in the enterprise and enforce it globally without having to write any code. And then, because we're this plane where you can connect any tool to this data and enforce any regulation, because we're the man in the middle, we can audit who is using what data and why: every action, and any change in policy. So if you think about it, it's connect any tool to any data, control it under any regulation, and prove compliance in a court of law. >> So you can set the policy at the data set level? >> Correct. >> And so, how does one do that? Can you automate that on the creation of that data set? I mean, you've got, you know, dependencies. How does that all work? >> Yeah, what's really interesting about our secret sauce is that, one, we can do that at the column level, we can do it at the row level, and we can do it at the cell level. >> So, very granular. >> Very, very granular. This is something, again, we learned from the US Intelligence community: you have to have very fine-grained access to every little bit of the data. The reason is that, especially in the age of data, people are going to combine many data sets together.
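As a heavily simplified, hypothetical sketch of the purpose-aware enforcement Matt describes (Immuta's actual policy engine and policy language are not spelled out in this conversation), policies can be captured as data, not code, and applied to every read:

```python
# Hypothetical purpose-aware policy enforcement; structure and names
# are invented for illustration, not Immuta's real API.
MASK = "***"

policies = [  # "lawyer-authored" rules captured as data, not code
    {"column": "ssn",   "allowed_purposes": {"fraud_investigation"}},
    {"column": "email", "allowed_purposes": {"fraud_investigation", "marketing"}},
]

def enforce(row: dict, purpose: str) -> dict:
    """Return a view of the row with columns masked for this purpose."""
    masked = dict(row)
    for rule in policies:
        if purpose not in rule["allowed_purposes"]:
            masked[rule["column"]] = MASK
    return masked

row = {"name": "Ada", "ssn": "123-45-6789", "email": "ada@example.com"}
print(enforce(row, "marketing"))            # ssn masked, email visible
print(enforce(row, "fraud_investigation"))  # full raw row
```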
The challenge isn't enforcing the policy on a static data set; the challenge is enforcing the policy across three data sets where you merge three pieces of data together, which have conflicting policies. What do you do then? That's the beauty of our system. We deal with that policy inheritance, we manage that lineage of the policy, and can tell you here's what the policy will be. >> In other words, you can manage to the highest common denominator, as an example. >> Or we can automate it to the lowest common denominator, where you can work in projects together, recognizing hey, we're going to bring someone into the project that's not going to have the same level of access. Everyone else's access will automatically change to the lowest common denominator. But then you share that work with another team and it'll automatically be brought to the highest common denominator. And we've built all these workflows in. That was what was missing, and that's why I call it a system of record. It's really a symbiotic relationship between IT, the data owner; governance, the CISO, who are trying to protect the data; and the consumer, and all they want to do is access the data as fast as possible to make better, more informed decisions. >> So the other mega-trend you have is obviously the superpower of machine intelligence, or artificial intelligence, and then you've got edge devices and machine-to-machine communication, where it's just an explosion of IP addresses and data, and so, it sounds like you guys can attack that problem as well. >> Any of this data coming in on any system, the idea is that eventually it's going to land somewhere, right? And you got to protect it. We call that rogue data, right? This is why I said earlier, when we talk about data, we have to start thinking about it as it's not in some building anymore. Data's everywhere. It's going to be on a cloud infrastructure, it's going to be on premises, and it's likely, in the future, going to be on many distributed data centers around the world, cause business is global. And so, what's interesting to us is no matter where the data's sitting, we can protect it, we can connect to it, and we allow people to access it. And that's the key thing: it's not worrying about how to lock down your physical infrastructure, it's about logically separating it. And what differentiates us from other people is, one, we don't copy the data, right? That's always the barrier for these types of platforms. We leave the data where it is. The second is we take all those regulations and we can actually, at query time, push it down to where that data is. So rather than bring it to us, we push the policy to the data. And that's what differentiates us from everyone else: it allows us to guarantee that protection, no matter where the data's living. >> So you're essentially virtualizing the data? >> Yeah, yeah. It's virtual views of data, but it's not all the data. What people have to realize is, in the age of apps, we cared about storage. We put all the data into a database, we built some services on top of it and a UI, and it was controlled that way, right? You had all the nice business logic to control it. In the age of data, right? Data is the new app, right? We have all these automation tools, DataRobot, and H2O, and Domino, and Tableau's building all these automation workflows. >> The robotic process automation. >> Yeah, RPA: UiPath, WorkFusion, right?
They're making it easier and easier for any user to connect to any data and then automate the process around it. They don't need an app to build unique workflows; these new tools do that for them. The key is getting to the data. And the challenge with the supply chain of data is that time to data is the most critical aspect of it, cause the time to insight is perishable. And so, what I always tell people, a little story: I came from the government, I worked in Baghdad, and we had 42 minutes to know whether or not a bad guy was in the environment and we could go after him. After that, that data was perishable, right? We didn't know where he was. It's the same thing in the real world. It's like imagine if Google told you, well, in 42 minutes it might be a good time to get on 495. (laughter) It's not very useful, I need to know the information now. That's the key. What we see is policy enforcement and regulations are the key barrier to entry. So our ability to rapidly, with no latency, connect anyone to that data and enforce those policies where the data lives, that's the critical nature. >> Okay, so you can apply the policies and you do it quickly, and so now you can help solve the problem. You mentioned the cloud before, or on-prem. What is the strategy there with regard to various clouds, and how do you approach multi-cloud? >> I think cloud, what used to be an infrastructure-as-a-service game, is now becoming a compute game. I think large, regulated enterprises, government, healthcare, financial services, insurance, are all moving to cloud now in a different way. >> What do you mean by that? Cause people think infrastructure as a service, they'll say oh, that's compute, storage, and some networking. What do you mean by that? >> I think there's a whole new age of software that's being laid on top of the availability of compute and the availability of storage. That's companies like Databricks, companies like Snowflake, and what they're doing is dramatically changing how people interact with data. The availability zones, the different types of features, the ability to rip and replace legacy warehouses and mainframes. It's changing the ability to not just access, but also the types of users that could even come on to leverage this data. And so these enterprises are now thinking through, "How do I move my entire infrastructure of data to them? And what are these new capabilities that I could get out of that?" Which, that is just happening now. A lot of people have been thinking, "Oh, this has been happening over the past five years." No, the compute game is now the new war. I used to think of, like, Big Data, right? With Big Data, everyone started to understand, "Ah, if we've got our data assets together, we can get value." Now they're thinking, "All right, let's move beyond that." The new cloud data warehouses are Snowflake and Databricks. What they're thinking about is, "How do I take all your metadata and allow anyone to connect any BI tool, any data science tool, and provide highly performant and highly dependable compute services to process petabytes of data?" It's pretty fantastic. >> And very cost efficient, being able to scale compute independent of storage, from an architectural perspective. A lot of people claim they can do that, but it doesn't scale the same way. >> Yeah, when you're talking about... Cause that's the thing you got to remember: these financial systems especially, they depend on these transactions.
They cannot go down, and they're processing petabytes of data. That's what the new war is over: that data in the compute layer. >> And the opportunity for you is that data can come from anywhere; it's not sitting in a God box where you can enforce policies on that corpus. You don't know where it's coming from. >> We want to be invisible to that, right? You're using Snowflake, it's just automatically enforced. You're using Databricks, it's automatically enforced. All these policies are enforced in flight. No one should even truly care about us. We just want to allow you to use the data the way you're used to using it. >> And you do this, this secret sauce you talked about, is it math, is it artificial intelligence? >> It's math. I wish I could say it was like super fancy, unsupervised neural nets or what not; it's 15 years of working in the most regulated, sticky environments. We learned about very simple, novel ways of pushing it down. Great engineering's always simple. But what we've done is... At query time, what's really neat is we figured out a way to take user attributes from an identity management system and combine that with a purpose, and then what we do is we've built all these libraries to connect into all these disparate storage and compute systems, to push it in there. The nice thing about that is, prior to this, what people were doing was making copies. They'd go to the data engineering team and they'd say hey, "I need to ETL this and get a copy, and it'll be anonymized." Think about that for a second. One, the load on your production systems, of all these copies, all the time, right? The second is, for the CISO, the surface area. Now you've got all this data that, in a snapshot in time, is legal and ethical, and that might change tomorrow. And so, now you've got an increased surface area of risk. Hence that no-copy aspect. So the pushing it down and then the no-copy aspect really changed the game for enterprises. >> And you've got provenance issues, like you say. You've got governance and compliance. >> And imagine trying, if someone said to you, imagine Congress said hey, "Any data source that you've processed over the past five years, I want to know if there were these three people in any of these data sources, and if there were, who touched that data and why did they touch it?" >> Yeah, and storage is cheap, but there's unintended consequences. Storage is cheap; managing it isn't. >> We just don't have a unified way to look at all of the logs, cross-listed. >> So we started to talk about cloud and then I took you down a different path. But you offer your software on any cloud, is that right? >> Yeah, so right now, we are in production on the AWS Marketplace. And that is a managed service, so you can go deploy it there, it'll go into your VPC, and we can manage the updates for you. We have no insight into your infrastructure, but we can push those updates, it'll automatically update, so you're getting our quarterly releases; we release every season. But yeah, we started with AWS, and then we will grow out. We see cloud as just too ubiquitous. Currently, we still support, though, BigQuery, Dataproc; we support Azure Data Lake Storage version two, as well as Azure Databricks. But you can get us through the AWS Marketplace. We're also investing in re:Invent; we'll be out there in Vegas in a couple weeks. It's a big event for us just because, obviously, the government has a very big stake in AWS, but also commercial customers. It's been a massive endeavor to move. We've seen lots of infrastructure.
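Pulling those threads together, here is a rough sketch of the query-time push-down idea: user attributes from an identity system plus a declared purpose drive a rewrite of the query, so masking happens in the engine where the data lives, and conflicting policies on combined data sets merge to the most restrictive result, as described earlier. The policy table, attribute names, and the Spark-style sha2() masking function are assumptions for illustration, not Immuta's real interface.

```python
# Illustrative only: push the policy to the data instead of ETL-ing an
# anonymized copy out. Column policies list the purposes allowed to see
# the column unmasked; every name here is invented.

POLICIES = {
    "patient_name": {"treatment"},
    "diagnosis":    {"treatment", "clinical_research"},
    "zip_code":     {"treatment", "clinical_research", "analytics"},
}

def merge(a: set, b: set) -> set:
    """Combining data sets with conflicting policies: default to the
    most restrictive set (the 'lowest common denominator' above)."""
    return a & b  # a purpose survives only if both policies allow it

def rewrite(table: str, columns: list, user_attrs: set, purpose: str) -> str:
    """Return SQL that enforces the policy inside the warehouse itself."""
    exprs = []
    for col in columns:
        allowed = POLICIES.get(col)
        if allowed is None or (purpose in allowed and "approved" in user_attrs):
            exprs.append(col)  # ungoverned column, or policy satisfied
        else:
            # Mask in the engine; sha2() stands in for whatever masking
            # function the target platform actually provides.
            exprs.append(f"sha2({col}, 256) AS {col}")
    return f"SELECT {', '.join(exprs)} FROM {table}"

print(rewrite("ehr.visits", ["patient_name", "diagnosis", "zip_code"],
              user_attrs={"approved"}, purpose="analytics"))
# SELECT sha2(patient_name, 256) AS patient_name,
#        sha2(diagnosis, 256) AS diagnosis, zip_code FROM ehr.visits

print(sorted(merge(POLICIES["patient_name"], POLICIES["diagnosis"])))
# ['treatment']  <- the most restrictive combination wins
```

Because the rewritten query runs where the data already lives, there is no copy to govern, and a policy that changes tomorrow takes effect without re-extracting anything.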
Most of our deals now are on cloud infrastructure. >> Great, so tell us about the company. You've raised, I think in a Series B, about 28 million to date. Maybe you could give us the head count, and whatever you can share about momentum, maybe customer examples. >> Yeah, so we've raised 32 million to date. >> 32 million. >> From some great investors. The company's about 70 people now. So not too big, but not small anymore. Just this year, at this point, I haven't closed my fiscal year, so I don't want to give too much, but we've doubled our ARR and we've tripled our logo count this year alone, and we've still got one more quarter here. We just started our fourth quarter. And some customer cases: the way I think about our business is I love healthcare, I love government, I love finance. To give you some examples, Cognoa is a really great example. Cognoa, what they're trying to solve is: can they predict where a child is on the autism spectrum? And they're trying to use machine learning to be able to narrow these children down so that they can see patterns as to how a provider, a therapist, is helping these families give these kids the skills to operate in the real world. And so it's like this symbiotic relationship utilizing software, surveys, and video and what not, to help connect these kids that are in similar areas of the spectrum, to help say hey, this is a successful treatment, right? The problem with that is we need lots of training data. And this is children, one, and two, this is healthcare, and so, how do you guarantee HIPAA compliance? How do you get through FDA trials, through third-party, blind testing? And still continue to validate and retrain your models, while protecting the identity of these children? So we provide a platform where we can anonymize all the data for them; we can guarantee that there's blind studies, where the company doesn't have access to certain subsets of the data. We can also then connect providers to gain access to the HIPAA data as needed. We can automate the whole thing for them. And they're a startup too, they're 100 people. But imagine if you were a startup in this health-tech industry and you had to invest in the backend infrastructure to handle all of that. It's too expensive. What we're unlocking for them, I mean yes, it's great that they're HIPAA compliant and all that, that's what we want, right? But the more important thing is, we're providing a value add to innovate in areas utilizing machine learning that regulations would've stymied, right? We're allowing startups in that ecosystem to really push us forward and help those families. >> Cause HIPAA compliance is table stakes, compulsory. But now you're talking about enabling new business models. >> Yeah, yeah exactly. >> How did you get into all this? You're CEO, you're business savvy, but it sounds like you're pretty technical as well. What's your background? >> Yeah I mean, so I worked in the intelligence community before this. And most of my focus was on how do we take data and be able to leverage it, either for counter-terrorism missions, to different non-kinetic operations. And so, where I kind of grew up is in this age of, think about, billions of dollars in Baghdad. Where I learned is that through the computing infrastructure there, everything changed. 2006 Baghdad created this boom of technology. We had drones, right? We had all these devices on our trucks that were collecting information in real time and telling us things.
And then we started building computing infrastructure, and it birthed Hadoop. So, I kind of grew up in this era of Big Data. We were collecting it all, we had no idea what to do with it. We had nowhere to process it. And so, I kind of saw, like, there's a problem here. If we can find the unique little, you know, nuggets of information out of that, we can make some really smart decisions and save lives. So once I left that community, I kind of dedicated myself to that. The birth of this company, again, was spun out of the US Intelligence community, and it was really a simple problem. It was, they had a bunch of data scientists that couldn't access data fast enough. So they couldn't solve problems at the speed they needed to. It took four to six months to get to data; the mission said they needed it in less than 72 hours. So they were orthogonal to one another, and so it was very clear we had to solve that problem fast. So that weird world of very secure, really sensitive, but also the success that we saw of using data. It was so obvious that we needed to democratize access to data, but we needed to do it securely and we needed to be able to prove it. We work with more lawyers in the intelligence community than you could ever imagine, so the goal was always, how do we make a lawyer happy? If you figure that problem out, you have some success, and I think we've done it. >> Well that's awesome in applying that example to the commercial business world. Scott McNealy's famous for saying there is no privacy on the internet, get over it. Well guess what, people aren't going to get over it. It's the individuals that are much more concerned with it after the whole Facebook and fake news debacle. And as well, organizations putting data in the cloud. They need to govern their data, they need that privacy. So Matt, thanks very much for sharing with us your perspectives on the market, and the best of luck with Immuta. >> Thanks so much, I appreciate it. Thanks for having me out. >> All right, you're welcome. All right, and thank you everybody for watching this Cube Conversation. This is Dave Vellante, we'll see ya next time. (digital music)

Published Date : Nov 7 2019


Chris Smith, Ticketmaster | ESCAPE/19


 

(upbeat techno music) >> Narrator: From New York, it's theCUBE, covering Escape/19. >> Okay, welcome back to theCUBE coverage here in New York City for the first inaugural Multi-Cloud Conference called Escape/2019, a gathering of industry thought leaders, experts, entrepreneurs, engineers, really having substantive conversations around what multi-cloud is, what it's going to look like, and what some of the technical and business opportunities around that are. A really small, intimate conference. Again, a first inaugural conference. I'm here with my next guest to talk about that, Chris Smith, Vice President of Engineering for Data Science at Ticketmaster. Chris, thanks for coming on. >> Thank you very much, John. >> Appreciate taking the time. >> Glad to talk to you. >> Practitioner out there, you know, we all got scar tissue. >> Yes we do. >> If you don't have scar tissue, if you're not breaking things and then learning from it, then you're not advancing. But sometimes you don't want to step too far forward, right? >> Yep, yep. >> Can you get it back? It's like, you know. So you guys have a great experience. Legacy business, I remember buying tickets when I was going to concerts back in the day when I was in, you know, in college. >> Yep. >> Buy it at Ticketmaster. >> That's right, that was Ticketmaster then, Ticketmaster now. >> Now there's a lot of online provisioning, all direct to consumer. So you guys are on a journey, tell the story. >> Well certainly, the company Ticketmaster has had an incredibly long journey. Starting back, our first concert was Electric Light Orchestra, which kind of, like, puts that in context. >> (laughs) I was in eighth grade, '79. >> Yeah, yeah, that was back at ASU. And even then we were a very innovative technology company; we were making ticketing platforms that performed better, got more capacity out of the hardware than anybody else could do, or anything close to that. We really pioneered the idea of what was at the time called the electronic ticket. Which was the idea that, you know, you could go to any store that was selling tickets for an event and the same inventory would be available at each store, instead of the old model of a bunch of tickets getting sent out to each place. >> That was bad-ass back in the day. >> That was really cutting edge, and we've been evolving ever since then for 40 years. We were also very early onto the web scene. We were selling tickets online before anybody else was, and before most people were selling anything online, really, to a degree. So we've been pioneers in a lot of areas. We see ourselves as the technology partner for the live events business. That's really what we are. And as a consequence, we're always sitting on that edge, right? Trying to innovate and move to new opportunities, but at the same time trying to provide that quality of experience at scale. >> Yeah. >> That is so critical to the business. >> And it's a big business, so it's not like it's your nimble startup, but you got to be agile. What are the learnings? Take us through the cloud learnings as you guys pioneered and started to go into that pioneering mode, which was okay, you don't have to be a rocket scientist to figure out what a cloud's going to do. So you guys probably said hey, we got to go look at this, let's go pioneer our impact. Take us through that: what happened? >> Yeah absolutely, and I think there's two interesting contexts that started that conversation, right?
One was, we're one of the few online businesses that launches a denial of service attack against itself on a regular basis, basically every day, right? And so we have traffic patterns that are unusual even for a typical e-commerce site, where we might see loads that are a hundred x, you know, at the beginning of a Taylor Swift on-sale. There's going to be traffic like no one's business. And then when all her tickets are sold, there's not going to be nearly as much traffic, right? And so that is the nature of our business, and cloud is very attractive for its elastic capacity. When we were running on-prem, we had to provide all that capacity all the time, just to have it for that one peak moment that might literally be the highest traffic level we see all year, right? So that drew a lot of the interest in looking at the cloud in the first place. And then the other aspect was, you know, we'd been running on-prem for nearly 40 years at the time, and there was a lot of technical debt that had accumulated in the system at that point. And so, there was an interest in maybe potentially being able to leverage cloud vendors' infrastructure, and migrate systems onto that, and then sort of declare bankruptcy on some of that technical debt rather than trying to pay it off. And so those were the two thoughts that were driving that conversation. I think we got really excited by the possibility, and we committed really heavily to the idea of a strategy of just moving aggressively into the cloud as fast as we possibly could. And we knew that in the process we would be breaking some things, we'd be, you know, discovering some challenges, et cetera, and that's definitely what happened, right? >> (laughs) What was the big learning? >> I think the biggest learning was that, you know, we had been developing systems for decades literally, with our on-prem environment, and so the systems were actually very well tuned for that on-prem environment, and that on-prem environment was very well tuned for them. >> Yeah, yeah exactly. >> And the cloud's use-- >> On all levels, hardware, software. >> Yeah, all the way through, cause it's a fully integrated, vertically integrated solution. We build a lot of this stuff custom ourselves. >> John: Yeah, and we would decompose all that. >> And so it was very difficult to migrate some parts of that to the cloud, and more importantly, we're pretty smart guys, we can figure out how to move stuff into the cloud. But then to do it in a cost-effective manner required, in a lot of cases, really dramatically changing the design and architecture, even of the software, at a pretty fundamental level, and you just can't do that overnight. And so ironically, you know, the technical debt that we had in our infrastructure didn't seem quite so huge once you started thinking about the technical debt of the entire stack, right? And so then we realized that we could be much more strategic about how we went after our cloud strategy, and that's kind of where we are now. Where we are being smart about it: there's a lot of new products that are being developed that, you know, we can build from the get-go with the idea of them being designed for the cloud. >> Cloud native. >> Exactly, so we have a lot of stuff like that that's just being built. In fact, when you go to visit our website as a consumer, the bulk of that is running in the cloud right now. But there are some really critical systems that are core to that experience that are still running on-prem.
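The elasticity point at the top of that answer is easy to see with back-of-the-envelope arithmetic. The numbers below are invented for illustration; only the shape of the comparison matters.

```python
# Toy comparison: owning peak capacity on-prem all year vs. paying for
# elastic capacity only during spikes. All numbers are made up.

baseline_units = 10        # capacity needed on a normal day
peak_multiplier = 100      # a "Taylor Swift on-sale" spike: ~100x traffic
peak_hours = 20            # spikes are rare and short (hours per year)
hours_per_year = 24 * 365

# On-prem: peak capacity must exist year-round, even while idle.
onprem = baseline_units * peak_multiplier * hours_per_year

# Elastic: baseline all year, plus the extra capacity only when needed.
elastic = (baseline_units * hours_per_year
           + baseline_units * (peak_multiplier - 1) * peak_hours)

print(f"on-prem: {onprem:,} unit-hours")   # 8,760,000
print(f"elastic: {elastic:,} unit-hours")  # 107,400
print(f"ratio:   {onprem / elastic:.0f}x") # ~82x
```

Unit prices for owned hardware and cloud differ, of course, which is exactly why the cost-effective redesign work he goes on to describe mattered so much.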
>> So you guys had to essentially re-architect the operating environment to take into account hybrid operating. >> Yes. >> Decoupling the critical systems that can't be tampered with, maybe put some containers on Kubernetes, move some services around. But for the most part treat Cloud Native as Cloud Native, Greenfield apps, and nurture-- >> Yeah, but there's also refactoring opportunities. So there's a lot of opportunities where you need to go in and change the product anyway, and that can be an opportunity to make things a lot more cloud friendly and better take advantage of the capabilities that the cloud has, so it's actually a mix of both. >> Give an example of a good opportunity to refactor, cause this comes up a lot in my CUBE interviews. Like okay, cause it's all opportunity, opportunistic, but what are the characteristics of a great refactoring opportunity, the tune-up? >> So a lot of times when you want to refactor, really what you want to do is take a set of capabilities that you may have in a much larger system, and pull 'em out and manipulate them and play around with them and do things differently. So, our ticket purchasing process: we're constantly looking at tweaking the process. Now the core pieces of it remain the same, right? But we might want to change the experience and provide something more innovative that's different from what people used to do. And so one of the areas we're working on for this, as an example, is reserve-less checkout, where you just buy the ticket without ever actually reserving the seat. That's a very small, minor change in the flow, but to make that really work you have to pull out the pieces of the system anyway, right? And grab, say, I want these four pieces to rearrange differently, so that's a great refactoring opportunity. What we actually did is we've made those pieces into lambdas that are sitting in AWS; they're basically not running most of the time, which is great. >> Yeah. (laughs) >> Really cheap when it's not running, right? >> Yeah, exactly. >> Very efficient. But then when we need them they run very efficiently, and more importantly, we can now manipulate the order of operations for this stuff. So breaking things out into those composable parts, whenever you know you need to do that anyway, it's a great opportunity to change it. >> So great for workflow refactoring there. >> Absolutely. >> Final question for you, I know we got to break for lunch, but I really appreciate you coming and sharing your insight. >> Absolutely. >> As a pioneer in data science and data: machine learning certainly is the engine of AI, and the math and cognition are kind of coming into it. Learning machines, deep learning, bla bla bla. What's your, in your opinion, what are some pioneering areas that are ripe grounds to dig into in data science and data? When you think about cloud scale, hybrid, and just, in general, what are the ripe opportunities for people to pioneer in data? What's the next frontier in your mind? >> So I think the trend right now, that's maybe not the frontier, but it's where the main shift is, is moving into what I would call real-time learning, right? Where you're doing reinforcement learning, or online learning of some form. Where, literally, the data's arriving in real time, transforming your model in real time, learning in real time. That's key to our strategy, and it's very, very common.
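A minimal sketch of that real-time learning pattern: the model updates on each arriving example instead of being retrained in batch. This is a generic stochastic-gradient linear model, not Ticketmaster's actual system; the simulated features and targets are made up.

```python
# Online learning: one SGD step per arriving event, no batch retrain.

class OnlineLinearModel:
    def __init__(self, n_features: int, lr: float = 0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def learn_one(self, x, y):
        """Single gradient step on squared error, called per event."""
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Simulated stream: features might be, say, (queue depth, minutes to
# on-sale); the target could be observed load. Values are invented.
model = OnlineLinearModel(n_features=2)
stream = [((1.0, 0.5), 2.0), ((0.2, 1.0), 1.1), ((0.9, 0.1), 1.8)]
for x, y in stream:
    model.learn_one(x, y)  # the model is transformed in real time
print([round(wi, 3) for wi in model.w], round(model.b, 3))
```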
But I think in terms of where the frontiers are, it's actually kind of everywhere, in the sense that the name of the game is the cost of doing that work is getting lower and lower. You know, data's getting cheaper, compute's getting cheaper, and also the products for doing it are getting more productized, so you need less expertise and you can deploy them more quickly. So what you want to look at is businesses that have traditionally been too low-margin, right? To apply machine learning to, but have large scale, right? Which is, like, the commodity space, everything that's commoditized, right? Now there's an opportunity to, the costs have gone so low-- >> To squeeze insight out of those areas. >> That you can now optimize that small margin and get value from it, where, you know, otherwise, like 10 years ago, it would have been so costly to build a machine learning infrastructure for it, you would've lost more money than you would've gained. >> So you could, what you're saying is, these areas that were not attractive because of cost in the past, that have large scale, there's penetration opportunities to create value and insight that could-- >> Absolutely. >> Bring in new franchises and new capabilities. >> And that's why I think, you know, the Andreessen "software's eating the world" thing, that's what that's really about: as those costs get lower, as the ability to deploy gets easier, suddenly businesses that before didn't make any sense to invest in this way totally make sense, and in fact there's huge opportunities to completely transform the landscape by getting in. >> Chris, you're a man of our world, we love you, thank you for coming on theCUBE. >> Thank you so much. >> That's great insight. >> Look at this, we're getting insider insight on the future of data, and I believe everything that he just said is totally relevant. You're an entrepreneur out there, you can attack big markets and get in there with a position with great IP, great intellectual property; again, this is the modern world of computer science. >> It is. >> Don't ya think? >> It absolutely is. >> This is the benefit of scale and cloud. >> Absolutely. >> I wish I was 20 something years old again. (laughs) We've been through the wringer. >> Yes. >> Chris, thanks for coming on. Cube coverage here in New York for the first inaugural conference, Escape/2019. I'm John Furrier here, thanks for watching. (upbeat techno music)

Published Date : Oct 19 2019


Mariesa Coughanour, Cognizant & Clemmie Malley, NextEra Energy | UiPath FORWARD III 2019


 

(upbeat music) >> Live, from Las Vegas, it's theCUBE, covering UiPath Forward Americas 2019. Brought to you by UiPath. >> Welcome back to Las Vegas, everybody. You're watching theCUBE, the leader in live tech coverage. We go out to the events and we extract the signal from the noise. This is day two of UiPath Forward III, the third North American conference that UiPath-- The rocket ship that is UiPath. Clemmie Malley is here. She's the Enterprise RPA Center of Excellence Lead at NextEra Energy. Welcome. Great to have you. And Mariesa Coughanour, who is the Managing Principal of Intelligent Automation and Technology at Cognizant. Nice to see you guys. >> Nice seeing you. >> Nice to see you. >> Thanks for coming on. How's the show going for you? >> It's been great so far. >> Yes. >> It's been awesome. >> Have you been to multiple... >> This is my third. >> Yep. >> Really? Okay, great. How does this compare? >> It has changed significantly in three years, so. It was very small in New York in 2017, and even last year it grew, but now it's a two-day event taking over. >> Yeah, last year Miami was-- >> I don't know. >> It was nice. >> Definitely smaller than this, but it was happening. Kind of a hip vibe. We're here in Vegas, everybody loves to be in Vegas. theCUBE comes to Vegas a lot. So tell me more about your role at NextEra Energy. But let's start with the company. You guys are multi-billion, many, many, tens of billions, probably close to a $20 billion energy firm. Really dynamic industry. >> Yeah, so NextEra Energy is actually an awesome company, right? So we're the world's largest in clean, renewable energy, so with wind and solar, really, and we also have Florida Power and Light, which is one of the child companies of NextEra as the parent, which is headquartered out of Florida. So it's essentially the regulated side of power in the state of Florida. >> We know those guys. We've actually done some work with Florida Power and Light. Cool people down there. And we heard, in one of the keynotes today, Craig Le Clair was saying, "Yeah, the Center of Excellence, that's actually maybe asking too much." But there are a lot of folks here that are sort of involved in a COE, and that's kind of your role. But I was surprised to hear him say that. I don't know if you were in the keynote this morning, but was it a challenge to get a Center of Excellence? What is that all about? >> So I think there's a little bit of caution around doing it initially. People are very aggressive. And we actually learned from this story. So when we started, it was more about showing value, building as many automations as possible. We didn't really care about having a COE. The COE just happened to form. >> Okay. >> Because we found out we needed some level of governance and control around what we were doing. But now that I look back on it, it's really instrumental to making sure we have the success. So whether you do a hybrid model of automation development, where you can have citizen development, or you're fully centralized, I think having the strong COE to have that core governance model and control and process is important. >> Mariesa, so your title is not, there's not RPA in your title, right? RPA is too narrow, right? >> Yeah. >> In your business you're trying to help transform companies; it's all about automation. But maybe explain a little bit about your practice and your role. >> Sure, so Cognizant's been on the automation journey now for three years.
We started back in 2014, and right out of the gate it was all about intelligent automation, not just RPA, because we knew that to be able to do end-to-end solutions you would need multiple technologies to really get the job done and get the outcomes they wanted. So we sit now, over 2,500 folks at our practice, going out, working cross-industry, cross-region, to be able to work with people like Clemmie to put in their program. And we've even added some stuff recently, a lot of it actually inspired by NextEra. And we have an advisory team now, and our whole job is to go in and help people unstick their programs, for lack of a better way to say it. Help them think about, how do you put in that foundation? Get a little bit stronger and actually enable scale, and put in all this technology to get outcomes? Versus just focusing on the pure-play RPA, which a lot of people struggle to gain the benefits from. >> So Clemmie, what leads you to the decision to bring in an outside firm like Cognizant? What's that discussion like internally? >> So, I'll just give you a little bit of backstory, because I think that's interesting, as well. When we started playing with RPA in late 2016, early 2017, we knew that we wanted to do a lot of things in-house, but in order to have a flex model and really develop automations across the company, we needed to have a partner. And we wanted them to focus more on delivery, so developing, and then partner with us to give us some best practices, things that we could do better. When we founded the COE we knew what we wanted to do. So we actually had two other partners before we went with Cognizant, and that was a huge challenge for us. We found we were reworking a lot of the code that they gave us. They weren't there to be our partners. They wanted to come and actually do the work for us, instead of enabling us to be successful. And we actually said, "We don't want a partner." And then Cognizant came in and they actually were like, "Let's give you somebody." So we wanted somebody around delivery, because we said, "Okay, now that we're centralized, we have a good foundation, a good model, we're going to need to focus on scale. So how do we do that? We need a flex model." So Cognizant came in and they said, "Well, we're going to offer you a delivery lead to help focus on making sure you get the automations out the door." Well, Mariesa actually showed up, which was one of the best hidden surprises that we received. And she really just came in, learned the company, learned our culture, and was able to say, "Okay, here's some guidance. What can you instill? What can you bring?" Tracking, and starting to capture the outcomes that she's mentioned. And I know that was a little bit more, but it's been quite a journey. >> No, it's really good, back up. So Mariesa, I'm hearing from Clemmie that you were willing to teach these guys how to fish, as opposed to just perpetual, hourly, daily rate billing. >> Yep. And that's really what our belief is. We can go in, and yes, we can augment, from a resourcing perspective, help them deliver, develop, support everything, which we do. And we work with Clemmie and others to do that. But what's really important to get to scale was, how do we teach them how to go do this? Because if you're going to really embed this type of automation culture and mindset, you have to teach people how to do it. It's not about just leaning on me. I needed to help Clemmie. I need to help her team, and also their leadership and their employees.
On how do you identify opportunities, and how then do you make these things actually work and run? >> So you really understand the organization. Clemmie was saying you learned the culture. >> Yeah. >> So you're not just a salesperson going in and hanging out in theCUBE. So you're kind of an extension, really, of the staff. So, either of you, if you can explain to me sort of, where RPA fits into this broader vision. That would really be helpful. >> Sure, so maybe I can kick a little bit off from what I'm seeing from clients like Clemmie, and also other customers. So what you'll find is RPA tends to be like this gateway. It's the stepping stone to all things automation. Because folks in the business, they really understand it. It's rule-based, right? It's a game of Simon Says, in some ways, when you first get this going. And then after that, it's enabling the other technology and looking at, "Look, if I want to go end-to-end, "what do I need to get the job done? "What do I need around data intake? "How do I have the right framework "to pick the right OCR tool, "or put analytics on, "or machine learning?" Because there's so much out there today and you need to have the stuff that's right-fit to come in. And so it's really about looking at what's that company strategy? And then looking at this as a tool set. And how to use these tools to go and get the job done. And that's what we were doing a lot with Clemmie and team when we sat down. They have a steering committee that's chaired by their CIO, Chief Accounting Officer, and senior leaders from every business unit across their enterprise. >> So you mentioned scaling. >> Yep. >> We heard today in the predictions segment that we're going to move from snowflake to snowball. And so I would think for scaling it's important to identify reusable components. And so how have you, how has that played out for you? And how's the scaling going? >> Yeah, so that's been one really cool component that we've built out in the COE. So I had my team actually vote on a name and we said, "We want to go after reusable components." They decided to call them Microbots. So it's a cool little term that we coined. >> That's cool. >> And our CIO and CAO actually talk about them frequently. "How are our Microbots? "How many do we have? "What are they doing?" So it's pretty catchy. But what it's really enabled us is to build these reusable snippets of code that are specific to how we perform as a company that we can plug and play and reduce our cycle time. So we've actually reduced our cycle time by over 50%. And reusable components is one of the major key components. >> So how do you share those components? Are they available in some kind of internal marketplace? And how do you train people to actually know what to apply where? >> Right. So because we're centralized, it's a little bit easier, right? We have a stored repository, where they're available. We document them-- >> And it's the COE-- Sorry to interrupt. It's the COE's responsibility, and-- >> Exactly. So the COE has it. We're actually working with Cognizant right now to figure out how can we document those further, right? And UiPath. There's a lot of cool tools that were introduced this week. So I think we're definitely going to be leveraging from them. But the ability to really show what they are, make them available, and we're doing all of that internally right now. Probably a little manual. So it'll be great to have that available. >> So Amazon has this cool concept they call working backwards documents. 
I don't know if you ever heard this. But what they do is they basically write the press release, thinking five years in advance. This is how they started AWS: they actually wrote, "This is what we want," and then they work backwards from there. So my question is around engineering outcomes. Can you engineer outcomes, and is that how you were thinking about this? Or are there just too many unknown parts of the process that you can't predict? >> So I think one of the things that we did was we did think about, "What do we want to achieve with this?" So one of the big programs that Clemmie and the team have is also around accelerate. And their key initiatives to drive, whether it's improved customer experience, or more efficiency in certain processes across the company. And so we looked at that first, and said, "Okay, how do we enable that?" That's a top strategy driven by their CEO. And even when we prioritize all the work, we actually built a model for them, so that it's objective. So any opportunities that come in that align to those key outcomes the company's striving for can be prioritized first to be worked on. I actually also think this is where this is all going. Everyone focuses today on these automation COEs and automation teams, but what you will see, and this is happening at NextEra, and all the places we're starting to see this scale, is you end up with this outcomes management office. This is a core nucleus of a team that is automation; there's IT at the table, there's this lean quality mindset at the table, and they're actually looking at opportunities and saying, "All right, this one's yours. This one's yours, and then I'll pick up from you." And it's driving, then, the right outcomes for the organization, versus just saying, "I have a hammer, I'm going to go find a nail," which sometimes happens. >> Right, oh, for sure. And it may be a fine nail to hit, but it might not be the most strategic-- >> Exactly. >> Or the most valuable. So what are some examples of areas that you're most excited about? Where you've applied automation and have gotten a business outcome that's been successful? >> Yeah, so we are an energy company, and we've had a lot of really awesome brainstorming sessions that we've held with UiPath and Cognizant. And a couple of key ones have come out of it, really around storm season, which is big for us in the state of Florida. And making sure that our critical infrastructure is available: our nursing homes, our hospitals, and so on. So we've actually built automations that help us to ping and make sure that they're available, so that we can stay proactive, right? There's also a cool use case around, really, the intelligent automation space. So our linemen in their trucks are saying, "Hey, we spend a lot of time having to log on the computer, log our tickets, and then we have to turn our computers off, drive to the next site, and we're not able to restore as much power or resolve issues as quickly as possible." So we said, "How can we enable them?" Speech recognition, where they can talk to it, and it can log a ticket for them on their behalf. So it's pretty exciting. >> So that's kind of an interesting example, where RPA, in and of itself, is not going to solve that problem, right, but speech recognition-- >> It's a combination. >> So you got to bring in other technology, so using, what, some NLP capability, or?
>> So you sort of playing around with that in R&D right now? The speech [Mumbles]. >> Yeah. >> Which, as you know, is not perfect, right? >> It is not. >> Talk to us. We know about it all. Because we transcribe every word that's said on theCUBE. And so, there's some good ones and there's some not so good ones. And they're getting better, though. They're getting better. And that's going to be kind of commodity shortly. You really need just good enough, right? I mean, is that true? Or do you need near perfect? >> So I think there's a happy medium. It depends on what you're trying to do. In this case we're logging tickets, so there might be some variability that you can have. But I will say, so NextEra is really focused on energy, but they're also trying to set themselves apart. So they're trying to focus on innovation, as well. So this is a lot of the areas that they're focusing on: the machine learning, and the processing, and we even have chat bots that they're coining and branding internally, so it's pretty exciting. >> So NextEra is, are you entirely new energy? Is that right? No fossil fuels, or? >> So it's all clean energy, yes. Across the enterprise. >> Awesome. How's that going? Obviously you guys are very successful, but, I mean, what's kind of happening in the energy business today? You're sort of seeing a resurgence in oil, right, but? >> Yeah, so I think we had a really good boom. A couple years ago there were a lot of tax credits that we were able to grow that side of our company. And it enabled us to really pivot to be the clean energy that we are. >> I mean, that's key, right? I mean, United States, we want to lead in clean energy. And I'm not sure we are. I mean, like you say, there was tax incentives and credits that sort of drove a lot of innovation, but am I correct? You see countries outside the U.S., really, maybe leaning in harder. I mean, obviously we got NextEra, but. >> I mean, I think there's definitely competition out there. We're focused on trying to be, maybe not the best, but compete with the best. We're also trying to focus on what's next, right? So be proactive, and grow the company in a multitude of ways. Maybe even outside the energy sector, just to make sure that we can compete. But really what we're focused on is the clean renewables, so. >> That's awesome. I mean, as a country we need this, and it's great to have organizations like yours. Mariesa, I'll give you the final word. Kind of, the landscape of automation. What inning are we in? Baseball analogy. Or how far can this thing go? And what's your sort of, as you pull out the binoculars, maybe not the telescope, but the binoculars, where do you see it going? >> I think there's a lot of runway left. So if you look at a lot of the research out there today, I heard today, 10% was quoted by one person. I heard 13% quoted from HFS around where are we at on scale from an RPA perspective? And that's just RPA. >> Yeah. >> So that means there's still so much out there to still go and look at and be able to make an impact. But if you look, there's also a lot of runway on this intelligent automation. And that's where, I think, we have to shift the focus. You're seeing it now, at these conferences. That you're starting to see people talk about, "How do I integrate? "How do I actually think about connecting the dots "to get bigger and broader outcomes for an organization?" 
And I think that's where we're going to shift to: talking about how do we bring together multiple technologies to be able to go and get these end-to-end solutions for customers? And ultimately, as we were talking a little bit about before, be outcome-focused for an organization. Not talking about just, "How do I go do AI? How do I go put a bot in?" But, "I want to choose this outcome for my customer. I need to grow the top line. I'm getting this feedback." Or even internally, "I want to get more efficient so I can deliver." And focus there, and then what we'll do is find the right tools to be able to move all that forward. >> It's interesting. We're out of time, but you think about, it's somewhat surprising when people hear what you just said, Mariesa, because people think, "Wow, we've had all this technology for 50 years. Haven't we automated everything?" Well, Daniel Dines, last night, put forth the premise that all this technology's actually creating inefficiencies and somewhat creating the problem. So technology's kind of got us into the problem. We'll see if technology can get us out. All right? Thanks, you guys, for coming on theCUBE. Appreciate it. >> Thank you. >> Thank you for having us. >> You're welcome. >> Thanks. >> All right, keep it right there, everybody. We'll be right back with our next guest right after this short break. UiPath Forward III from Las Vegas. You're watching theCUBE. (electronic music)

Published Date : Oct 16 2019


Keynote Analysis | PTC Liveworx 2018


 

>> From Boston Massachusetts, it's The Cube! Covering LiveWorx 18. Brought to you by PTC. >> Welcome to Boston everybody. You're watching The Cube, the leader in live tech coverage. And we're here with a special presentation in coverage of the LiveWorx show sponsored by PTC of Needham, soon to be of Boston. My name is Dave Vellante. I'm here with my co-host Stu Miniman. And Stu, this is quite a show. There's 6,000 people here. Jim Heppelmann this morning was up giving the keynote. PTC is a company that kind of hit the doldrums in the early 2000s. A company that, as manufacturing moved offshore, its core business was CAD software for manufacturers, and it went through a pretty dramatic transformation that we're going to be talking about today. Well, fast forward 10 years, 12 years, 15 years on, this company is smokin', the stock's up 50 percent this year. They got a billion dollars plus in revenue. They're growing at 10 to 15 percent a year. They've shifted their software business from a perpetual software license to a recurring revenue model. And they're booming. And we're here at the original site of The Cube, as you remember well in 2010, the Boston Convention Center down at the seaport. And Stu, what are your initial impressions of LiveWorx? >> Yeah, it's great to be here, Dave. Good to be here with you, and they dub this the largest digital transformation conference in the world. (laughing) So, I mean, Dave, you and I have been to much bigger conferences and we've been to a lot of conferences that are talking about digital transformation. But IoT, AI, augmented reality, blockchain, robotics, all of these things really are about software, it's about digital transformation, and a really interesting space, as you mentioned, kind of the legacy of PTC. I have been around long enough. I remember when we used to call them Parametric Technologies. They kind of rebranded themselves as PTC. Windchill brings back some memories for me. When I worked for a high tech manufacturing company, that's the lifecycle management tool that we used back in the early 2000s. So, I had a little bit of background in them. And, as you said, they're based in Needham, and they're moving to the Seaport. Hot area, especially, as we've said Dave, Boston has the opportunity to be the hub of IoT. And it's companies like PTC that are going to help bring those partnerships and lots of companies to an event like this. >> Well PTC has always been an acquisitive company, as you were pointing out to me off camera. They bought Prime Computer, Computervision. A number of acquisitions that they made back in the late 90s, which essentially didn't pan out the way they had hoped. But now again, fast forward to the modern era, Jim Heppelmann came in I think around 2010, acquired ThingWorx, a company called ColdLight, Kepware is another company that they purchased. And took these really sort of independent software components and put them together and created a platform. Everybody talks about platform. We'll be talking about that a lot today with a number of customers and partners of PTC. And we even have some folks from PTC on. But, basically, talking about digital transformation earlier, Stu, IoT is a huge tailwind for a company like PTC. But they had to really deliberately pivot to take advantage of this market. And if you think about it, yes, it's about connecting and instrumenting devices and machines, it's about reaching them, creating whatever wireless connections. But it's also about the data.
We talk about that all the time. And constructing data that goes from edge to core, and even into the cloud, whether that cloud's on prem or in the data center. So you're seeing the transformation of this company. Obviously, I talked about some of the financials. We'll go into some of that. But there's an evolving ecosystem: we heard Accenture's here, Infosys is here, Deloitte is here. As I like to say, the SIs like to eat at the trough. If the SIs are here, that means there's money here, right? >> Yeah Dave, and actually a number that jumped out at me when Microsoft was up on stage, and it wasn't that Microsoft is investing five billion dollars in IoT, the number that caught my ear was the 20 to 25 partners that it takes to deploy a single IoT solution. So, anybody that's been in tech for a long time, when you see these complicated stack solutions, the SIs need to be here. It takes a long time to work through them, and integration is a big challenge. How do I get all of these pieces together? It's not something that you just go buy off the shelf. It's not shrink-wrap software. These are complicated solutions, very fragmented in how we make them up, very specific to the industry that we're building for, so really fascinating stuff that's going on. But we are still very early in the life-cycle of IoT. Huge, huge, huge opportunities, but big players like Microsoft, like Google, like Amazon are going to be here making sure that they're going to simplify that environment over time. Huge, you know Dave, the original forecast I think we did at Wikibon was a 1.2 trillion dollar opportunity, and most of that was actually for the industrial Internet, which is not the consumer things that we think about all the time, when we talk about the home sensors and some of the consumer stuff, but also the industrial here. >> Well, I think a couple of key points that you're making here. First of all, the market is absolutely enormous. It's almost impossible to size. I mean you're talking about a trillion dollars in sort of spending on hardware, software, services, virtually everything. But to your point, Stu, it's highly, highly fragmented, virtually every industry. And a lot of different segmented technologies. But it's also important to point out this is the mashing together of operations technology, OT, with information technology, IT, and for the leading companies, IT is actually leaning in and embracing this notion of edge computing and IoT. Now, I wouldn't even say that IT and OT are Hatfields and McCoys, they're not, but they're parts of the organization that don't talk to each other. So there are cultural differences. They use different languages. They think differently. One is largely engineers who make machines work. The other is the IT guys, and we obviously know what they do: they keep information technology systems running. They deploy a lot of new IT projects. So, really different worlds that have to start coming together. Jim Heppelmann today I thought did a really good job in his keynote. He talked about innovation. Usually you start with okay, we're here at point A, we want to get to point B. And we're going to take a straight line and have a bunch of linear steps and milestones to get there. He pointed out that innovation today is really sort of a non-linear process. And he talked about the combinatorial effects of really three things: machines, or the physical, computers, and humans. Machines are strong, they can do heavy lifting.
Computers are fast, and they can do repetitive tasks very accurately. And humans are creative. And he talked about innovation in this new world coming together by combining those three aspects, finding new ways to attack problems, to solve nature's challenges. And bringing nature into that problem solving. He gave a lot of examples of how mimicking mother nature is now possible with AI and other technologies. Pretty cool. >> Yeah, absolutely Dave. I'm sure we'll be talking a lot today about the fourth Industrial Revolution. A lot of discussion as to what jobs are robots going to take. I look around the show floor here and there's a lot of cool robotics going on. But as Erik Brynjolfsson and Andrew McAfee, the folks from MIT that we've interviewed a couple of times, talked about in the second machine age, really it's the marrying of people and machines that is going to be powerful. And absolutely Jim Heppelmann talked about that a lot. It's humans, it's physical, and it's digital. Putting those together. And then, the other thing that he talked about is we're talking a lot about voice lately with all of these assistants, but you're really limited as to how much input and how fast you can take information in from an auditory standpoint. I mean, I know that I listen to podcasts at 1.5 to 2X to try to get more information in faster, but it is sight that we're going to get 80 percent of the information in, and therefore it's the VR and AR that are huge opportunities. I know when I've been talking to some of the large manufacturers, what they used to have in written documentation and then went digital with, they're now getting you inside to be able to configure the systems with the HoloLens, or some of the AR headsets, the VR headsets, to be able to play with that. So, we're really early but excited to see where this technology has come so far. >> Yeah, we're seeing a lot of practical applications of VR and AR. We go to a lot of these shows and they'll have the demos, and you go, okay, what will I do with this? Well, you're really seeing here at LiveWorx some of the things you actually can do. One good example I thought they did was BAE Systems up in Nashua, actually showing the folks that are doing the manufacturing a little tutorial in how to do that. We're going to see some surgical examples today. Remote surgery. There are thousands, literally thousands of examples. In the time we have remaining, I want to just do the rundown on PTC. 'Cause it really is quite an amazing transformation story. You're talking about a company with 1.1 billion dollars in revenue. Their aspiration is by 2021 to be a two billion dollar company. They're growing at ten percent a year, their software business has grown at 12 to 15 percent a year, and 15 percent is that annual recurring revenue. So this is an example of a company that has successfully shifted from that perpetual model to that recurring model. They got 200 million dollars this year in free cash flow. Their stock, as I said, is up 50 percent this year. They got 350 million dollars in cash, but they just got a billion dollar investment from Rockwell Automation that took about 8.4 percent of the company, giving them an implied valuation of almost 11 billion dollars, which has got a little uplift from the stock market there. They're selling a lot of seven figure deals. Really, the core is manufacturing product life-cycle management, CAD. That's the stuff that we know PTC well from. And I talked about some of those acquisitions that they made.
They sell products like Creo, which is their 3D CAD software. I think they're on rev five or six by now. So they've taken their sort of legacy software and updated that for the digital world. >> Yep, it is version five that was just announced today. Talking about really the 3D effort they're doing there. Some partnerships around it, and like every other software company, Dave, that we've been hearing about, AI is getting infused in here, because with so many devices and so much data, we really need the machines to help us process that and do things that humans can't keep up with. >> And the ecosystem's grown. This is a complicated marketplace. If you look at the Gartner Magic Quadrant, there is no vendor in the leaders quadrant, even though PTC is the best positioned. They're all sort of in the lower right; PTC is up highest. GE interestingly is not in there, because it doesn't have an on prem solution. I don't know why GE doesn't have an on prem solution. And I don't know why they're not in there. >> Is there another version of the Magic Quadrant that includes the Amazons and GEs of the world? >> I don't know. So that's kind of interesting. We'll try to unpack that as we go on here. PTC announced today a relationship with a company called Ansys, which does simulation software. Normally, simulation comes sort of after the design. They're bringing those two worlds together, the CAD design piece and the simulation piece, sort of closer to real time. So, there's a lot of stuff going on. As you said, it's data, analytics, edge computing. It's cloud, it's on prem, it's blockchain for security. We haven't talked about security. A lot bigger threat matrix, so blockchain comes into play. >> Yeah, Dave. I saw a great joke. Do you realize that the S in IoT stands for security? Did you know that? (laughing) Oh wait, there's no S in IoT. Well, that's the point. >> All right, good. So Stu and I will be here all day today. This is actually a three day conference. The Cube will only be here for day one. Keep right there everybody. And we'll be right back. You're watching The Cube, live from LiveWorx in Boston. (upbeat music)

Published Date : Jun 18 2018

SUMMARY :

Dave Vellante and Stu Miniman open theCUBE's coverage of PTC LiveWorx 2018 in Boston with a look at PTC's transformation from a CAD and PLM vendor into an IoT platform company: the ThingWorx, ColdLight, and Kepware acquisitions, the shift to a recurring revenue model, the billion dollar investment from Rockwell Automation, and the newly announced Ansys partnership. They also dig into the convergence of IT and OT, themes from Jim Heppelmann's keynote on combining machines, computers, and humans, and the practical momentum behind AR and VR in industrial settings.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim Heppelmann | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Eric Manou | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Aaron McAfee | PERSON | 0.99+
Rockwell Automation | ORGANIZATION | 0.99+
20 | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
80 percent | QUANTITY | 0.99+
Stu | PERSON | 0.99+
10 years | QUANTITY | 0.99+
12 years | QUANTITY | 0.99+
350 million dollars | QUANTITY | 0.99+
Cold Light | ORGANIZATION | 0.99+
Ansys | ORGANIZATION | 0.99+
1.1 billion dollars | QUANTITY | 0.99+
15 years | QUANTITY | 0.99+
15 percent | QUANTITY | 0.99+
Needham | LOCATION | 0.99+
Infosys | ORGANIZATION | 0.99+
12 | QUANTITY | 0.99+
2010 | DATE | 0.99+
Deloitte | ORGANIZATION | 0.99+
Hatfield | ORGANIZATION | 0.99+
200 million dollars | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
LiveWorx | ORGANIZATION | 0.99+
2021 | DATE | 0.99+
6,000 people | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
Amazons | ORGANIZATION | 0.99+
five billion dollars | QUANTITY | 0.99+
Kept Ware | ORGANIZATION | 0.99+
PTC | ORGANIZATION | 0.99+
two billion dollar | QUANTITY | 0.99+
The Cube | TITLE | 0.99+
1.2 trillion dollar | QUANTITY | 0.99+
GE | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
Accenture | ORGANIZATION | 0.99+
today | DATE | 0.99+
Seaport | LOCATION | 0.99+
BEA Systems | ORGANIZATION | 0.99+
early 2000s | DATE | 0.99+
about 8.4 percent | QUANTITY | 0.99+
GEs | ORGANIZATION | 0.99+
three day | QUANTITY | 0.99+
1.5 | QUANTITY | 0.99+
25 partners | QUANTITY | 0.98+
this year | DATE | 0.98+
three aspects | QUANTITY | 0.98+
late 90s | DATE | 0.98+
Nashua | LOCATION | 0.98+
50 percent | QUANTITY | 0.98+
two worlds | QUANTITY | 0.97+
Parametric Technologies | ORGANIZATION | 0.97+
almost 11 billion dollars | QUANTITY | 0.97+
2 X | QUANTITY | 0.97+
Boston Massachusetts | LOCATION | 0.97+
Gartner | ORGANIZATION | 0.97+
First | QUANTITY | 0.96+
Boston Convention Center | LOCATION | 0.96+

Crystal Rose, Sensay | Coin Agenda Caribbean 2018


 

>> Narrator: Live from San Juan, Puerto Rico, it's theCube, covering CoinAgenda, brought to you by SiliconANGLE. (salsa music) >> Hello everyone, welcome to our special CUBE exclusive coverage in Puerto Rico. I've been here on the island all week, talking to the most important people: entrepreneurs, citizens of Puerto Rico, students, people connected with blockchain, investors, thought leaders, and the pioneers. I'm John Furrier, the cohost of theCUBE, co-founder of SiliconANGLE Media, and we're here with Crystal Rose, who is the CEO and co-founder of Sensay, doing something really cutting edge, really relevant, and kind of ahead of its time, but I think it's time to get it out there and get that token program going. Crystal Rose, thanks for joining me and spending time with me. >> Thank you for having me. >> So one of the things I think that you're doing, and I want you to explain this because it's nuanced, and a lot of the super geeks get it and alpha geeks will get it, but the mainstream people are used to dealing in their silos. I use Facebook, I use LinkedIn, I use Twitter, I use chat, I use Telegram, I use these apps. The world's kind of horizontally being disrupted because of the network effect that blockchain and crypto are now the underpinnings of, and there's ICOs out there and other things happening, but it's a disruption at the technology stack with software. You guys are doing something with Sensay and the SENSE token that is changing the equation of how people come together, how people grow and learn, whether it's a nonlinear path to some proficiency or connecting with folks or just learning, whatever it is, it's a discovery mechanism. Take a minute to explain what you guys are doing and why it's so important. >> Well we built Sensay to connect everyone together without any borders or intermediaries, and so really it's as simple as every phone has the capability to have a messenger. We have five billion phones that have SMS on them, and so we wanted to take the most basic messaging system, which is the most important thing that people do, and connect it to any other messenger, so Facebook Messenger, Telegram, Slack, anywhere where people are chatting, we wanted to create a system that is interoperable and can decentralize your contact list, essentially. >> Yeah, so this is important, so like most people, when they go to social networks you got to find a friend, you get connected. In some cases I don't want to have to friend someone just to have a chat, I mean I may not want to friend them, or I might want to, or it's a hassle, I don't know who to friend. Is that kind of where you guys come in? >> Yeah, that's one really great use case, because things like Facebook max you out at five thousand friends, so if you friended everybody that you had a conversation with whenever you needed to know something, let's say every Google search that you did was actually a conversation, you would cap the number of potential contacts. We have a circle of people around us that extends out with different tiers. But I think some of the most important people in our lives are actually strangers. So instead of building the social graph we wanted to build the stranger graph. Sensay cares more about what you know than who you know. Because if we can connect people together around similar interests and like-mindedness, we're connecting tribes, and that's really the innate human connection that we're all looking for.
And it's also, when you extend yourself outside of your social graph, you're most likely to educate yourself or to uplift yourself more. So the way to level up is to get somebody who's an eight or a 10 if you're a five or a two, and find someone outside of your current circle. >> And that also eliminates all this groupthink we've seen on some of these hate threads, whether it's Facebook or some IRC backchannel or Slack channel; you see the hate just comes in because everyone's just talking to themselves. This is the new way, right? Connecting out? Through the metadata of the chat. >> Exactly, we want people to seek out good connections, helpful connections, and so if you contribute what you know, you get rewarded. And if you ask people on the network, you also get rewarded. So by asking something, you're receiving a reward. It's a two-way system. So it's not just the person who is helping; we don't really encourage an economy of experts. We think that everyone is a sensei. A sensei literally means a person who's been there before. So we think of that as somebody who has had that life experience. And I think if we look at the internet, the internet democratized expertise. It gave us the ability for every single person to write what they were thinking, or contribute some kind of content in some way. But for 20 years the internet has been free. It's a really beautiful thing for consumption, and open source is the absolute right methodology for software. When it comes to your own content, a reward makes sense, and so we wanted to create SENSE on top of the platform as a value exchange. It was a point system, so kind of like Reddit Karma. And we wanted to let people exchange it out for some value that they could transact in the world. >> So basically you're going to reward folks with a system that says, okay, first ante up some content, that's your SENSE token, and then based upon how you want to work with people in the network, there's a token transaction that could come out of it. Did I get that right? >> Exactly. So the person who contributes on the network gets rewarded for that data, and it can be anything that you've done in the past, too. So if you have a lot of historical data on Facebook or on GitHub, for instance. Let's say you're a developer and you have a bunch of repos out there that could be analyzed to see what kind of developer you are, or if you've contributed a lot to Reddit, all of that data is out there, and it's been something that defines you and your personality and your skills and who you are, so you can leverage that, and you can get a reward for it just by letting Sensay understand more about you, so the AI runs through it. You get more rewards, though, if you have real conversations. So it's almost like a bounty program on conversation. >> So we have the same mission. We love what you're doing. I'm really so glad you're doing it. I want to get to an example in Puerto Rico where you've reached out to strangers, I know you have. And get that, I want to get to that in a minute, but I want to continue on Sensay for a second and the SENSE token. As you guys do this, what is the token going to look like to the user? Because you have a user who's contributing content and data, and then you have people who are going to transact with the token, it could be a bounty, it could be someone trying to connect. The token economics, just so I can get that out there, how does that work?
>> Well right now in Sensay the transaction is peer to peer, so both users who are chatting have the ability to tip each other, essentially. They can give each other some coins within the chat. We have the concept that when you're having a conversation, there's always a buyer and a seller. It's always a merchant and a consumer, and sometimes those roles flip, too. I'll be selling you something and eventually you're selling me something. But it's a natural way that we chat to transact. So that was the first way that the token could be used. We then realized that the powerful part of the platform is actually everything underlying the application. So the layer underneath really was the most powerful thing. And so the SENSE network evolved as a way for developers who are creating apps or bots to be able to build on top of the network and leverage the access to the humans or to their data, and so now the token can be used to access the network. You get paid if you contribute data or users, and vice versa, you can pay to access them. What that's doing is taking away the advertising model, where the platform is the only entity that's earning a profit on the data. So you, the user, when you're giving your data to Facebook, Facebook earns a lot of money on it, selling it over and over repeatedly to advertisers, and while it's technically yours, in the terms you own it, you don't actually have any upside in that profit, and so what we're doing is saying, well, why don't we just let a potential business talk to you directly, with your consent, and give you the money directly for that? So that two or five dollars for one connection would go straight to you. >> This is the new business model. I mean, this is something that, I mean first of all, don't get me started on my ad-tech rant, because advertising creates bad behavior. Okay? You're chasing a business model that's failing, attention and page views, so the content is not optimized the proper way. And you mentioned the Facebook example. Facebook's not optimizing their data for the user experience, they're optimizing for their monetization, which is counter to what users want to do. So I think you kind of are taking it in another direction, which we love 'cause that's what we do, we are open source content, but the role of the data is critical, so I got to ask you the hard question. I'm a user, it's my data, how do the developers get access to it? Do they pay me coins or... You want developers because that's going to be a nice piece of the growth, so what's the relationship between the developer, who's trying to add value, but also respecting the user's data? >> Exactly, so the developer pays the network, and as a user you're a token holder, you own the network, essentially. So there is really no real middle layer, since the token will take a small amount out for continuing to power the network, but a nominal amount. Right now the most expensive thing that happens is the gas that's on top of Ethereum, because we're an ERC20 token. So we're looking to be polychain. We want to move on to other types of blockchains that have better, faster transactions with no fees and be able to pass that through as well. So we really want to just do a peer-to-peer connection. There's no interest in owning that connection or owning the repository of data. That's why the blockchain's important. We want the data to be distributed, we want it to be owned by the user, and we want it to be accessible by anyone that they want to give access to.
So if it's a developer building a bot, maybe, or if it's a brand using a developer on their behalf, they have to pay the user for that data. >> So the developer's incentives are completely aligned with the peer-to-peer architecture that you have, users' interests, and the technical underpinnings of the plumbing. Is that right? >> Exactly. >> Okay, good, so check. Now I got that. All right, now let's talk about my favorite topic, since we're on this kind of data topic. Who's influential? I mean, what does an influencer mean to you? Is it the most followers, (mumbles) it's kind of a canned question, you can hear it coming. I'll just say it. I don't like the influencer model right now because it's all about followers. It's the wrong signal. 'Cause you can have a zillion followers and not be influential. And we know people are buying followers. So there's kind of been that gamification. What should influence really be like in this network? Because sometimes you can be really influential and then discover and go outside your comfort zone into a new area for some reason, whether it's a discovery or progression to some proficiency or connection; you're not an influencer there, you're a newbie. So, context is very important. How do you guys look at, how do you look at influencers and how influence is measured? >> I think at the bare bones an influencer is someone who drives action. So it's a person who can elicit an action in another person. And if you can do that at scale, so one to many, then you have more power as an influencer. So that's sort of the traditional thinking. But I think we're missing something there, which is good action. So an influencer to me, a good influencer, is somebody who can encourage positive action. And so if it's one to one and you get one person to do one positive thing, versus one to a thousand and you get a thousand people to do something not so great, like buy a product that's crap because it was advertised to them for the purpose of that influencer making profit, that metric doesn't add up. So I think we live in a world of vanity metrics, where we have tons of numbers all over the place, we have hearts and likes and stars and followers and all of these things that keep adding up, but they have no real value. And so I think, like you said before, the behavior is being trained in the wrong way. We're encouraged to just get numbers rather than quality, and so what I think a really good influencer is, is somebody who has a small group of people who will always take action. It can be any number of people. But let's say a group of followers who will take action based on that person's movements and will follow them in a positive direction. >> And guess what, it's a network graph so you can actually measure it. That's interesting... >> Exactly, exactly. >> I can see where you're going with this. Okay, so I got to talk about your role here in Puerto Rico. You mentioned earlier about reaching out to strangers, the stranger graph, which is a way of getting people outside of their comfort zones sometimes, reaching out to strangers. You came here in the analog sense, you're in person, but on the digital side as well, it kind of blends together. Give an example where you reached out to strangers and how that's impacted your life and their life, because this is the heart of your system, if I can get that right. You're connecting people and creating value, I mean sometimes there might not be value, but you're creating connections, which have the potential for more value.
What have you done here in Puerto Rico that's been a stranger outreach that turned into a wow moment? >> Our outreach has been so far an invitation. So we bought a space here that's turned into a community center. Even at the very beginning we had no power, as most of the places around here have, sitting idle for a year or two since the hurricane, and so we put a call out and said we'd like to get to know the community. We're doing something called Let There Be Light, which is turn the power on, and you know, we put it out to a public group and saw who would show up. So basically it's a community, central building, it's a historical building, so a lot of people know it. There's a lot of curiosity, so it was just a call, it was a call for help. It was really, I think the biggest thing people love is when you're asking them for help, and then you give gratitude in return for that help and you create a connection around it. So that's why we built Sensay the way that we did, and I think there's a lot of possibilities for how it could be used, but having that encouragement of the community to come and share, we've done that now this whole week, so this is restart week, and one of the other things that we've done is help all of the conferences come together, collaborate rather than compete, so go into the same week, and put all of these satellite groups around it. And then we blanketed a week around it so that we had one place for people to go and look for all of the events, and also for them to understand a movement. So since then we've done a dinner every single night, and it's been an open invitation. It's basically whoever comes in first, and we've had drinks every night as well, open. So it's really been an invitation. It's been an open invitation. >> Well congratulations. I really love what you're doing. You guys are doing great work down here. The event this week has been great. We've got great content. We have some amazing people and it's working, so congratulations on that. As you guys look forward, one of the things I've observed in my many years of history is that there are a lot of waves, I've seen all the waves, and this wave's the biggest. But what jumps out at me is the mission-driven aspect of it. So I mean I can geek out on the decentralization and the stacks and all the tech stuff happening, but what's most impressive is the mission oriented, the impact kind of thinking. Society is now software driven. This is a new major thinking. It used to be philanthropy was a waterfall model. Yeah, donate, it either goes or doesn't go. Go to the next one, go to the next one. Now you have this integrated model where it's not just philanthropy, it's action, there's money behind it, there's coding, there's community. This is now a new era of societal entrepreneurship, societal missions. Let's talk about your vision on this mission and impact culture that's part of this ethos. >> I think impact is the important word there. So we think about, we think about bringing capital, like you said with normal philanthropy, you can bring capital and you can continuously pump capital into something, but if the model is wrong it's just going to drain, and it's going to go to inefficient systems, and in the end maybe do some help, but a very small percentage of the capacity of what it could do. So what we have the concept of is bringing funds here.
We have a fund that was just launched called Restart Ventures, and the idea is instead of compounding interest, we want to make compounding impact, and so it's a social good focused fund, but at the same time all of the proceeds generated from the fund recycle back into other things that are making more impact. So we're measuring based on how much impact can be created with different projects. It could be a charity or it could be an entrepreneur. And if we're getting a multiple, most of that money is going back. So a very small percentage goes to the actual fund and to the fund managers, and the lion's share of the fund is going back into Puerto Rico. So I think if we look at how we can help in a way that is constantly regenerative, sustainable is good, regenerative is better. We want to at least elevate ourselves and get to the point of sustainability, but we're not improving at that point. We're still just fixing problems. We want regenerative. So if we can keep planting things that regrow themselves, if we can make it so that we're setting up the ecosystem to constantly mend itself, it's like a self-healing system of software, this is the right way to do it. So I think that's the new model. >> You built in some nurturing into the algorithm, I like that. 'Cause you're not going to do the classic venture capital carry, you're going to rotate in, but still pay some operators to run it, so they got to get paid. So I noticed in the announcement there was some money for managing directors to do it. So they get paid, and the rest goes into the compounding impact. >> Right. >> Okay, so I got to ask you what your view is these days on something that's really been important in open source software, which again, when I started was a tier 2 citizen, at best, and now it's running the world, tier 1. The open source ethos is sprinkled throughout these new, awesome opportunities, but community made it happen. What is your current view on the role of the community, communities in general, to make this new compounding impact, whether it's software development, innovation, impact giving, regenerative growth? What's your view on community? >> If community operates with a mentality of giving or contribution over consumption, we do a lot better. So when you have an open source network, if a community comes and they contribute to it more, that's something that regenerates. It keeps adding value. But if a community comes and they just keep consuming, then you have to continue to have more and more people giving. I think a really good example of this is Wikipedia. Wikipedia has hundreds of thousands of people who constantly contribute, and the only reward that they've ever gotten for that is a banner ad that says please donate, because we don't do ads. So it's a broken model, because you want it to be free and you want it to continue to have the same ethos and you want it to have no advertising, yet the people who contribute most of the time also contribute most of the funding to keep it alive, because they love it and care about it so much. So how could we change that model so that the community could give contributions while also receiving a way to make sure that they're able to keep doing that? And a reward system works, and maybe that's not the only solution, but we have to think about how we can keep creating more and more. >> Well I think transparency is one thing I've always loved.
The thing that I always hear, especially with women in tech and these new important areas like underserved minorities, and also the bad behavior that goes on in other groups, is to shine the light on things. Having the data be open changes everything. That is a huge thing. So community and open data. Your thoughts? I'm sure you agree? Open data and the importance of having the data exposed. >> One hundred percent. So our platform also has a layer of anonymity on the user by default, and part of the idea is being able to understand whether or not data is good. Because think of human data, we have to figure out quality. In the past there would be a validation system that is actually other humans telling you whether or not you're good and giving you some accreditation, some verification. This is our concept of experts on things. Now we would rather take consensus. So let's just crowdsource this validation and use a consensus mechanism that would see whether or not other humans think the data is good. If we're using a system like that, we have to have open data, it has to be transparent, and it has to be able to be viewed in order to be voted on. So on our platform, in just the first application on Sensay, we expose this consensus mechanism in a feature called Peek. So Peek basically lets you peek inside of conversations happening on the network. You can watch all the conversations that happen, the AI pulls out the good ones, and then you vote on them. >> It's kind of like when you walk into a nightclub, do I want to kind of hang out here? >> Yeah, you're kind of a voyeur, but you get rewarded for doing it. It's a way for us to help classify, it's a way for us to help train the AI, and also it's a way for people to have a passive ability to interact without having to have a conversation with an actual human. >> Well you're exposing the conversation to folks, but also you get signaling data. Who jumps in, who kind of walks away. I mean it's gesture data, but it's a data point. >> Right, and it's completely private. So the beauty of the transparency is there's actually privacy baked in. And that's what I love about blockchain, is it has all of the good things. >> Crystal, I got to ask you a final question. I know you're very busy, and thank you for taking the time to share your thoughts with me today here on theCUBE here in Puerto Rico. This week you've been super busy, you look great. I'm sure you've been up, burning the midnight oil, as they say. What is the, I won't say craziest thing, because I've seen a lot of cool, crazy things going on here, it's been fun, what are some highlights for you? Conversations, meeting new people, can you just share a couple anecdotal highlights from restart week that have moved you or surprised you or just in general might be worth noting? >> I've been overall extremely surprised by the sheer number of people who showed up. I feel like a few months ago there was a small group of us sitting around wondering what it would be like if we could encourage our friends to come here and share the space. So just to see the thousands of people who have come here to support these several conferences has been amazing. My most surprising thing, though, is the amount of people that have told me that they bought a one-way ticket and have no intention of going home.
So to make Puerto Rico your home I think is a really amazing first step, and I just did a panel earlier today with the person in government who had instituted Act 20 and 22, and that was the initial incentive-- >> Just take a minute to explain what that is for the folks that don't know what it is. >> Sure. So Act 20 and 22 are for the company and the individual respectively. They are a way for you to get a tax incentive for moving here as a resident or domiciling your company here. So you get 0% taxes. I think companies range up to 4% or something like that, and that incentive was created to bring more brilliant minds and entrepreneurs and different types of people with different vocations to the island. So basically, give them a tax incentive and encourage the stimulation of the economy. So that has brought this wave of people in who have an idea that no taxes are great. At the same time they fall in love with the island. It's amazing because to me Puerto Rico is a combination of LA's weather, San Francisco's open-mindedness, and Barcelona's deep European history. It's just a really beautiful place. >> And it's US territory, so it's a short hop and a jump to the States if you need to, or Europe. >> Yeah exactly. And no customs, and you just have your driver's license to get here. Also it's the US dollar. And I say that because most people in mainland America don't realize that Puerto Rico is an American territory, and so they sort of think they're going to a foreign country, because it's treated that way by our government. But what I've been really shocked about, though, is the sheer amount of innovation already here. The forward thinking ways of people and the embracing of things like open source and blockchain technology, because their minds are already in a mode of community, a mode of sharing, a mode of giving. >> We interviewed Michael Angelo from Edublock.ido, Edublock, they're connecting all the universities with blockchain. We also interviewed Damaris Rivera, with Puerto Rico Advantage. They'll move you down here. You can press a button, it's an instant move. So folks in Silicon Valley who are watching, who know us, and around the world know theCUBE, there's a group of like-minded people here that have tech chops, there's capital flowing. There's capital, people I know have moved here, setting up shop, as well as the Caymans and everywhere else, but it's nice. So it's kind of like LA. >> There is a lot of capital. I have just witnessed a couple hundred million dollars of funds that were established in the last couple of months. And this is around all different types of technology sectors. You don't have to be a blockchain company. You can be innovating in any way possible. One of my favorite projects is a machine that turns plastic bottles into diesel fuel. So one of the problems here is the generators on the island. When we were here last time we met a guy that was working at a bar in a restaurant, and he was like, "Hey, I saw you guys in the New York Times and I think you're like the Crypto people." And he had a conversation, and he said, "I was wondering if you could help my grandmother who is stuck with no power, and it's been months, and she's in her 90s, and she needs a generator to run a machine that keeps her life supported." And so a couple of people went out to bring more fuel, bring a generator to donate. They started understanding that there are so many areas that still need this level of help, that there's a lot that we can do.
So when I see projects like that, that's something I want to back. >> Yeah, it's entrepreneurial action taking impact. Crystal, thanks so much for coming out. Crystal Rose, CEO and co-founder of Sensay, real innovative company, pioneer here in the Puerto Rico movement. It's a movement: a lot of tech, entrepreneurs, capital, investors, and the pioneers in the blockchain, decentralized internet are all here. This is like the Silicon Valley of Crypto, right? >> I think they're calling it Crypto Island. >> Crypto Island, yes. It sounds like a TV show. We should be on it. It's not Lost, it's Crypto Island. >> Exactly. >> Thanks so much for spending the time on theCUBE. >> Thanks John. >> John: I appreciate it. >> I appreciate it so much. Thanks for making sense of me. >> I'm John Furrier here on theCUBE here in Puerto Rico. Our coverage continues after this short break.
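A note on the mechanics discussed in this interview: the two value flows Crystal describes, peer-to-peer tips inside a chat and a developer paying users directly for data access, can be sketched as a simple ledger. The Python sketch below is illustrative only; the names (SenseLedger, tip, pay_for_access) are hypothetical stand-ins, and the real SENSE token is an ERC20 contract on Ethereum rather than an in-memory Python object.

```python
class SenseLedger:
    """Toy in-memory ledger; a hypothetical stand-in for the on-chain token."""

    def __init__(self, network_fee=0.01):
        self.balances = {}              # account name -> token balance
        self.network_fee = network_fee  # nominal cut that keeps the network running

    def fund(self, account, amount):
        self.balances[account] = self.balances.get(account, 0.0) + amount

    def tip(self, sender, receiver, amount):
        # Peer-to-peer tip inside a conversation: either side can reward the other.
        if self.balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.fund(receiver, amount)

    def pay_for_access(self, developer, users, amount_per_user):
        # A developer or brand pays each consenting user directly for data
        # access; only a nominal network fee is skimmed, with no ad broker.
        total = amount_per_user * len(users)
        if self.balances.get(developer, 0.0) < total:
            raise ValueError("insufficient balance")
        for user in users:
            fee = amount_per_user * self.network_fee
            self.balances[developer] -= amount_per_user
            self.fund(user, amount_per_user - fee)
            self.fund("network", fee)

ledger = SenseLedger()
ledger.fund("alice", 10.0)
ledger.fund("dev_bot_co", 100.0)
ledger.tip("alice", "bob", 2.0)                        # rewarding a helpful sensei
ledger.pay_for_access("dev_bot_co", ["alice", "bob"], 5.0)
print(ledger.balances)
```

The design point the sketch captures is that the payer and payee transact directly, with the network taking only a nominal fee, which is the inversion of the advertising model described above.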

Published Date : Mar 17 2018

SUMMARY :

John Furrier sits down with Crystal Rose, CEO and co-founder of Sensay, at CoinAgenda in San Juan, Puerto Rico. They discuss Sensay's messenger-agnostic platform for connecting strangers by what they know rather than who they know, the SENSE token model that rewards users directly for their data and conversations instead of routing profit through advertisers, the Peek feature for crowdsourcing data quality by consensus, and Puerto Rico's emergence as a crypto hub, from restart week and Act 20/22 incentives to the Restart Ventures impact fund.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Michael Angelo | PERSON | 0.99+
John Furrier | PERSON | 0.99+
John | PERSON | 0.99+
two | QUANTITY | 0.99+
Puerto Rico | LOCATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Silicon Valley | LOCATION | 0.99+
America | LOCATION | 0.99+
20 years | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
Crystal | PERSON | 0.99+
One | QUANTITY | 0.99+
Edublock | ORGANIZATION | 0.99+
Puerto Rico | LOCATION | 0.99+
Europe | LOCATION | 0.99+
Act 20 | TITLE | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
0% | QUANTITY | 0.99+
US | LOCATION | 0.99+
five billion phones | QUANTITY | 0.99+
LA | LOCATION | 0.99+
today | DATE | 0.99+
five thousand friends | QUANTITY | 0.99+
five dollars | QUANTITY | 0.99+
eight | QUANTITY | 0.99+
SiliconANGLE | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
Sensay | ORGANIZATION | 0.99+
This week | DATE | 0.99+
first way | QUANTITY | 0.98+
both users | QUANTITY | 0.98+
Reddit | ORGANIZATION | 0.98+
this week | DATE | 0.98+
Crystal Rose | PERSON | 0.98+
LinkedIn | ORGANIZATION | 0.98+
One hundred percent | QUANTITY | 0.98+
a year | QUANTITY | 0.98+
five | QUANTITY | 0.98+
both | QUANTITY | 0.98+
San Francisco | LOCATION | 0.98+
a week | QUANTITY | 0.98+
one-way ticket | QUANTITY | 0.98+
thousands of people | QUANTITY | 0.97+
Edublock.ido | ORGANIZATION | 0.97+
first application | QUANTITY | 0.97+
San Juan, Puerto Rico | LOCATION | 0.97+
first step | QUANTITY | 0.97+
one person | QUANTITY | 0.97+
Puerto Rico Advantage | ORGANIZATION | 0.96+
first | QUANTITY | 0.96+
Damaris Rivera | PERSON | 0.96+
theCUBE | ORGANIZATION | 0.96+
Barcelona | LOCATION | 0.96+
GitHub | ORGANIZATION | 0.95+
Let There Be Light | ORGANIZATION | 0.95+
hundreds of thousands of people | QUANTITY | 0.95+
Wikipedia | ORGANIZATION | 0.95+
Sensay | PERSON | 0.95+
CUBE | ORGANIZATION | 0.95+
Telegram | TITLE | 0.94+
New York Times | TITLE | 0.94+
Slack | TITLE | 0.94+
Restart Ventures | ORGANIZATION | 0.94+
CoinAgenda | TITLE | 0.94+
SENSE | ORGANIZATION | 0.94+
Reddit Karma | ORGANIZATION | 0.93+
one place | QUANTITY | 0.93+

Randy Meyer, HPE & Paul Shellard, University of Cambridge | HPE Discover 2017 Madrid


 

>> Announcer: Live from Madrid, Spain, it's the Cube, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, Spain everybody, this is the Cube, the leader in live tech coverage. We're here covering HPE Discover 2017. I'm Dave Vellante with my cohost for the week, Peter Burris. Randy Meyer is back, he's the vice president and general manager of Synergy and Mission Critical Solutions at Hewlett Packard Enterprise, and Paul Shellard is here, the director of the Center for Theoretical Cosmology at Cambridge University. Thank you very much for coming on the Cube. >> It's a pleasure. >> Good to see you again. >> Yeah, good to be back for the second time this week. >> Talking about computing meets the cosmos. >> Well it's exciting, yesterday we talked about Superdome Flex that we announced, we talked about it in the commercial space, where it's taking HANA and Oracle databases to the next level, but there's a whole different side to what you can do with in memory compute. It's all in this high performance computing space. You think about the problems people want to solve in fluid dynamics, in forecasting, in all sorts of analytics problems. High performance compute, one of the things it does is it generates massive amounts of data that people then want to do things with. They want to compare that data to what their model said, okay, can I run that against the model; they want to take that data and visualize it, okay, how do I go do that. The more you can do that in memory, it means it's just faster to deal with, because you're not going and writing this stuff off the disk, you're not moving it to another cluster back and forth. So we're seeing this burgeoning, the HPC guys would call it fat nodes, where you want to put lots of memory and eliminate the IO to go make their jobs easier, and Professor Shellard will talk about a lot of that in terms of what they're doing at the Cosmos Institute, but this is a trend, you don't have to be a university. We're seeing this inside of oil and gas companies, aerospace engineering companies, anybody that's solving these complex computational problems that have an analytical element to them, whether it's compare to the model, visualize, do something with that data once you've done that. >> Paul, explain more about what it is you do. >> Well in the Cosmos Group, of which I'm the head, we're interested in two things: cosmology, which is trying to understand where the universe comes from, the whole big bang, and then we're interested in black holes, particularly their collisions, which produce gravitational waves. So they're the two main areas, relativity and cosmology. >> That's a big topic. I don't even know where to start, I just want to know, okay, what have you learned, and can you summarize it for a lay person? Where are you today, what can you share with us that we can understand? >> What we do is we take our mathematical models and we make predictions about the real universe, and so we try and compare those to the latest observational data. We're in a particularly exciting period of time at the moment because of a flood of new data about the universe and about black holes, and in the last two years gravitational waves were discovered, there's a Nobel prize this year, so lots of things are happening.
It's a very data driven science, so we have to try and keep up with this flood of new data, which is getting larger and larger, and also with new types of data, because suddenly gravitational waves are the latest thing to look at. >> What are the sources of data and new sources of data that you're tapping? >> Well, in cosmology we're mainly interested in the cosmic microwave background. >> Peter: Yeah, the sources of data are the cosmos. >> Yeah right, so this is relic radiation left over from the big bang fireball, it's like a photograph of the universe, a blueprint, and then also in the distribution of galaxies, so 3D maps of the universe. And we've only, we're in a new age of exploration, we've only got a tiny fraction of the universe mapped so far, and we're trying to extract new information about the origin of the universe from that data. In relativity, we've got these gravitational waves, these ripples in space time, they're traversing across the universe, they're essentially earthquakes in the universe, and they're sound waves or seismic waves that propagate to us from these very violent events. >> I want to take you to the gravitational waves, because in many respects it's an example of a lot of what's here in action. Here's what I mean: the experiment, and correct me if I'm wrong, but it's basically, you have two lasers perpendicular to each other, shooting a signal about two or three miles in that direction, and it is the most precise experiment ever undertaken, because what you're doing is you're measuring the time it takes for one laser versus another laser, and that time is a function of the slight stretching that comes from the gravitational waves. That is an unbelievable example of edge computing, where you have just the tolerances to do that, that's not something you can send back to the cloud, you gotta do a lot of the compute right there, right? >> That's right, yes, so a gravitational wave comes by and you shrink one way and you stretch the other. >> Peter: It distorts the space time. >> Yeah, you become thinner, and these tiny, tiny changes are what's measured, and nobody expected gravitational waves to be discovered in 2015. We all thought, oh, another five years, another five years, they've always been saying, we'll discover them, we'll discover them, but it happened. >> And since then, it's been used two or three times to discover new types of things, and there's now a whole, I'm sure this is very centric to what you're doing, there's now a whole concept of gravitational information, which can in fact become an entirely new branch of cosmology, have I got that right? >> Yeah you have, it's called multimessenger astronomy now, because you don't just see the universe in electromagnetic waves, in light, you hear the universe. This is qualitatively different, it's sound waves coming across the universe, and so combining these two, the latest event was where they heard the event first, then they turned their telescope and they saw it. So much information came out of that, even information about cosmology, because these signals are traveling hundreds of millions of light years across to us, we're getting a picture of the whole universe as they propagate all that way, so we're able to measure the expansion rate of the universe from that point. >> The techniques for the observational, the technology for observation, what is that, how has that evolved? >> Well, you've got the wrong guy here.
I'm from the theory group, we're doing the predictions, and these guys with their incredible technology are seeing the data. The whole point is you've gotta get the predictions, and then you've gotta look in the data for a needle in the haystack, which is this signature of these black holes colliding. >> You think about that: I have a model, I'm looking for the needle in the haystack, that's a different way to describe an in memory analytic search pattern recognition problem, that's really what it is. This is the world's largest pattern recognition problem. >> Most precise, and literally. >> And that's an observation that confirms your theory, right? >> Confirms the theory, maybe it was your theory. >> I'm actually a cosmologist, so in my group we have relativists who are actively working on the black hole collisions and making predictions about this stuff. >> But they're dampening vibration from passing trucks and these things and correcting for it, it's unbelievable. But coming back to the technology, one of the reasons why this becomes so exciting and becomes practical is because for the first time, the technology has gotten to the point where you can focus on the problem you're trying to solve and you don't have to translate it into technology terms. So talk a little bit about that, because in many respects, that's where business is. Business wants to be able to focus on the problem and how to think about the problem differently, and have the technology just respond. They don't want to have to start with the technology and then imagine what they can do with it. >> I think from our point of view, it's a very fast moving field, things are changing, new data's coming in. The data's getting bigger and bigger because instruments are getting packed tighter and tighter, there's more information, so we've got a computational problem as well, so we've got to get more computational power, but there's new types of data, like suddenly there's gravitational waves. There's new types of analysis that we want to do, so we want to be able to look at this data in a very flexible way and ingest it and explore new ideas more quickly, because things are happening so fast. So that's why we've adopted this in memory paradigm for a number of years now, and the latest incarnation of this is the HPE Superdome Flex, and that's a shared memory system, so you can just pull in all your data and explore it without carefully programming how the memory is distributed around. We find this is very easy for our users to develop data analytic pipelines, to develop their new theoretical models, and to compare the two on a single system. It's also very easy for new users to use. You don't have to be an advanced programmer to get going, you can just stay with the science in a sense. >> You gotta have a PhD in physics to do great physics, but you don't have to have a PhD in physics and technology. >> That's right, yeah, it's a very flexible architecture with which to program, so you can more or less take your laptop pipeline, develop your pipeline on a laptop, take it to the Superdome and then scale it up to these huge memory problems. >> And get it done fast and you can iterate. >> You know these are the most brilliant scientists in the world, bar none, I made the analogy the other day. >> Oh, thanks. >> You're supposed to say aw, shucks. >> Peter: Aw, shucks. >> Present company excepted. >> Oh yeah, that's right. >> I made the analogy of, imagine I.M.
Pei or Frank Lloyd Wright or someone had to be their own general contractor, right? No, they're brilliant at designing architectures and imagining things that no one else could imagine, and then they had people to go do that. This allows the people to focus on the brilliance of the science without having to go become the expert programmer, and we see that in business too. Parallel programming techniques are difficult, spoken like an old Tandem guy, parallelism is hard, but to the extent that you can free yourself up and focus on the problem and not have to mess around with that, it makes life easier. Some problems parallelize well, but a lot of them don't need to be, and you can allow the data to shine, you can allow the science to shine. >> Is it correct that the barrier in your ability to reach a conclusion or make a discovery is the ability to find that needle in a haystack, or maybe there are many, but. >> Well, if you're talking about obstacles to progress, I would say computational power isn't the obstacle, it's developing the software pipelines and it's the human personnel, the smart people writing the codes that can look for the needle in the haystack, who have the efficient algorithms to do that. And if they're hobbled by having to think very hard about the hardware and the architecture they're working with and how they've parallelized the problem, that gets in the way. Our philosophy is much more that you solve the problem, you validate it, it can be quite inefficient if you like, but as long as it's a working program that gets you to where you want, then in the second stage you worry about making it efficient, putting it on accelerators, putting it on GPUs, making it go really fast. For many years now we've bought these very flexible shared memory, or in memory is the new word for it, in memory architectures, which allow new users, graduate students, to come straight in without a Master's degree in high performance computing, and they can start to tackle problems straight away. >> It's interesting, we hear the same, you talk about it at the outer reaches of the universe, I hear it at the inner reaches of the universe from the life sciences companies. We want to map the genome and we want to understand the interaction of various drug combinations with that genetic structure, to say can I tune exactly a vaccine or a drug or something else for that patient's genetic makeup to improve medical outcomes? The same kind of problem, I want to have all this data that I have to run against a complex genome sequence to find the one that gets me to the answer. From the macro to the micro, we hear this problem in all different sorts of languages. >> One of the things we have our clients, mainly in business, asking us all the time is, well, let me step back. As analysts, not the smartest people in the world, as you'll attest I'm sure for real, as analysts, we like to talk about change, and we always talked about mainframe being replaced by minicomputer being replaced by this or that. I like to talk in terms of the problems that computing's been able to take on. It's been able to take on increasingly complex, challenging, more difficult problems as a consequence of the advance of technology, very much like you're saying, the advance of technology allows us to focus increasingly on the problem. What kinds of problems do you think physicists are gonna be able to attack in the next five years or so, as we think about the combination of increasingly powerful computing and an increasingly simple approach to use it?
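(Before the answer, an editorial aside making the "solve it first, make it efficient second" philosophy concrete. This is a toy illustration, not COSMOS code: the same computation written as an obviously correct loop, then vectorized once validated.)

```python
import time
import numpy as np

# Stage 1: obviously correct but slow. Stage 2: same answer, fast.
# A throwaway example for illustration, not a real physics pipeline.
x = np.random.default_rng(1).normal(size=2_000_000)

t0 = time.perf_counter()
total = 0.0
for v in x:            # stage 1: validate the logic first
    total += v * v
t1 = time.perf_counter()

fast = float(x @ x)    # stage 2: vectorize (or move to a GPU) later
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.2f}s  vectorized: {t2 - t1:.4f}s  "
      f"agree: {np.isclose(total, fast)}")
```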
>> I think the simplification you're indicating here really comes down to more memory. Holding your whole workload in memory, so that you, one of the biggest bottlenecks we find is ingesting the data and then writing it out, but if you can do everything at once, then that's the key element. So one of the things we've been working on a great deal is in situ visualization, for example, so that you see the black holes coming together and you see that you've set the right parameters, they haven't missed each other or something's gone wrong with your simulation, so that you do the post-processing at the same time and you never need the intermediate data products. So, larger and larger memory, and the computational power that balances with that large memory. It's all very well to get a fat node, but then you don't have the computational power to use all those terabytes, so that's why this in memory architecture of the Superdome Flex is much more balanced between the two. What are the problems that we're looking forward to in terms of physics? Well, in cosmology we're looking for these hints about the origin of the universe, and we've made a lot of progress analyzing the Planck satellite data about the cosmic microwave background. We're homing in on theories of inflation, which is where all the structure in the universe comes from, from Heisenberg's uncertainty principle, a rapid period of expansion in the very early universe, just like inflation in the financial markets, okay. And so we're trying to identify, can we distinguish between different types, and are they gonna tell us whether the universe comes from a higher dimensional theory, ten dimensions getting reduced to three plus one, or lots of clues like that. We're looking for statistical fingerprints of these different models. In gravitational waves of course, this whole new area, we think of the cosmic microwave background as a photograph of the early universe, well in fact gravitational waves look right back to the earliest moment, fractions of a nanosecond after the big bang, and so it may be that the answers, the clues that we're looking for, come from gravitational waves. And of course there's so much in astrophysics that we'll learn about compact objects, about neutron stars, about the most energetic events there are in the whole universe. >> I never thought about the idea, because the cosmic background radiation goes back what, about 300,000 years, if that's right. >> Yeah that's right, you're very well informed. 400,000 years, because 300 is... >> Not that well informed. >> 370,000. >> I never thought about the idea of gravitational waves as being noise from the big bang, and you've made sense of that. >> Well with the cosmic microwave background, we're actually looking for a primordial signal from the big bang, from inflation, so it's, yeah. Well anyway, what were you gonna say Randy? >> No, I just, it's amazing the frontiers we're heading down, it's kind of an honor to be able to enable some of these things. I've spent 30 years in the technology business and heard customers tell me you transformed my business, or you helped me save costs, you helped me enter a new market. Never before in 30 plus years of being in this business have I had somebody tell me the things that you're providing are helping me understand the origins of the universe. It's an honor to be affiliated with you guys. >> Oh no, the honor's mine Randy, you're producing the hardware, the tools that allow us to do this work. >> Well now the honor's ours for coming onto the Cube.
>> That's right. How do we learn more about your work and your discoveries, your conclusions? >> In terms of looking at. >> Are there popular authors we could read other than Stephen Hawking? >> Well, read Stephen's books, they're very good, he's got a new one called A Briefer History of Time, so it's more accessible than A Brief History of Time. >> So your website is. >> Yeah, our website is ctc.cam.ac.uk, the Centre for Theoretical Cosmology, and we've got some popular pages there, we've got some news stories about the latest things that have happened, like the HPE partnership that we're developing, and some nice videos about the work that we're doing actually, very nice videos of that. >> Certainly, there were several videos run here this week that, if people haven't seen them, go out, they're available on YouTube, they're available at your website, they're on Stephen's Facebook page also I think. >> Can you share that website again? >> Well, actually you can get the beautiful videos of Stephen and the rest of his group on the Discover website, is that right? >> I believe so. >> So that's at the HPE Discover website, but your website is? >> Is ctc.cam.ac.uk, and we're just about to upload those videos ourselves. >> Can I make a marketing suggestion. >> Yeah. >> Simplify that. >> Ctc.cam.ac.uk. >> Yeah right, thank you. >> We gotta get the Cube at one of these conferences, one of these physics conferences, and talk about gravitational waves. >> Bone up a little bit, you're kind of embarrassing us here, 100,000 years off. >> He's better informed than you are. >> You didn't need to remind me sir. Thanks very much for coming on the Cube, great pleasure having you today. >> Thank you. >> Keep it right there everybody, Mr. Universe and I will be back after this short break. (upbeat techno music)
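(An aside for readers who want the needle-in-a-haystack search from this interview made concrete: the underlying technique is matched filtering, sliding a predicted waveform template across the noisy detector stream and looking for a correlation spike. Below is a toy version with an invented chirp and made-up parameters, not the collaboration's actual pipeline.)

```python
import numpy as np

# Toy matched filter: recover a known "chirp" template buried in noise.
rng = np.random.default_rng(0)
fs = 4096                                  # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)

# A crude stand-in for an inspiral waveform: frequency sweeping upward.
template = np.sin(2 * np.pi * (50 * t + 100 * t**2))
template /= np.linalg.norm(template)

# Eight seconds of noise, with the template injected at a known offset.
data = rng.normal(scale=10 * template.std(), size=8 * fs)
true_offset = 3 * fs
data[true_offset:true_offset + template.size] += 50 * template

# Slide the template across the data; the peak marks the detection.
snr = np.correlate(data, template, mode="valid")
print("recovered offset:", int(np.argmax(np.abs(snr))),
      "true offset:", true_offset)
```

Real searches run against large banks of predicted waveforms and whiten the data against the detector's noise spectrum first, which is exactly why the theory group's predictions and the in memory hardware both matter.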

Published Date : Nov 29 2017


Wikibon Presents: Software is Eating the Edge | The Entangling of Big Data and IIoT


 

>> So as folks make their way over from Javits I'm going to give you the least interesting part of the evening, and that's my segment, in which I welcome you here, introduce myself, and lay out what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know, Wikibon is a part of SiliconANGLE, which also includes theCUBE, so if you look around, this is what we have been doing for the past couple of days here in theCUBE. We've been inviting some significant thought leaders from over at the show and, in incredibly expensive limousines, driving them up the street to come on to theCUBE and spend time with us and talk about some of the things that are happening in the industry today that are especially important. We tore it down, and we're having this party tonight. So we want to thank you very much for coming and look forward to having more conversations with all of you. Now what are we going to talk about? Well, Wikibon is the research arm of SiliconANGLE. So we take data that comes out of theCUBE and other places and we incorporate it into our research. And we work very closely with large end users and large technology companies regarding how to make better decisions in this incredibly complex, incredibly important transformative world of digital business. What we're going to talk about tonight, and I've got a couple of my analysts assembled, and we're also going to have a panel, is this notion of software is eating the Edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and developer, original developer of Netscape many years ago, talk about how software's eating the world. Well, if software is truly going to eat the world, it's going to eat at, it's going to take the big chunks, big bites at the Edge. That's where the actual action's going to be. And what we want to talk about specifically is the entangling of the internet of things, or the industrial internet of things, IIoT, with analytics. So that's what we're going to talk about over the course of the next couple of hours. To do that we're going to, I've already blown the schedule, that's on me. But to do that I'm going to spend a couple minutes talking about what we regard as the essential digital business capabilities, which includes analytics and Big Data, and includes IIoT, and we'll explain, at least in our position, why those two things come together the way that they do. But I'm going to ask the august and revered Neil Raden, Wikibon analyst, to come on up and talk about harvesting value at the Edge. 'Cause there are some, not now Neil, when we're done, when I'm done. So I'm going to ask Neil to come on up and we'll talk, he's going to talk about harvesting value at the Edge. And then Jim Kobielus will follow up with him, another Wikibon analyst, and he'll talk specifically about how we're going to take that combination of analytics and Edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to ask Neil and Jim to come, going to invite some other folks up, and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here, before we break for drinks, is to create a community feeling within the room, smart people up here and smart people in the audience having a conversation, ultimately, about some of these significant changes, so please participate, and we look forward to talking about the rest of it.
All right, let's get going! What is digital business? One of the nice things about being an analyst is that you can reach back on people who were significantly smarter than you and build your points of view on the shoulders of those giants, including Peter Drucker. Many years ago Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else. It is about creating and keeping your customer. Now you can argue with that, but at the end of the day, if you don't have customers, you don't have a business. What we've added to that is the observation that the difference between business and digital business essentially is one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. If you think about the difference between taxi cab companies here in New York City, every cab that I've been in in the last three days has bothered me about Uber. The reason, the difference between Uber and a taxi cab company, is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is the business using data as an asset? Is a business driving its engagement with customers, the role of its product, et cetera, using data? And if they are, they are becoming a more digital business. Now when you think about that, what we're really talking about is how are they going to put data to work? How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement or improved customer experience or more agile operations or increased automation? Those are the kinds of outcomes that we're talking about. But it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital, or to be digital, has to put in place. And I want to be clear. When I say strategic capabilities I mean something specific. When you talk about, for example, technology architecture or information architecture, there is this notion of, what capabilities does your business need? Your business needs capabilities to pursue and achieve its mission. And in the digital business these are the capabilities that are now additive to this core question, ultimately, of whether or not the company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition. You have to capture data better than your competition. In a way that is ultimately less intrusive on your markets and on your customers. That's, in many respects, one of the first priorities of the internet of things and people. The idea of using sensors and related technologies to capture more data. Once you capture that data you have to turn it into value. You have to do something with it that creates business value, so you can do a better job of engaging your markets and serving your customers. And that essentially is what we regard as the basis of Big Data.
Including operations, including financial performance and everything else, but ultimately it's taking the data that's being captured and turning it into value within the business. The last point here is that once you have generated a model, or an insight, or some other resource that you can act upon, you then have to act upon it in the real world. We call that systems of agency, the ability to enact based on data. Now I want to spend just a second talking about systems of agency, 'cause we think it's an interesting concept and it's something Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is increasingly machines are acting on behalf of a brand. Or systems, combinations of machines and people, are acting on behalf of the brand. And this whole notion of agency is the idea that ultimately these systems are now acting as the business's agent. They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example, I was talking to a senior decision maker at a business today and they made a quick observation: on their way here to New York City they had followed a woman who was going through security, opened up her suitcase and took out a bird. And then went through security with the bird. And the reason why I bring this up now is as TSA was trying to figure out how exactly to deal with this, the bird started talking and repeating things that the woman had said, and many of those things, in fact, might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves, as we ask machines to do more on our behalf, digital instrumentation and elements to do more on our behalf, it's going to have blowback and an impact on our brand if we don't do it well. I want to draw that forward a little bit because I suggest there's going to be a new lifecycle for data. And the way that we think about it is we have the internet or the Edge, which is comprised of things and crucially people, using sensors, whether they be smaller processors in control towers or whether they be phones that are tracking where we go, and this crucial element here is something that we call information transducers. Now a transducer in a traditional sense is something that takes energy from one form to another so that it can perform new types of work. By information transducer I essentially mean it takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times and not engender significant net new costs. It's one of the few assets about which you can say that. So the concept of an information transducer's really important because it's the basis for a lot of transformations of data as data flies through organizations. So we end up with the transducers storing data in the form of analytics, machine learning, business operations, other types of things, and then it goes back and it's transduced back into the real world as we program the real world, turning it into these systems of agency. So that's the new lifecycle. And increasingly, that's how we have to think about data flows. Capturing it, turning it into value and having it act on our behalf in front of markets.
That could have enormous implications for how ultimately money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers. And that includes doing studies on cloud, public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investments in software and also services to get the word out. But we also expect there's going to be a lot of hardware. A significant amount of hardware that's ultimately sold within this space. And that's because of something that we call true private cloud. This concept of ultimately a business increasingly being designed and architected around the idea of data assets means that the physical realities of how data operates, how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection, as well as things like the regulatory regimes that are being put in place to govern how data gets used in between locations. All of those factors are going to drive increased utilization of what we call true private cloud. On premise technologies that provide the cloud experience but act where the data naturally needs to be processed. I'll come a little bit more to that in a second. So we think that it's going to be a relatively balanced market, a lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the Edge, 'cause that's where the action's going to be. Now one of the things I want to also reveal to you is we've done a fair amount of research around this question of where or how will data guide decisions about infrastructure? And in particular the Edge is driving these conversations. So here is a piece of research that one of our cohorts at Wikibon did, David Floyer, taking a look at IoT Edge cost comparisons over a three year period. And it showed on the left hand side an example where the sensor towers and other types of devices were streaming data back into a central location in a wind farm, a stylized wind farm example. Very very expensive. Significant amounts of money end up being consumed, significant resources end up being consumed, by the cost of moving the data from one place to another. Now this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the Edge and processed at the Edge. And literally it is an 85-plus percent cost reduction to keep more of the data at the Edge. Now that has enormous implications for how we think about big data, how we think about next generation architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next two years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management has to be put in place. Ultimately we think it's going to lead to a structure, an architecture, in the infrastructure as well as applications, that is informed more by moving cloud to the data than moving the data to the cloud. Our fundamental proposition is that the norm in the industry has been to think about moving all data up to the cloud, because who wants to do IT? It's so much cheaper, look what Amazon can do, or what AWS can do.
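(To make that kind of comparison concrete, here is the shape of the arithmetic as a sketch. Every unit cost below is a made-up placeholder, not an input from David Floyer's model; only the structure, transport costs that scale with raw data volume versus a mostly fixed Edge investment, reflects the study's argument.)

```python
# Back-of-envelope: centralize all telemetry vs. filter it at the Edge.
# All constants are invented for illustration.
SENSORS = 100                  # instrumented turbines (assumed)
GB_PER_SENSOR_PER_DAY = 50.0   # raw telemetry volume (assumed)
DAYS = 3 * 365                 # a three-year window, as in the study

TRANSFER_PER_GB = 0.09         # $/GB WAN egress (assumed)
CLOUD_PROC_PER_GB = 0.02       # $/GB processed centrally (assumed)
EDGE_FILTER = 0.95             # fraction reduced locally (assumed)
EDGE_HW = 40_000.0             # edge compute, amortized (assumed)

raw_gb = SENSORS * GB_PER_SENSOR_PER_DAY * DAYS
centralized = raw_gb * (TRANSFER_PER_GB + CLOUD_PROC_PER_GB)
edge_first = EDGE_HW + raw_gb * (1 - EDGE_FILTER) * (
    TRANSFER_PER_GB + CLOUD_PROC_PER_GB)

print(f"centralized: ${centralized:,.0f}")
print(f"edge-first:  ${edge_first:,.0f}")
print(f"saving:      {1 - edge_first / centralized:.0%}")
```

With these placeholder inputs the saving lands in the same 85-plus percent neighborhood the research describes, and the qualitative point survives almost any reasonable choice of constants: moving raw data dominates the bill.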
All true statements. Very very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves, do we have to move our business to the cloud, or can we move the cloud to the business? And increasingly what we see happening as we talk to our large customers about this is that the cloud is being extended out to the Edge, we're moving the cloud and cloud services out to the business. Because of economic reasons, intellectual property control reasons, regulatory reasons, security reasons, any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize, we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple, and require significant thought, a lot of planning, a lot of change within IT and business organizations. How we capture data, how we turn it into value, and how we translate that into real world action through software. That's going to lead to a rethinking, ultimately, based on cost and other factors, about how we deploy infrastructure. How we use the cloud so that the data guides the activity, and not the choice of cloud supplier determining or limiting what we can do with our data. And that's going to lead to this notion of true private cloud and elevate the role the Edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is I want to bring up Neil Raden. Yes, now's the time Neil! So let me invite Neil up to spend some time talking about harvesting value at the Edge. Can you see his, all right. Got it. >> Oh boy. Hi everybody. Yeah, this is a really, this is a really big and complicated topic so I decided to just concentrate on something fairly simple, but I know that Peter mentioned customers. And he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I'd started a magazine called Hired Brains. It was for consultants. And Peter said, Peter said a number of really interesting things to me, but one of them was his definition of a customer was someone who wrote you a check that didn't bounce. He was kind of a wag. He was! So anyway, he had to leave to do a video conference with Jack Welch, and so I said to him, how do you charge Jack Welch to spend an hour on a video conference? And he said, you know I have this theory that you should always charge your client enough that it hurts a little bit or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzy Welch, recently, and I told her that story, and she said, "Oh, he's full of it, Jack never paid a dime for those conferences!" (laughs) So anyway, all right, so let's talk about this. To me, the engineered things, the hardware and network and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in Edge Analytics is what you're going to get out of it, what the result is going to be. Making sense of this data that's coming.
And while we're on data, something I've been thinking a lot about lately, because everybody I've talked to for the last three days just keeps talking to me about data. I have this feeling that data isn't actually quite real. Any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words, it's a proxy. So it's not exactly perfect. And that's why we've always had these problems about customer A, customer A, customer A, what's their definition? What's the definition of this, that and the other thing? And with sensor data, I really have the feeling, when companies get, not you know, not companies, organizations get instrumented and start dealing with this kind of data, what they're going to find is that this is the first time, and I've been involved in analytics, I don't want to date myself, 'cause I know I look young, but I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours! We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, ya know Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? It means we have to be really careful with this data. It's ours, we have to take care of it. We don't get to reload it from source some other day. If we munge it up, it's gone forever. So that has very serious implications, but let me roll you back a little bit. The way I look at analytics is it's come in three different eras. And we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT, it was system of record kind of reporting. And as far as I can recall, it probably started around 1988, or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. And things happened before 1988 that were sort of like BI, but 88 was when they really started coming out, that's when we saw BusinessObjects and Cognos and MicroStrategy and those kinds of things. The second generation just popped out on everybody else. We're all looking around at BI and we were saying, why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along come companies like Tableau doing data discovery, visualization, data prep, and Line of Business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the Big Data thing. Now we're in the third generation, so we not only have Big Data, which has come and hit us like a tsunami, but we're looking at smart discovery, we're looking at machine learning. We're looking at AI induced analytics workflows. And then all the natural language cousins. You know, natural language processing, natural language, what's, oh, NLQ, natural language query. Natural language generation. Anybody here know what natural language generation is?
Yeah, so what you see now is you do some sort of analysis and that tool comes up and says this chart is about the following and it used the following data, and it's blah blah blah blah blah. I think it's kind of wordy and it's going to get refined some, but it's an interesting, it's an interesting thing to do. Now, the problem I see with Edge Analytics and IoT in general is that most of the canonical examples we talk about are pretty thin. I know we talk about autonomous cars, I hope to God we never have them, 'cause I'm a car guy. Fleet management, I think Qualcomm started fleet management in 1988, that is not a new application. Industrial controls. I seem to remember, I seem to remember Honeywell doing industrial controls at least in the 70s, and before that, I wasn't, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative. Because the real value in Edge Analytics or IoT, whatever you want to call it, the real value is going to be figuring out something that's new or different. Creating a brand new business. Changing the way an operation happens in a company, right? And I think there's a lot of smart people out there and I think there's a million apps that we haven't even talked about, so if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or fleet management, 'cause I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with Big Data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said, as far as I can tell, they don't have a narrative, they have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product? They don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately some of them are going to be really stinky, you know, they're going to be really bad. You're going to lose more of your privacy, it's going to get harder to get, I dunno, a mortgage for example, I dunno, maybe it'll be easier, but in any case, it's not going to all be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project. Because number one, it's very expensive, and number two, it's a waste of the technology, because you should be looking at, you know the old numerator denominator thing? You should be looking at the numerators and forget about the denominators, because that's not what you do with IoT. And the other thing is you don't want to get overconfident. Actually this is good advice about anything, right? But in this case, I love this quote by Derek Sivers. He's a pretty funny guy. He said, "If more information was the answer, then we'd all be billionaires with perfect abs." I'm not sure what's on his wishlist, but you know, those aren't necessarily the two things I would think of, okay. Now, what I said about the data, I want to explain some more. Big Data Analytics, if you look at this graphic, it depicts it perfectly. It's a bunch of different stuff falling into the funnel. All right? It comes from other places, it's not original material.
And when it comes in, it's always used as second hand data. Now what does that mean? That means that you have to figure out the semantics of this information and you have to find a way to put it together in a way that's useful to you, okay. That's Big Data. That's where we are. How is that different from IoT data? It's like I said, IoT is original. You can put it together any way you want because no one else has ever done that before. It's yours to construct, okay. You don't even have to transform it into a schema because you're creating the new application. But the most important thing is you have to take care of it, 'cause if you lose it, it's gone. It's the original data. It's the same way, in operational systems for a long long time we've always been concerned about backup and security and everything else. You better believe this is a problem. I know a lot of people think about streaming data, that we're going to look at it for a minute, and we're going to throw most of it away. Personally I don't think that's going to happen. I think it's all going to be saved, at least for a while. Now, the governance and security, oh, by the way, I don't know where you're going to find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today they're still thinking along the same grids that we thought about it all along. But this is very very different, and again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now when I say governance, my experience has been over the years that governance is something that IT does to make everybody's lives miserable. But that's not what I mean by governance today. It means a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now the other thing is you may not get to think about it differently, because some of the stuff may end up being subject to regulation. And if the regulators start regulating some of this, then that'll take some of the degrees of freedom away from you in how you put this together, but you know, that's the way it works. Now, machine learning, I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trailer parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around. And I think all of the open source machine learning and artificial intelligence that's popped up, it's great, because all those math PhDs who work at Home Depot now have something to do when they go home at night and they construct this stuff. But if you're going to have machine learning at the Edge, here's the question, what kind of machine learning would you have at the Edge? As opposed to developing your models back at, say, the cloud, when you transmit the data there. The devices at the Edge are not very powerful. And they don't have a lot of memory. So you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay. Because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data, creating models and testing them offline. And when you have something that works, you can put it there. Now there's one thing I want to talk about before I finish, and I think I'm almost finished.
I wrote a book about 10 years ago about automated decision making, and the conclusion that I came up with was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life. Or the environment. So when you think about the applications that you can build using this architecture and this technology, think about the fact that you're not going to be doing air traffic control, you're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem fairly mundane. Managing machinery on the factory floor, I mean that may sound great, but really isn't that interesting. Managing wellheads, drilling for oil, well I mean, it's great to the extent that it doesn't cause wells to explode, but they don't usually explode. What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in Edge Analytics, the next time I talk to you I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So, in conclusion, I don't really have anything in conclusion except that Peter mentioned something about limousines bringing people up here. On Monday I was slogging up and down Park Avenue and Madison Avenue with my client and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather I looked at him and I said, for God's sake Paul, where's the black car? And he said, that was the 90s. (laughs) Thank you. So, Jim, up to you. (audience applauding) This is terrible, go that way, this was terrible coming that way. >> Woo, don't want to trip! And let's move to, there we go. Hi everybody, how ya doing? Thanks Neil, thanks Peter, those were great discussions. So I'm the third leg in this relay race here, talking about of course how software is eating the world. And focusing on the value of Edge Analytics in a lot of real world scenarios. Programming the real world to make the world a better place. So I will talk, I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI intelligence is being embedded into, really, all material reality, potentially, at the Edge: mobile applications and industrial IoT and smart appliances and self driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the Edge, to hardware, to our phones and so forth, to drive various scenarios in terms of real world results. So I'll move a pace here. So basically AI software, or AI microservices, are being infused into Edge hardware as we speak. What we see is more vendors of smart phones and other real world appliances and things like smart driving, self driving vehicles. What they're doing is they're instrumenting their products with computer vision and natural language processing, environmental awareness based on sensing and actuation, and those capabilities and inferences that these devices do, both to provide support for human users of these devices as well as to enable varying degrees of autonomous operation. So what I'll be talking about is how AI is a foundation for data driven systems of agency of the sort that Peter is talking about.
Infusing data driven intelligence into everything, or potentially so. As more of this capability, all these algorithms for things like, ya know, doing real time predictions and classifications, anomaly detection and so forth, as this functionality gets diffused widely and becomes more commoditized, you'll see it burned into an ever-wider variety of hardware architectures, neuro synaptic chips, GPUs and so forth. So what I've got here in front of you is a sort of a high level reference architecture that we're building up in our research at Wikibon. So AI, artificial intelligence, is a big term, a big paradigm, I'm not going to unpack it completely. Of course we don't have oodles of time, so I'm going to take you fairly quickly through the high points. It's a driver for systems of agency. Programming the real world. Transducing digital inputs, the data, to analog real world results. Through the embedding of this capability in the IoT, but pushing more and more of it out to the Edge, with points of decision and action in real time. And the four capabilities that we're seeing, in terms of AI enabling capabilities that are absolutely critical to software being pushed to the Edge, are sensing, actuation, inference and learning. Sensing and actuation, like Peter was describing, it's about capturing data from the environment within which a device or user is operating or moving. And then actuation is the fancy term for doing stuff, ya know, like industrial IoT, it's obviously machine control, but clearly, you know, self driving vehicles, it's steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes, as it were, of AI. Analytics does inferences. It infers from the data the logic of the application. Predictive logic, correlations, classification, abstractions, differentiation, anomaly detection, recognizing faces and voices. We see that now with Apple, and the latest version of the iPhone is embedding face recognition as a core, as the core multifactor authentication technique. Clearly that's a harbinger of what's going to be universal fairly soon, and it depends on AI. That depends on convolutional neural networks, that is some heavy hitting processing power that's necessary, and it's processing the data that's coming from your face. So that's critically important. So what we're looking at then is AI software taking root in hardware to power continuous agency. Getting stuff done. Powering decision support for human beings who have to take varying degrees of action in various environments. We don't necessarily want to let the car steer itself in all scenarios, we want some degree of override, for lots of good reasons. They want to protect life and limb, including their own. And just more data driven automation across the internet of things in the broadest sense. So unpacking this reference framework, what's happening is that AI driven intelligence is powering real time decisioning at the Edge. Real time local sensing from the data that it's capturing there, it's ingesting the data. Some, but not all, of that data may be persisted at the Edge. Some, perhaps most of it, will be pushed into the cloud for other processing. When you have these highly complex algorithms that are doing AI deep learning, multilayer, to do a variety of anti-fraud and higher level, like narrative, auto-narrative roll-ups from various scenes that are unfolding.
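(Here is the sense, infer, actuate loop just described, reduced to a runnable sketch. Everything in it is invented for illustration: a real device would read hardware sensors and run a trained model shipped down from the cloud, not this stub.)

```python
import random

def read_sensor() -> float:
    """Stand-in for a hardware read, e.g. a vibration amplitude."""
    return random.gauss(1.0, 0.3)

def infer(window) -> bool:
    """Toy anomaly 'model': flag when the recent mean drifts high.
    In practice this would be a trained model, not a threshold."""
    return sum(window) / len(window) > 1.5

def actuate(anomaly: bool) -> None:
    """Point of action: throttle the machine, raise an alert, etc."""
    if anomaly:
        print("anomaly detected: throttling machine, alerting operator")

window = []
for _ in range(1000):          # the device's real time loop
    window.append(read_sensor())
    window = window[-20:]      # keep only a short local history
    if len(window) == 20:
        actuate(infer(window))
```

The structural point matches the architecture above: the loop never ships raw readings anywhere; only the rare inferred event needs to leave the device.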
A lot of this processing is going to begin to happen in the cloud, but a fair amount of the more narrowly scoped inferences that drive real time decision support at the point of action will be done on the device itself. Contextual actuation: so it's the sensor data that's captured by the device, along with other data that may be coming down in real time streams through the cloud, that will provide the broader contextual envelope of data needed to drive actuation, to drive various models and rules and so forth that are making stuff happen at the point of action, at the Edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the Edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASICs, neuro synaptic chips of all sorts, playing in various combinations that are automating more and more very complex inference scenarios at the Edge. And not just individual devices; swarms of devices, like drones and so forth, are essentially an Edge unto themselves. You'll see these tiered hierarchies of Edge swarms that are playing and doing inferences of an ever more complex, dynamic nature. And much of this capability, the fundamental capabilities that power them all, will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here; training is at the core of it. Training means everything in terms of the predictive fitness, the fitness of your AI services for whatever tasks, predictions, classifications, face recognition, you've built them for. But I use the term learning in a broader sense. What makes your inferences get better and better, more accurate over time, is that you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing, like say, robotics and you don't have ground truth against which to train the data set. You know, there's maximize a reward function versus minimize a loss function, you know, the standard approach, the latter for supervised learning. There's also, of course, the approach of unsupervised learning, with cluster analysis critically important in a lot of real world scenarios. So, Edge AI algorithms: clearly deep learning, which is multilayered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level abstraction. Faces in a social environment is an even higher level of abstraction, in terms of groups. Faces over time and bodies and gestures, doing various things in various environments, is an even higher level abstraction, in terms of narratives that can be rolled up, are being rolled up, by deep learning capabilities of great sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what's called generative applications of all sorts, composing music, and a lot of it's being used for auto programming. These are all deep learning. There's a variety of other algorithm approaches I'm not going to bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone has a camera, it has a microphone, and it has, of course, geolocation and navigation capabilities. It's environmentally aware, it's got an accelerometer and so forth embedded therein.
The reason that your phone and all of these devices are getting scarily sentient is that they have the sensory modalities and the AI, the deep learning, that enables them to make environmentally correct decisions in a wider range of scenarios. So machine learning is the foundation of all of this; for deep learning, artificial neural networks are the foundation of that. But there are other approaches for machine learning I want to make you aware of, because support vector machines and these other established approaches for machine learning are not going away, but really what's driving the show now is deep learning, because it's scary effective. And so that's where most of the investment in AI is going into these days, for deep learning. AI Edge platforms, tools and frameworks are just coming along like gangbusters. Much development of AI, of deep learning, happens in the context of your data lake. This is where you're storing your training data. This is the data that you use to build, test and validate your models. So we're seeing a deepening stack of Hadoop, and there's Kafka, and Spark and so forth, that are driving the training (coughs) excuse me, of AI models that power all these Edge Analytics applications, so that lake will continue to broaden and deepen in terms of the scope and the range of data sets and the range of AI modeling it supports. Data science is critically important in this scenario because the data scientist, the data science teams, the tools and techniques and flows of data science are the fundamental development paradigm or discipline or capability that's being leveraged to build and to train and to deploy and iterate all this AI that's being pushed to the Edge. So clearly data science is at the center; data scientists of an increasingly specialized nature are necessary to the realization of this value at the Edge. AI frameworks are coming along like, you know, a mile a minute. TensorFlow, which is open source, most of these are open source, has achieved sort of almost like a de facto standard status, I'm using the word de facto in air quotes. There's Theano and Keras and MXNet and CNTK and a variety of other ones. We're seeing a range of AI frameworks come to market, most open source. Most are supported by most of the major tool vendors as well. So at Wikibon we're definitely tracking that, we plan to go deeper in our coverage of that space. And then next best action, which powers recommendation engines. I mean, next best action decision automation, of the sort of thing Neil's covered in a variety of contexts in his career, is fundamentally important to Edge Analytics, to systems of agency, 'cause it's driving the process automation, decision automation, sort of the targeted recommendations that are made at the Edge to individual users as well as to process automation. That's absolutely necessary for self driving vehicles to do their jobs and industrial IoT. So what we're seeing is more and more recommendation engine or recommender capabilities powered by ML and DL are going to the Edge, are already at the Edge, for a variety of applications. Edge AI capabilities, like I said, there's sensing. And sensing at the Edge is becoming ever more rich; mixed reality Edge modalities of all sorts, for augmented reality and so forth. We're just seeing a growth in the range of sensory modalities that are enabled, or filtered and analyzed, through AI that are being pushed to the Edge, into the chip sets.
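(Before moving on to actuation, here is the centralize-the-modeling, decentralize-the-inference flow in miniature: a hedged sketch assuming a scikit-learn training side and a deliberately dependency-free Edge side. The vibration-anomaly framing and every number are invented for illustration.)

```python
import math
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- cloud / data lake side: train on historical labeled telemetry ---
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 3))                   # e.g. vibration features
y = (X @ np.array([1.5, -2.0, 0.7]) > 1.0).astype(int)
model = LogisticRegression().fit(X, y)

weights = model.coef_[0].tolist()                # the artifact you ship
bias = float(model.intercept_[0])

# --- edge side: a tiny scoring function, no ML library required ------
def edge_score(features, weights=weights, bias=bias):
    """Runs on the device: just a dot product and a sigmoid."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

print("anomaly probability:", round(edge_score([2.0, -1.0, 0.5]), 3))
```

The same division of labor holds whether the shipped artifact is a handful of logistic regression coefficients or a compiled deep learning model pushed down through a DevOps pipeline: training stays where the data and compute are plentiful, inference stays where the action is.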
Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives. And you know, it's brainless without AI, without deep learning and these capabilities. Inference, autonomous Edge decisioning. Like I said, there's a growing range of inferences being done at the Edge. And that's where it has to happen, 'cause that's the point of decision. Learning, training; much training, most training, will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job. It's not something that you necessarily want to do or can do at the Edge, at Edge devices, so the models that are built and trained in the cloud are pushed down through a DevOps process down to the Edge, and that's the way it will work pretty much in most AI environments, Edge Analytics environments. You centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud. Edge AI applications. I'll just run you through sort of a core list of the ones that are coming into, that have already come into, the mainstream at the Edge. Multifactor authentication; clearly the Apple announcement of face recognition is just a harbinger of the fact that that's coming to every device. Computer vision, speech recognition, NLP, digital assistants and chatbots powered by natural language processing and understanding, it's all AI powered. And it's becoming very mainstream. Emotion detection, face recognition, you know, I could go on and on, but these are like the core things that everybody has access to, or will by 2020, in their core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI, but also the entire package of hardware and UX and the orchestration of real world business scenarios or life scenarios that all this embedded intelligence enables, and much of what they build in terms of AI will be containerized as microservices through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with. The hardware itself, the Edge, you know, at the Edge, some data will be persistent, needs to be persistent, to drive inference. That's, you know, to drive a variety of different application scenarios that need some degree of historical data related to what that device in question happens to be sensing, or has sensed in the immediate past, or, you know, whatever. The hardware itself is geared towards both sensing and, increasingly, persistence and Edge driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do, that's where that comes in. That has to be powered by low cost, low power commodity chip sets of various sorts. What we see right now in terms of chip sets: GPUs. Nvidia has gone really far, and GPUs have come along very fast in terms of powering inference engines, you know, like the Tesla cars and so forth. But GPUs are in many ways the core hardware substrate for inference engines in DL so far. But to become a mass market phenomenon, it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for Edge Analytics applications.
Some vendors are fairly big on FPGAs; I believe Microsoft has gone fairly far with FPGAs in its DL strategy. ASICs; I mean, there are neurosynaptic chips, like IBM's got one. There are at least a few dozen vendors of neurosynaptic chips on the market, so at Wikibon we're going to track that market as it develops. And what we're seeing is a fair number of scenarios where it's a mixed environment: you use one chipset architecture on the inference side of the Edge, and other chipset architectures driving the DL as processed in the cloud, playing together within a common architecture. And we see a fair number of DL environments where the actual training is done in the cloud on Spark, using CPUs and parallelized in memory, but pushing TensorFlow models that might be trained through Spark down to the Edge, where the inferences are done on FPGAs and GPUs. Those kinds of mixed hardware scenarios are very, very likely to be standard going forward in lots of areas. So analytics at the Edge powering continuous results is what it's all about. The whole point is really not moving the data; it's putting the inference at the Edge, and working from the data that's already captured and persisted there for the duration of whatever action or decision or result needs to be powered from the Edge. Like Neil said, cost takeout alone is not worth doing. Cost takeout alone is not the rationale for putting AI at the Edge. It's getting new stuff done, new kinds of things done, in an automated, consistent, intelligent, contextualized way, to make our lives better and more productive. Security and governance are becoming more important: governance of the models, governance of the data, governance in a DevOps context in terms of version control over all those DL models that are built, trained, containerized and deployed, and continuous iteration and improvement of those to help them learn to make our lives better and easier. With that said, I'm going to hand it over now. It's five minutes after the hour. We're going to get going with the Influencer Panel, so what we'd like to do is I'll call Peter, and Peter's going to call our influencers. >> All right, am I live yet? Can you hear me? All right so, let me jump back in control here. Again, the objective here is to have the community take on some things. And so what we want to do is invite five other people up. Neil, why don't you come on up as well. Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. >> Neil: I'm glad I'm on the left side. >> From the Hurwitz Group. >> From the Hurwitz Group. Jennifer Shin, who's affiliated with UC Berkeley. Jennifer, are you here? >> She's here, Jennifer where are you? >> She was here a second ago. >> Neil: I saw her walk out, she may have, >> Peter: All right, she'll be back in a second. >> Here's Jennifer! >> Here's Jennifer! >> Neil: With 8 Path Solutions, right? >> Yep. >> Yeah, 8 Path Solutions. >> Just get my mic. >> Take your time, Jen. >> Peter: All right, Stephanie McReynolds. Far left. And finally Joe Caserta, Joe come on up. >> Stephanie's with Alation. >> And to the left. So what I want to do is start by having everybody just go around and introduce yourself quickly. Judith, why don't we start there. >> I'm Judith Hurwitz, I'm president of Hurwitz and Associates. We're an analyst research and thought leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics.
I've been in the market for a couple of years now. >> Jennifer. >> Hi, my name's Jennifer Shin. I'm the founder and Chief Data Scientist of 8 Path Solutions LLC. We do data science, analytics and technology. We're actually about to do a big launch next month, with Box actually. >> Sorry Jennifer, are we having a problem with Jennifer's microphone? >> Man: Just turn it back on? >> Oh, you have to turn it back on. >> It was on, oh sorry, can you hear me now? >> Yes! We can hear you now. >> Okay, I don't know how that turned back off, but okay. >> So you've got to redo all that, Jen. >> Okay, so my name's Jennifer Shin, I'm founder of 8 Path Solutions LLC, it's a data science, analytics and technology company. I founded it about six years ago. So we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I've been developing a lot of patents and some technology, as well as teaching at UC Berkeley as a lecturer in data science. >> You know Jim, you know Neil. Joe, you ready to go? >> Joe: Just broke my microphone. >> Joe's microphone is broken. >> Joe: Now it should be all right. >> Jim: Speak into Neil's. >> Joe: Hello, hello? >> I just feel not worthy in the presence of Joe Caserta. (several laughing) >> That's right, master of mics. If you can hear me, Joe Caserta, so yeah, I've been doing data technology solutions since 1986, almost as old as Neil here, but been doing specifically BI, data warehousing, business intelligence type of work since 1996, and been wholly dedicated to Big Data solutions and modern data engineering since 2009. Where should I be looking? >> Yeah, I don't know, where is the camera? >> Yeah, and that's basically it. So my company was formed in 2001, it's called Caserta Concepts. We recently rebranded to only Caserta, 'cause what we do is way more than just concepts. We conceptualize the stuff, we envision what the future brings, and we actually build it. And we help clients large and small who want to be leaders in innovation, using data specifically to advance their business. >> Peter: And finally Stephanie McReynolds. >> I'm Stephanie McReynolds, I head product marketing as well as corporate marketing for a company called Alation. We are a data catalog, so we help bring together not only a technical understanding of your data, but we curate that data with human knowledge, and use automated intelligence internally within the system to make recommendations about what data to use for decision making. And some of our customers, like the City of San Diego, a large automotive manufacturer working on self-driving cars, and General Electric, use Alation to help power their solutions for IoT at the Edge. >> All right, so let's jump right into it. And again, if you have a question, raise your hand and we'll do our best to get it to the floor. But what I want to do is get seven questions in front of this group and have you guys discuss, disagree, agree. Let's start here. What is the relationship between Big Data, AI and IoT? Now, Wikibon's put forward its observation that data's being generated at the Edge, that action is being taken at the Edge, and that increasingly the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective. Anybody, Judith, you want to start?
>> Yeah, so I think that if you look at AI, machine learning, all these different areas, you have to be able to have the data to learn from. Now, when it comes to IoT, I think one of the issues we have to be careful about is that not all data will be at the Edge. Not all data needs to be analyzed at the Edge. For example, if the light is green, and that's good, and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually only really want to analyze and take action when there's an anomaly. Well, if it goes purple, that's actually a sign that something might explode, so that's where you want to make sure that you have the analytics at the Edge. Not for everything, but for the things where there is an anomaly and a change. >> Joe, how about from your perspective? >> For me, I think data's eventually just going to be the oxygen we breathe. It used to be very, very reactive, and there used to be a latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it, and then you collect it, and then you can analyze it. And it was very, very waterfallish, right? And then eventually we figured out how to put it back into the system, or at least human beings interpreted it to try to make the system better. And that's really been completely turned on its head; we don't do that anymore. Right now it's very, very synchronous: we're actually making these transactions with the machines, and human beings are involved a bit, but less and less and less. And it's just a reality, it may not be politically correct to say, but it's a reality that my phone in my pocket is following my behavior, and it knows, without telling a human being, what I'm doing. And it can actually help me do things, like get to where I want to go faster, depending on my preference, whether I want to save money or save time or visit things along the way. And I think that's all the integration of big data, streaming data, artificial intelligence, and I think the next thing that we're going to start seeing is the culmination of all of that. Hopefully it'll be published soon: I just wrote an article for Forbes with the term ARBI, and ARBI is the integration of Augmented Reality and Business Intelligence. Where I think essentially we're going to see, you know, hold your phone up to Jim's face and it's going to recognize-- >> Peter: It's going to break. >> And it's going to say exactly, you know, what are the key metrics that we want to know about Jim. If he works on my sales force, what's his attainment of goal, what is-- >> Jim: Can it read my mind? >> Potentially, based on behavior patterns. >> Now I'm scared. >> I don't think Jim's buying it. >> It will, without a doubt, be able to predict: what you've done in the past you may, with some certain level of confidence, do again in the future, right? And is that mind reading? It's pretty close, right? >> Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess we could call that the Turing machine test of the paranormal. >> Well, face recognition, micro gesture recognition, I mean facial gestures, people can do it.
Maybe not better than a coin toss, but if it can be seen visually, and captured, and analyzed, conceivably some degree of mind reading can be built in. I can see when somebody's angry looking at me, so that's a possibility. That's kind of a scary possibility in a surveillance society, potentially. >> Neil: Right, absolutely. >> Peter: Stephanie, what do you think? >> Well, I hear a world of the bots versus the humans being painted here, and I think that, you know, at Alation we have a very strong perspective on this, and that is that the greatest impact, or the greatest results, are going to come when humans figure out how to collaborate with the machines. And so yes, you want to get to the location more quickly, but the machine, as in the bot, isn't able to tell you exactly what to do such that you just blindly follow it. You need to train that machine, you need to have a partnership with that machine. So a lot of the power, and I think this goes back to Judith's story, is in what human decision making can be augmented with data from the machine, while the humans are actually doing the training and driving the machines in the right direction. I think that's when we get true power out of some of these solutions, so it's not just all about the technology. It's not all about the data or the AI, or the IoT; it's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way, and I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. >> So I think we have general agreement on some of the things you talked about: IoT crucial for capturing information and then having action taken, AI crucial to defining and refining the nature of the actions that are being taken, and Big Data ultimately powering how a lot of that changes. Let's go to the next one. >> So actually I have something to add to that. I think it makes sense, right, why we have Big Data associated with IoT. If you think about what data is collected by IoT, we're talking about serial, time-stamped information, right? It's over time, so it's going to grow exponentially just by definition: every minute you collect a piece of information, which means over time it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why IoT is so strongly associated with Big Data. And it's also why you need AI to be able to differentiate between one minute versus the next minute, right? To find a better way than looking at all that information and manually picking out patterns; to have some automated process for filtering through that much data as it's being collected. >> I want to point out though, based on what you just said Jennifer, and I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically, what we've done within technology, or within IT certainly, is we've taken stylized data. There is no such thing as a real world accounting thing; it is a human contrivance. And we stylize data, and therefore it's relatively easy to be very precise about it. But when we start, as you noted, measuring things with a tolerance down to thousandths of a millimeter, now we're sometimes dealing with errors that we have to attend to.
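A minimal sketch of the filter-at-the-edge idea from this exchange: Judith's point that you only report the green light when it goes purple, and Jennifer's point that an automated process has to thin the flood of serial readings before it leaves the device. The sensor values and the uplink callback are hypothetical stand-ins.

```python
EXPECTED_STATE = "green"

def edge_filter(reading, send_upstream):
    """Forward a reading to the cloud only when it deviates from the
    expected state; steady-state observations are dropped at the edge."""
    if reading != EXPECTED_STATE:
        send_upstream(reading)   # e.g. "purple" -- something might explode

# Usage with a stand-in uplink: only the anomaly crosses the network.
for reading in ["green", "green", "purple", "green"]:
    edge_filter(reading, send_upstream=lambda r: print("alert:", r))
```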
So, the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more frequent, but it also has special cases that we have to attend to in terms of how we use it. What do you think, Neil? >> Well, I mean, I agree with that, I think I already said that, right. >> Yes you did, okay, let's move on to the next one. >> Well, it's a doppelganger, the digital twin doppelganger that's automatically created by the very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. Now, that doppelganger may not be your agent, or might not be the foundation for your agent, unless there's some other piece of logic, like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know, there needs to be that kind of logic somewhere in this fabric to enable true agency. >> All right, so I'm going to start with you. Oh, go ahead. >> I have a real short answer to this though. I think that Big Data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the facilities, we didn't have the resources to really do AI; we just kind of played around with it. And I think the other thing about it is, if you combine Big Data and AI and IoT: a lot of the applications we develop now are very inward looking. We look at our organization, we look at our customers, we try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model, and come up with applications that are much more external. >> Actually, what I would add to that is it also introduces being able to use engineering, right? Having engineers interested in the data. Because it's actually technical data that's collected, not just, say, preferences or information about people, but actual measurements that are being collected with IoT. So it's really interesting in the engineering space, because it opens up a whole new world for the engineers to actually look at data, and to combine both the hardware side as well as the data that's being collected from it. >> Well, Neil, you and I have talked about something, 'cause it's not just engineers. We have, in the healthcare industry for example, which you know a fair amount about, this notion of empirically based management. And the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way managers collect or collaborate, and ultimately, collectively, how they take action. So it's not just engineers, it's supposed to also inform business. What's actually happening in the healthcare world when we start thinking about some of this empirically based management? Is it working? What are some of the barriers? >> It's not a function of technology. What happens in medicine and healthcare research, I guess you can say, borders on fraud. (people chuckling) No, I'm not kidding. I know the New England Journal of Medicine a couple of years ago released a study and said that at least half the articles that they published turned out to be ghostwritten by pharmaceutical companies.
(man chuckling) Right, so I think the problem is that when you do a clinical study, well, the one that really killed me about 10 years ago was the Women's Health Initiative. They spent $700 million gathering this data over 20 years, and when they released it, they looked at all the wrong things, deliberately, right? So I think that's a systemic-- >> I think you're bringing up a really important point that we haven't brought up yet, and that is: can you use Big Data and machine learning to begin to take the biases out? If you divorce your preconceived notions and your biases from the data, and let the data lead you to the logic, you start to, I think, get better over time. But it's going to take a while to get there, because we do tend to gravitate towards our biases. >> I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger, and I went to, excruciating pain, went to the hospital. So the doctor examined me, and he said, you probably have a pinched nerve, he said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it. (Neil laughs) And he came back, because this little bit of information was something that could easily be looked up, right? Every nerve in your spine is connected to your different fingers, so the pointer and the thumb just happen to be your C6, so he came back and said, it's your C6. (Neil mumbles) >> You know, that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community, so by making Big Data accessible to everyone, you actually start a more rational conversation or debate on what the true insights are-- >> If that conversation includes what Judith talked about: the actual model that you use to set priorities and make decisions about what's actually important. So this is the test: it's not just about improving your understanding of the wrong thing, it's also testing whether it's the right or wrong thing as well. >> That's right. To be able to test that, you need to have humans in dialog with one another, bringing different biases to the table, to work through: okay, is there truth in this data? >> It's context and it's correlation, and you can have a great correlation that's garbage if you don't have the right context. >> Peter: So I want to, hold on Jim, I want to, >> It's exploratory. >> Hold on Jim, I want to take it to the next question, 'cause I want to build off of what you talked about, Stephanie, and that is that this says something about what the Edge is. And our perspective is that the Edge is not just devices. When we talk about the Edge, we're talking about human beings, and the role that human beings are going to play, both as sensors, carrying things with them, but also as actuators, actually taking action, which is not a simple thing. So what do you guys think? What does the Edge mean to you? Joe, why don't you start? >> Well, I think it could be a combination of the two. And specifically when we talk about healthcare: I believe in 2017, when we eat, we don't know why we're eating. I think we should absolutely, by now, be able to know exactly what is my protein level, what is my calcium level, what is my potassium level, and then find the foods to meet that.
What have I depleted versus what I should have, and eat very, very purposely and not by taste-- >> And it's amazing that red wine is always the answer. >> It is. (people laughing) And tequila, that helps too. >> Jim: You're a precision foodie is what you are. (several chuckle) >> There's no reason why we should not be able to know that right now, right? And when it comes to healthcare, the biggest problem or challenge with healthcare is that no matter how great a technology you have, you can't manage what you can't measure. And you're really not allowed to use a lot of this data, so you can't measure it, right? You can't do things very scientifically in the healthcare world, and I think regulation in the healthcare world is really burdening advancement in science. >> Peter: Any thoughts, Jennifer? >> Yes, I teach statistics for data scientists, right, so we talk about a lot of these concepts. I think what makes these questions so difficult is that you have to find a balance, right, a middle ground. For instance, in the case of whether you're being too biased through data: you could say we want to look at data only objectively, but then there are certain relationships that your data models might show that aren't actually causal relationships. For instance, if there's an alien that came from space and saw earth, saw the people, and everyone's carrying umbrellas, right, and then it started to rain: that alien might think, well, it's because they're carrying umbrellas that it's raining. Now, we know from the real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk: that you'll start making associations, or saying something's causal, when it's actually not, right? So that's one of the, I think, big challenges. I think when it comes to looking also at things like healthcare data, right: do you collect data about anything and everything? Does it mean that A, we need to collect all that data for the question we're looking at, or that it's actually the best, most optimal way to get to the answer? Meaning sometimes you can take some shortcuts in terms of what data you collect, and still get the right answer, and not have maybe that level of specificity that's going to cost you millions extra to get. >> So Jennifer, as a data scientist, I want to build upon what you just said. And that is: are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for a stylized process like accounting, or some elements of accounting. We have methods and models that lead to technology and actions and whatnot, all the way down to the point that the system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of this Big Data. We have algorithms, we have technology. But are we going to start seeing, as a data scientist, repeatability, and learning how to think the problems through, that's going to lead us to a more likely best, or at least good, result? >> So I think that's a bit of a tough question, right? Because part of it is going to depend on how many of these researchers actually get exposed to real world scenarios, right? Research looks into all these papers, and you come up with all these models, but if it's never tested in a real world scenario, well, we really can't validate that it works, right?
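Backing up to Jennifer's umbrella example for a moment, it's easy to reproduce numerically. In this invented simulation, a hidden common cause (clouds) drives both umbrella-carrying and rain, so the two correlate strongly even though neither causes the other; conditioning on the clouds would make the link vanish.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
cloudy = rng.random(n) < 0.4                 # hidden common cause
umbrellas = cloudy & (rng.random(n) < 0.9)   # people react to the sky
rain = cloudy & (rng.random(n) < 0.7)        # so does the weather

# The alien's mistake: umbrellas and rain correlate strongly,
# but neither causes the other.
print("corr:", round(np.corrcoef(umbrellas, rain)[0, 1], 2))
```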
So I think it is dependent on how much of this integration there's going to be between the research community and industry, and how much investment there is. Funding is going to matter in this case. If there's no funding on the research side, then you'll see a lot of industry folk who feel very confident about their models; but again, on the other side, of course, if researchers don't validate those models, then you really can't say for sure that it's actually more accurate, or more efficient. >> It's the issue of real world testing and experimentation. A/B testing is standard practice in many operationalized ML and AI implementations in the business world, but with real world experimentation in Edge analytics, what you're actually transducing touches people's actual lives. The problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. In other words, in terms of causal analysis, you've got to tread lightly when operationalizing that kind of testing in the IoT, when people's lives and health are at stake. >> We still give 'em placebos. So we still test 'em. All right, so let's go to the next question. What are the hottest innovations in AI? Stephanie, I want to start with you, someone at a company that's got kind of an interesting little thing happening: we start thinking about how we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it? >> I think it's a little counterintuitive, what the hottest innovations in AI are, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating it into solutions. So the best AI solutions are actually the products where you don't know there's AI operating underneath, but they're having a significant impact on business decision making, or bringing a different type of application to the market. And, you know, I think there's a lot of investment going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how we really take AI and make it have an impact on business decision making, and that means kind of hiding the AI from the business user. Because if you think a bot is making a decision instead of you, you're not going to partner with that bot very easily or very readily. Way at the start of my career, I worked in CRM, when recommendation engines were all the rage online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them. That algorithm was 99% correct most of the time, but there was this human resistance to letting a computer tell you what to tell that customer on the other side, even if it was more successful in the end. And so I think that the innovation in AI that's really going to push us forward is when humans feel like they can partner with these bots, and they don't think of it as a bot, but they think of it as assisting their work and getting to a better result-- >> Hence the augmentation point you made earlier. >> Absolutely, absolutely. >> Joe, how 'about you? What do you look at? What are you excited about? >> I think the coolest thing at the moment right now is chatbots. To have voice be able to speak with you in natural language, to do that, I think that's pretty innovative, right?
And I do think that eventually, for the average user, not for techies like me, but for the average user, I think keyboards are going to be a thing of the past. I think we're going to communicate with computers through voice, and I think this is the very, very beginning of that, and it's an incredible innovation. >> Neil? >> Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in the military, the defense industry, in all sorts of things. Meteorology. And that's where, well, hopefully not on an everyday basis with the military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you can probably figure that one out, right? But there are legions of people in many, many companies that are involved in that industry. So if you're talking about the dollars spent on AI, I think the stuff that we do in our industries is probably fairly small. >> Well, it reminds me of an application I actually thought was interesting about AI related to that: AI being applied to removing mines from war zones. >> Why not? >> Which is not a bad thing for a whole lot of people. Judith, what do you look at? >> So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to really have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. Those are some of the things that data scientists still spend a lot of their time on, but that can be augmented; basically, we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. >> Peter: Jennifer? >> So I'm going to say computer vision. >> Computer vision? >> Computer vision. So computer vision ranges from image recognition, being able to say what content is in the image. Is it a dog, is it a cat, is it a blueberry muffin? Like that popular post out there, where it's a blueberry muffin versus, I think, a chihuahua, and it compares the two. And can the AI really actually detect the difference, right? So I think that's really where a lot of people who are in both the AI space and data science are looking for the new innovations. For instance, Cloud Vision, I think that's what Google still calls it: the Vision API they've released in beta allows you to actually use an API to send your image and then have it be recognized, right, by their API. There's another startup in New York called Clarifai that does a similar thing, and Amazon has their Rekognition platform as well. So from images, being able to detect what's in the content, as well as from videos, being able to say things like: how many people are entering a frame? How many people enter the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? >> There's actually an extra piece to that. So if I have a picture of a stop sign, and I'm an automated car: is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's going to be one of the complications. >> Doesn't matter to a New York City cab driver. How 'about you, Jim?
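Before Jim's answer, a rough sketch of the kind of label-detection call Jennifer just described, here against Google's Cloud Vision REST endpoint. The API key and file name are placeholders, and only the request shape is shown; Clarifai and Amazon Rekognition expose broadly similar label-detection calls.

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"          # placeholder credential
URL = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

# Read and base64-encode the image, as the REST API expects.
with open("muffin_or_chihuahua.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {"requests": [{
    "image": {"content": content},
    "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
}]}

# Print the top labels and their confidence scores.
resp = requests.post(URL, json=body).json()
for label in resp["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```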
>> Probably not. (laughs) >> The hottest thing in AI is Generative Adversarial Networks, GANs. What's hot about that? Well, I'll be very quick: most AI, most deep learning, machine learning, is analytical; it's distilling or inferring insights from the data. Generative takes that same algorithmic basis but uses it to build stuff. In other words, to create realistic looking photographs, to compose music, to build CAD/CAM models, essentially, that can be constructed on 3D printers. So GANs are a huge research focus all around the world, and are increasingly used for natural language generation. In other words, it's institutionalizing, or building a foundation for, nailing the Turing test every single time: building something with machines that looks like it was constructed by a human, and doing it over and over again to fool humans. I mean, you can imagine the fraud potential. But you can also imagine just the sheer, like, it's going to shape the world, GANs. >> All right, so I'm going to say one thing, and then we're going to ask if anybody in the audience has an idea. So the thing that I find interesting is that with traditional programs, when you tell a machine to do something, you don't need incentives. When you tell a human being something, you have to provide incentives. Like, how do you get someone to actually read the text? And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is absolutely fascinating to me. Whether it's gamification, or even some things we're thinking about with blockchain and bitcoin and related types of stuff. To my mind that's going to have an enormous impact, some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think, sir? And I'll try to do my best to repeat it. Oh, we have a mic. >> So my question's pretty much about what Stephanie's talking about, which is human-in-the-loop training, right? I come from a computer vision background. That's the problem: we need millions of images trained, and we need humans to do that. And the workforce is essentially people that aren't necessarily part of the AI community; they're people that are just able to use that data, analyze the data, and label that data. That's something that I think is a big problem everyone in the computer vision industry, at least, faces. I was wondering-- >> So again, is the problem the difficulty of methodologically bringing together people who have domain expertise and people who have algorithm expertise, and having them work together? >> I think the expertise issue comes in healthcare, right? In healthcare you need experts to be labeling your images. With contextual information, where essentially augmented reality applications are coming in, you have ARKit and everything coming out, but there is a lack of context-based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of the foundational basis of AI going forward: it's not necessarily an algorithm, right? It's: how well is the data labeled? Who's doing the labeling, and how do we ensure that it happens? >> Great question. So for the panel: if you think about it, a consultant talks about being on the bench; how much time are they going to have to spend on trying to develop additional business? How much time should we set aside for executives to help train some of these assistants?
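A stripped-down sketch of the generator/discriminator pairing Jim describes above, in Keras. The layer sizes and noise dimension are arbitrary, and the adversarial training loop is omitted; this only shows the two networks and how the generator's output feeds the discriminator.

```python
import tensorflow as tf

# Generator: maps random noise to a synthetic sample (a 28x28 "image").
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# Discriminator: scores a sample as real (1) or generated (0).
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Adversarial setup: the generator is trained to push these scores toward 1
# (fooling the discriminator), while the discriminator is trained to score
# fakes near 0 and real data near 1.
noise = tf.random.normal([16, 100])
fake_scores = discriminator(generator(noise))
print(fake_scores.shape)   # (16, 1)
```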
>> I think the key is to think of the problem a different way. You could have people manually label data, and that's one way to solve the problem. But you can also look at what the natural workflow of that executive, or that individual, is, and ask whether there's a way to gather that context automatically using AI, right? And if you can do that, it's similar to what we do in our product: we observe how someone is analyzing the data, and from those observations we can actually create the metadata that then trains the system in a particular direction. But you have to think about solving the problem differently: finding the workflow that you can feed into, to make this labeling easy without the human really realizing that they're labeling the data. >> Peter: Anybody else? >> I'll just add to what Stephanie said: in the IoT applications, all those sensory modalities, the computer vision, the speech recognition, all that, that's all potential training data. So it cross-checks against all the other models that are processing all the other data coming from that device. So the natural language processing of understanding can be reality-checked against the images that the person happens to be commenting upon, or the scene in which they're embedded, so yeah, the data's embedded-- >> I don't think we're at the stage yet where this is easy. It's going to take time before we do start doing the pre-training of some of these details so that it goes faster, but right now there aren't that many shortcuts. >> Go ahead, Joe. >> Sorry, so a couple of things. One is, I was just caught up on your point about incentivizing programs to be more efficient, like humans. You know, Ethereum, the blockchain, has this concept of gas, where as the process becomes more efficient it costs less to actually run, right? It costs less ether, right? So the machine is actually incentivized, and you don't really know what it's going to cost until the machine processes it, right? So there is some notion of that there. But as far as vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing: as people start using it more, they're going to be adding more pictures, very, very organically. And then the machines will be trained; right now it's a very small handful doing it, very proactively, by the Googles and the Facebooks and all of that. But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through a very organic process. >> So Neil, let me ask you a question. Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? >> Well, to a certain extent the people who are contributing the insight own nothing, because the systems collect their actions and the things they do, and then that data doesn't belong to them; it belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff: it's not enough to say that people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole, oh my god, I'm not going to go against the senior professor.
A lot of these, well, I knew a guy who was a doctor at the University of Pittsburgh, and they were doing a clinical study on the tubes that they put in little kids' ears who have ear infections, right? And-- >> Google it! Who helps out? >> Anyway, I forget the exact thing, but he came out and said that the principal investigator lied when he made the presentation, that it should be this, I forget which way it went. He was fired from his position at Pittsburgh, and he has never worked as a doctor again, 'cause he went against the senior line of authority. He was-- >> Another question back here? >> Man: Yes, Mark Turner has a question. >> Not a question, just want to piggyback on what you're saying about the transformation, maybe in healthcare, of black and white images into color images, in the case of sonograms and ultrasounds and mammograms. Do you see that happening using AI? I mean, it's already happening; do you see it moving forward in that kind of way? Talk more about that, about AI and black and white images being used, and how they can be converted to color images so you can see things better, and doctors can perform better operations. >> So I'm sorry, but could you summarize that? What's the question? >> I have a lot of students who are interested in the cross-pollination between AI and, say, the medical community, as far as things like ultrasounds and sonograms and mammograms, and how you can literally take a black and white image and, using algorithms and so forth, have it made into a color image that can help doctors better do the work that they've already been doing, just do it better. You touched on it for like 30 seconds. >> So: how AI can be used to actually add information, in a way that's not necessarily invasive but ultimately improves how someone might respond to it or use it, yes? Related? I've also got something to say about medical images in a second. Any of you guys want to, go ahead Jennifer. >> Yeah, so for one thing, and it kind of goes back to what we were talking about before: when we look at, for instance, scans, at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who doesn't have a medical background, to identify where the nodule is, a doctor actually had to go in and specify which slice of the scan had the nodule, and where exactly it is. So it's on both the slice level as well as, within that 2D image, where it's located and what size it is. So the beauty of things like AI is that, right now, a radiologist has to look at every slice and actually identify this manually, right? The goal, of course, would be that one day we wouldn't have to have someone look at every slice, up to 300 slices usually, and we'd be able to identify it in a much more automated way. And I think the reality is we're not going to get something that's 100%; as with anything we do in the real world, it's always something like a 95% chance of it being accurate. So I think it's finding that in-between: what's the threshold that we want to use to be able to say that this is definitively a lung cancer nodule or not? I think the other thing to think about is how they use other information. What they might use is, for instance, other characteristics of the person's health as sort of a grading, right? So, how dark or how light something is, to identify, maybe in that region, the prevalence of that specific variable.
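A tiny sketch of the operating-threshold question Jennifer raises here: given per-slice model probabilities, which CT slices get flagged for radiologist review depends entirely on where you set the cutoff. The probabilities below are invented for illustration.

```python
def flag_slices(slice_probs, threshold):
    """Return indices of CT slices whose model probability of containing
    a nodule meets the chosen operating threshold."""
    return [i for i, p in enumerate(slice_probs) if p >= threshold]

probs = [0.02, 0.10, 0.97, 0.40, 0.97, 0.93]   # invented per-slice scores
print(flag_slices(probs, threshold=0.95))       # strict cutoff: [2, 4]
print(flag_slices(probs, threshold=0.90))       # looser cutoff: [2, 4, 5]
```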
So that's usually how they integrate that information into something that already exists in the computer vision sense. I think the difficulty with this, of course, is being able to identify which variables were introduced into the data that does exist. >> So I'll make two quick observations on this, then I'll go to the next question. One is, radiologists have historically been some of the highest paid physicians within the medical community, partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients; they tend to spend time with doctors, which means they can do a lot of work in a little bit of time, and charge a fair amount of money. As we start to introduce some of these technologies that allow us, from a machine standpoint, to actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. >> It's also disruptive, as I'm seeing more and more studies showing that deep learning models processing images, ultrasounds and so forth, are getting as accurate as many of the best radiologists. >> That's the point! >> At detecting cancer. >> Now radiologists are saying, oh look, we do this great thing in terms of interacting with the patients, which they never have, because they're being disintermediated. The second thing that I'll note is one of my favorite examples of that, if I've got it right, is looking at the deep space images that come out of Hubble, where they're taking data from thousands, maybe even millions of images and combining it together in interesting ways so you can actually see depth. You can actually move through, at a very, very small scale, a system that's maybe six billion light years away. Fascinating stuff. All right, so let me go to the last question here, and then I'm going to close it down, then we can have something to drink. What are the hottest, oh I'm sorry, question? >> Yes, hi, my name's George, I'm with BlueTalon. You asked earlier the question: what's the hottest thing in the Edge and AI? I would say that it's security. It seems to me that before you can empower agency, you need to be able to authorize what they can act on, how they can act on it, and who they can act on. So it seems that if you're going to move to very distributed data at the Edge, and analytics at the Edge, there has to be security similarly done at the Edge. And I saw (speaking faintly) slides that called out security as a key prerequisite, and maybe Judith can comment, but I'm curious how security's going to evolve to meet this analytics at the Edge. >> Well, let me do that, and I'll ask Jim to comment. The notion of agency is crucially important, slightly different from security, just so we're clear. And the basic idea here is that historically folks have thought about moving data, or they've thought about moving application function; now we are thinking about moving authority. So, as you said. That's not really a security question, but this has been a problem of concern in a number of different domains: how do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. >> Yeah, actually, thank you for bringing up security. So identity is the foundation of security: strong identity, multifactor, face recognition, biometrics and so forth.
Clearly AI, machine learning, deep learning are powering a new era of biometrics, and behavioral metrics and so forth that are organic to people's use of devices. Getting to the point that Peter was raising is important: agency! Systems of agency. You as a human being should be vouching, in a secure, tamper-proof way, for the identity of some agent, physical or virtual, that does stuff on your behalf. How should that be managed within this increasingly distributed IoT fabric? Well, a lot of that's been worked out. It all ran through webs of trust, public key infrastructure, formats and, you know, SAML for single sign-on and so forth. It's all about assertions, strong assertions and vouching; I mean, there are whole workflows of these things. Back in the ancient days, when I was actually a PKI analyst three analyst firms ago, I got deep into all the guts of all those federation agreements. Something like that has to be IoT scalable, to enable systems of agency to be truly fluid, so we can vouch for our agents wherever they happen to be. We're going to keep on having, as human beings, agents all over creation; we're not even going to be aware of everywhere that our agents are, but our identity-- >> It's not just-- >> Our identity has to follow. >> But it's not just identity, it's also authorization and context. >> Permissioning, of course. >> So I may be the right person to do something yesterday, but I'm not authorized to do it in another context, in another application. >> Role-based permissioning, yeah. Or persona-based. >> That's right. >> I agree. >> And obviously it's going to be interesting to see the role that blockchain, or its follow-on technology, is going to play here. Okay, so let me throw one more question out: what are the hottest applications of AI at the Edge? We've talked about a number of them; does anybody want to add something that hasn't been talked about? Or do you want to get a beer? (people laughing) Stephanie, you raised your hand first. >> I'll bring something mundane to the table, actually, because I think some of the most exciting innovations with IoT and AI are actually simple things, like the City of San Diego rolling out 3,200 automated street lights that will actually help you find a parking space and reduce the amount of emissions into the atmosphere, so it has some positive environmental change impact. I mean, it's street lights; it's not the medical industry, it doesn't look like a life changing innovation. And yet if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone's life. >> And dramatically suppress the impact of backseat driving! >> (laughs) Exactly. >> Joe, what were you saying? >> I was just going to say, there's already technology out there where you can put a camera on a drone, with machine learning, with artificial intelligence within it, and it can look at buildings and determine whether there are rusty pipes, and cracks in cement, and leaky roofs, and all of those things. And that's all based on artificial intelligence. And I think if you can do that, being able to look at an x-ray and determine if there's a tumor there is not out of the realm of possibility, right? >> Neil? >> I agree with both of them; that's what I meant about external kinds of applications.
Instead of figuring out what to sell our customers, which is most of what we hear. I just think all of those things are eminently doable. And boy, street lights that help you find a parking place, that's brilliant, right? >> Simple! >> It improves your life more than, I dunno, something I used on the internet recently, but I think it's great! I'd like to see a thousand things like that. >> Peter: Jim? >> Yeah, building on what Stephanie and Neil were saying: it's ambient intelligence built into everything, to enable fine-grained microclimate awareness for all of us as human beings moving through the world, and to enable reading of every microclimate in buildings. In other words, you have sensors on your body that are always detecting the heat, the humidity, the level of pollution or whatever, in every environment that you're in or that you might be likely to move into fairly soon, and it can either help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your specific requirements, like the lighting or whatever it might be. And when you have a room like this, full of other human beings, there has to be some negotiated settlement; some will find it too hot, some will find it too cold, or whatever. But I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet, potentially. That's really the Edge analytics application that depends on everybody being fully equipped with a personal area network of sensors that's communicating into the cloud. >> Jennifer? >> So I think what's really interesting about it is being able to utilize the technology we do have; it's a lot cheaper now to have a lot of these ways of measuring that we didn't have before. And whether or not engineers can then leverage what we have as ways to measure things, and then of course you need people like data scientists to build the right models. So you can collect all this data, but if you don't build the right model that identifies these patterns, then all that data's just collected and it just sits in a repository. So without the models that surface the patterns that are actually in the data, you're not going to find a better way of getting insights from the data itself. So I think what will be really interesting is to see how existing technology is leveraged to collect data, and then how that's actually modeled, as well as how technology develops from where it is now, to be able to collect things more sensitively; or, in the case of, say, how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day, and then be able to model that in a way that is actually going to give us better insights into things like healthcare, and maybe even just our behaviors. >> Peter: Judith? >> So, I think we also have to look at it from a peer to peer perspective. I may be able to get some data from one thing at the Edge, but then all those Edge devices, sensors or whatever, they all have to interact with each other. Because we may, in our business lives, act in silos, but in the real world, when you look at things like sensors and devices, it's how they react with each other on a peer to peer basis. >> All right, before I invite John up, I'll say what my thing is, and it's not the hottest.
It's the one I hate the most. I hate AI-generated music. (people laughing) Hate it. All right, I want to thank all the panelists, every single person, some great commentary, great observations. I want to thank you very much. I want to thank everybody that joined. John, in a second you'll announce who's the big winner. But the one thing I want to do is, as I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember, and I'm going to give you the award, Stephanie. And that is: increasingly we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The flip side of that is that all of us have to be very cognizant of the idea that AI is acting on our behalf, and we may not know it. So, John, why don't you come on up. Who won the, whatever it's called, the raffle? >> You won. >> Thank you! >> How 'about a round of applause for the great panel. (audience applauding) Okay, we had people put their business cards in the basket; we're going to have that brought up. We're going to have two raffle gifts: some nice Bose headsets, and a speaker, a Bluetooth speaker. Got to wait for that. I just want to say thank you for coming, and for the folks watching: this is our fifth year doing our own event called Big Data NYC, which is really an extension of the landscape beyond the Big Data world, that's Cloud and AI and IoT and other great things happening, and great experts and influencers and analysts here. Thanks for sharing your opinions. Really appreciate you taking the time to come out and share your data and your knowledge. Appreciate it. Thank you. Where's the? >> Sam's right in front of you. >> There's the thing, okay. Got to be present to win. We saw some people sneaking out the back door to go to a dinner. >> First prize first. >> Okay, first prize is the Bose headset. >> Bluetooth and noise canceling. >> I won't look; Sam, you've got to hold it down, I can see the cards. >> All right. >> Stephanie, you won! (Stephanie laughing) Okay, Sawny Cox, Sawny Allie Cox? (audience applauding) Yay, look at that! He's here! The bar's open, so help yourself, but we've got one more. >> Congratulations. Picture right here. >> Hold that, I saw you. Wake up a little bit. Okay, all right. Next one is, my kids love this. This is great, great for the beach, great for everything: a portable speaker, a great gift. >> What is it? >> Portable speaker. >> It is a portable speaker, it's pretty awesome. >> Oh, you grabbed mine. >> Oh, that's one of our guys. >> (laughing) But who was it? >> Can't be related! Ava, Ava, Ava. Okay, Gene Penesko. (audience applauding) Hey! He came in! All right, look at that, the timing's great. >> Another one? (people laughing) >> Hey, thanks everybody, enjoy the night. Thank Peter Burris, head of research for SiliconANGLE and Wikibon, and the great guests and influencers and friends, and you guys for coming in the community. Thanks for watching and thanks for coming. Enjoy the party and some drinks, and that's it for the influencer panel and analyst discussion. Thank you. (logo music)

Published Date: Sep 28 2017

SUMMARY :

(The auto-generated summary for this segment consists of scrambled transcript fragments. Recoverable context: a Big Data NYC 2017 influencer and analyst panel, hosted by Peter Burris with John Furrier, featuring panelists including Judith Hurwitz, Jim Kobielus, Jennifer Shin, Neil Raden, Stephanie McReynolds, and Joe Caserta, covering the cloud extending to the edge, edge analytics and IoT, AI and machine learning in healthcare and the limits of biased training data, GANs, conversational interfaces displacing keyboards, AI applied to removing mines from war zones, and who owns the value AI generates, closing with a raffle.)

SENTIMENT ANALYSIS :

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Judith | PERSON | 0.99+ |
| Jennifer | PERSON | 0.99+ |
| Jim | PERSON | 0.99+ |
| Neil | PERSON | 0.99+ |
| Stephanie McReynolds | PERSON | 0.99+ |
| Jack | PERSON | 0.99+ |
| 2001 | DATE | 0.99+ |
| Marc Andreessen | PERSON | 0.99+ |
| Jim Kobielus | PERSON | 0.99+ |
| Jennifer Shin | PERSON | 0.99+ |
| Amazon | ORGANIZATION | 0.99+ |
| Joe Caserta | PERSON | 0.99+ |
| Suzie Welch | PERSON | 0.99+ |
| Joe | PERSON | 0.99+ |
| David Floyer | PERSON | 0.99+ |
| Peter | PERSON | 0.99+ |
| Stephanie | PERSON | 0.99+ |
| Jen | PERSON | 0.99+ |
| Neil Raden | PERSON | 0.99+ |
| Mark Turner | PERSON | 0.99+ |
| Judith Hurwitz | PERSON | 0.99+ |
| John | PERSON | 0.99+ |
| Elysian | ORGANIZATION | 0.99+ |
| Uber | ORGANIZATION | 0.99+ |
| Qualcomm | ORGANIZATION | 0.99+ |
| Peter Burris | PERSON | 0.99+ |
| 2017 | DATE | 0.99+ |
| Honeywell | ORGANIZATION | 0.99+ |
| Apple | ORGANIZATION | 0.99+ |
| Derek Sivers | PERSON | 0.99+ |
| New York | LOCATION | 0.99+ |
| AWS | ORGANIZATION | 0.99+ |
| New York City | LOCATION | 0.99+ |
| 1998 | DATE | 0.99+ |
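The entity list above is machine-extraction output whose three columns (entity, category, confidence) arrive flattened in the raw page: the category label is fused onto the end of the entity name, and the confidence sits on its own line. A minimal sketch of recovering the rows, assuming the fused NameCATEGORY layout shown here and the small closed set of category labels; the function name `parse_entity_dump` and the category tuple are illustrative assumptions, not part of any real extraction API:

```python
# Known category suffixes fused onto entity names in the raw dump (assumed closed set).
CATEGORIES = ("ORGANIZATION", "LOCATION", "QUANTITY", "PERSON", "DATE")

def parse_entity_dump(lines):
    """Yield (entity, category, confidence) tuples from the flattened dump,
    where each record is a 'NameCATEGORY' line followed by a confidence line."""
    # Drop blank lines and the fused header line.
    rows = [ln.strip() for ln in lines
            if ln.strip() and ln.strip() != "EntityCategoryConfidence"]
    # Records alternate: name+category line, then confidence line.
    for name_cat, conf in zip(rows[0::2], rows[1::2]):
        for cat in CATEGORIES:
            if name_cat.endswith(cat):
                yield name_cat[: -len(cat)], cat, conf
                break

raw = ["EntityCategoryConfidence",
       "JudithPERSON", "", "0.99+", "",
       "AmazonORGANIZATION", "", "0.99+"]
for row in parse_entity_dump(raw):
    print(row)   # ('Judith', 'PERSON', '0.99+') then ('Amazon', 'ORGANIZATION', '0.99+')
```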

Tod Nielsen, VMware Hosts Phil Soran, Compellent & Heineken Netherlands- VMworld 2010- theCUBE


 

>> Welcome back to VMworld 2010, live at theCUBE at the Moscone Center in San Francisco, California. Please welcome this morning's press conference with VMware, Compellent Technologies, and their customer Heineken from the Netherlands. Speaking today are Tod Nielsen, chief operating officer of VMware; Phil Soran, president and CEO of Compellent Technologies; and from Heineken Netherlands, Mike, virtualization team lead, and Lucien de Konak, project manager. And now please welcome Tod Nielsen, the chief operating officer of VMware. >> It's great to be here. We'd like to welcome you to the Compellent-VMware press conference, and I want to say a couple of words about Compellent Technologies and our partnership with them at VMware. They've been a great storage partner of ours, we have a number of customers together, and we really like working with them to drive value for our overall customers. As we announced yesterday at VMworld, for every dollar of license revenue that VMware sells, our partners, our ecosystem, are able to add on, or drag with it, fifteen dollars of ecosystem revenue, and the Compellent folks are a great example of a partnership with VMware where our solutions work well together and we do some exciting things. We're going to hear from the president and CEO of Compellent and one of their customers, but before we do, one of my favorite twists of this press conference: a differentiation of Compellent is the fluid data architecture, and I think it's somewhat ironic, after last night's beer crawls at VMworld 2010, that Heineken happens to be the customer on stage. So I'm sure there's a story there, and I would like to introduce Phil Soran, the president and CEO of Compellent, to tell us about the company and about the Heineken beer crawls. >> Great, Tod, thanks a lot. We're just thrilled to be up here on stage with you, participating in the fantastic show you have in operation here at Moscone in San Francisco, and we're thrilled to have a joint announcement with our customer Heineken here, and to have them here from the Netherlands to share the excitement with us. But let me tell you a little about Compellent. We're a data storage company with the fluid data architecture. We've been really the innovator if you look at primary storage innovation over the last decade: things like thin provisioning, sub-LUN automated tiered storage, tiering across disk platters, flexible volumes, portable volumes. You look at all those types of innovations in storage over the last decade, and Compellent has been the leader in that whole space, and I think we've been able to get ahead of some of the incumbent vendors with our innovation. And we're growing really fast: we grew about thirty-eight percent year-over-year last year, we're one of the fastest growing SAN vendors in the world, and we're hoping to keep that growing. We have about 2,100 customers in 34 countries, Heineken being a good example in the Netherlands. They're running their mission-critical enterprise applications on us for their worldwide operations, and I would say of the 2,100, about 2,090 of them are also running some form of VMware, so this partnership with VMware is very, very important to us and we're real excited about it. To talk a little bit about our patented technology: we call it the fluid data architecture, and we thought there was no better customer to do a joint press conference with on our fluid data architecture than Heineken. So the ultimate fluid data architecture is the combination of Heineken and
Compellent, and our system is so easy to use that you can actually enjoy a Heineken while you're a storage administrator. So we like that. Heineken Netherlands is our customer, we have Mike and Lucien de Konak here, and we're real excited to hear their story. They're part of the global enterprise of customers we talked about; we have customers in all industries, verticals, and geographic areas. We're actually announcing this week the expansion of our Australian operations, where we have dozens of customers already. And now let's take it back to the Netherlands and hear a real customer story about how VMware and Compellent can cut the total cost of ownership in a data center by more than fifty percent with the combination of our two efforts, and also improve the operational efficiency of those data centers. Let's hear Mike and Lucien tell us a little bit about it. >> Okay, thank you very much, Phil. I guess I don't have to introduce the company itself, because we all know the core business of our company: brewing beer. And not only beer, we aim to brew great beers and great brands, and that makes us the number one brewer in Europe and the number three in volume worldwide. We have over 200 regional and local beers and ciders in total, and when we look at our breweries, in almost every country we have a brewery or Heineken is available. When we look at Heineken International, we're a very large company, in almost every country as I just said, and we have 130 to 140 breweries in more than 70 countries, which is good for a group beer volume of 200 million hectoliters a year, and that includes ciders. When we look at the Netherlands, we have only three breweries; that's where it all started. We have 18 million hectoliters of total supply, but we're not drinking it all ourselves: the domestic market is only about five million hectoliters, and the rest of the volume goes to the USA, so it's all export for us, and that's where all your beers are coming from. And as a strategy, we introduced Heineken Light several years ago, made especially for the USA market, because we don't drink it. Okay, when we take a look at the virtualization roadmap for Heineken: we started about six years ago, in 2004, when VMware was the only real player in the market. We introduced it when we were consolidating our data centers at our main location, Zoeterwoude. We came from about 12 server rooms to one major data center, in which we used storage from HP at the time, along with HP blade infrastructure, and we decided to go with VMware for our DTA environments, the development, test, and acceptance environments. After several years we outgrew our storage capacity and needed to upgrade, so we replaced the EVA with a forklift upgrade to another EVA, and we also introduced a new version of VMware. A year later we thought everybody was ready to go to production; everyone was used to the DTA environment, and I was confident it would work in production also. So we started with the bronze servers, the servers that are not mission-critical for us. Those were a great success, and last year we started a new project to virtualize every gold and silver system we have; that means every mission-critical and high-priority system we use for brewing, packaging, and distribution. And the latest news is that last weekend we migrated one of the last warehouse management systems; that's also virtualized now and is running
perfectly. Where are we going? At the end of the year we're going to vSphere 4, of course, that's the main thing, and last year we decided to choose another storage solution: we chose Compellent. This is where Lucien comes in; you can tell about the choice we made and why we did it. >> Okay, thank you. Well, I'll tell you something about the project itself, the migration, and why we chose Compellent in the first place. We really needed to look for other solutions because, especially in the two main sites, Zoeterwoude and Den Bosch, we had some serious problems, particularly the support costs, because after three years you pay an enormous amount of money for support from HP. We also had capacity problems and experienced severe performance issues in Zoeterwoude, so that meant we had to take action fast. Also, we were stuck on the EVA 5000, which didn't allow us to upgrade to a newer version of VMware, and we couldn't use Windows Server 2008, which was very high on our priority list. Furthermore, business continuity is planned for early next year, so we wanted a solution which could provide us that. And also, Heineken has what's called, well, it's not really a project, but 'Sequoia', the hunt for cash within IT at Heineken, which meant that we wanted to reduce IT costs as much as possible. Another problem was that we had a major issue with reporting from our current SAN infrastructure. Why did we choose Compellent? Well, it operates with every operating system, which is very, very important; it's one solution that fits really everything, which is what we experienced during the migration as well; we could start with replication early next year, which is also very important; and we needed a high-performance solution. But what it eventually came down to, why we chose Compellent, is that it's excellent value for money. The fluid data concept we really appreciated for what we can use, and it gives us high flexibility. One of the major pros is the excellent reporting facilities; I've never seen better reporting functionality inside a product such as Compellent's. And what is also very important: we got 24/7 proactive support, and that's something you will never get for free. Okay, well, as a result we have at least sixty percent virtualized, and actually, like Mike said, last week we went to 61 percent because we virtualized two more VMs, and at the speed we are going now it really looks like we'll be at ninety percent in 2012, and I think that's really feasible. We significantly reduced the number of disks, which meant lower power draw and lower rack space. For example, the EVA 5000 cost us one and a half 19-inch racks, and right now it's about 12U, so it's a real big difference. The performance improvement we see on all layers: not only on the Windows servers, also on AIX systems we see an enormous improvement. We did have to do some optimization, but with the support of Copilot in the last month we had an excellent result, and we even have much better performance than we ever had. And that's because, yeah, we are finally using solid state drives, because we really needed that for a SQL reporting server which is very business-critical, and on the old EVA we reached about twenty-five to thirty-five minutes for a report which needed to be ready before a certain time, and now we've even cut times to under 20 minutes, so you see how fast it really is.
So next week, actually, the final virtual machine will be migrated from the old EVA to Compellent, and that will finalize our migration at both breweries, and so far no disruption whatsoever, so we're very pleased. >> Perfect. So that's our part of the presentation. Thank you. Somebody talks out of the sky now, right? Any questions? Well, the question was: with all the savings he's gotten in his data center, can you lower the cost of Heineken beer for everyone? And a new kind of Heineken Light, right? >> Yeah, how are we going to do that? That's not up to me. >> We really want to thank you guys for sharing that story. I mean, it just hit all our bullets about, you know, the future: built-in performance, flexibility, fluid data, VMware and Compellent working together. We're just really, really excited, and we appreciate you sharing your story with our viewers and our customers and our prospects out in the audience here. Okay, thank you guys. >> Yeah, okay.
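Phil Soran's pitch leans on sub-LUN automated tiered storage, the core idea of the fluid data architecture: a volume is carved into pages, and hot pages migrate to fast media while cold ones settle onto cheap spinning disk. Below is a minimal sketch of that idea, assuming a simple per-page access counter with a fixed promotion threshold; the page size, threshold, tier names, and class name are illustrative assumptions, not Compellent's actual algorithm or parameters.

```python
from collections import defaultdict

PAGE_SIZE = 2 * 1024 * 1024   # 2 MiB pages (assumed page granularity)
PROMOTE_THRESHOLD = 100       # I/Os per observation cycle (assumed)

class TieredVolume:
    """Toy model of sub-LUN tiering: track I/O per page, rebalance per cycle."""
    def __init__(self):
        self.access_counts = defaultdict(int)    # page index -> I/O count this cycle
        self.tier = defaultdict(lambda: "hdd")   # page index -> current tier

    def record_io(self, offset_bytes):
        # Attribute the I/O to the page containing this byte offset.
        self.access_counts[offset_bytes // PAGE_SIZE] += 1

    def rebalance(self):
        # Run once per cycle: promote hot pages to SSD, demote cold ones to HDD.
        for page, count in self.access_counts.items():
            self.tier[page] = "ssd" if count >= PROMOTE_THRESHOLD else "hdd"
        self.access_counts.clear()               # start a fresh observation window

vol = TieredVolume()
for _ in range(150):
    vol.record_io(0)                # hammer the first page
vol.record_io(10 * PAGE_SIZE)       # touch a cold page once
vol.rebalance()
print(vol.tier[0], vol.tier[10])    # prints: ssd hdd
```

In a real array the rebalance would actually move data and weigh recency as well as frequency; the point of the sketch is only the page-level bookkeeping that lets a single volume span SSD and spinning disk, which is what lets Heineken put one business-critical SQL reporting server on solid state while the bulk of the volume stays on cheaper drives.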

Published Date : Feb 27 2012

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Teresa | PERSON | 0.99+ |
| Comcast | ORGANIZATION | 0.99+ |
| Amazon Web Services | ORGANIZATION | 0.99+ |
| Amazon | ORGANIZATION | 0.99+ |
| Khalid Al Rumaihi | PERSON | 0.99+ |
| Phil Soren | PERSON | 0.99+ |
| Bahrain | LOCATION | 0.99+ |
| Mike | PERSON | 0.99+ |
| Dave Volante | PERSON | 0.99+ |
| TIBCO | ORGANIZATION | 0.99+ |
| General Electric | ORGANIZATION | 0.99+ |
| Teresa Carlson | PERSON | 0.99+ |
| John Furrier | PERSON | 0.99+ |
| Jeff Frick | PERSON | 0.99+ |
| Tony | PERSON | 0.99+ |
| 2016 | DATE | 0.99+ |
| AWS | ORGANIZATION | 0.99+ |
| Pega | ORGANIZATION | 0.99+ |
| Khalid | PERSON | 0.99+ |
| Tony Baer | PERSON | 0.99+ |
| Asia | LOCATION | 0.99+ |
| Dave Vellante | PERSON | 0.99+ |
| 2014 | DATE | 0.99+ |
| $100 million | QUANTITY | 0.99+ |
| Palo Alto | LOCATION | 0.99+ |
| Sunnyvale | LOCATION | 0.99+ |
| March 2015 | DATE | 0.99+ |
| Dave | PERSON | 0.99+ |
| Jeff | PERSON | 0.99+ |
| Mongo | ORGANIZATION | 0.99+ |
| 46% | QUANTITY | 0.99+ |
| 90% | QUANTITY | 0.99+ |
| Todd Nielsen | PERSON | 0.99+ |
| 2017 | DATE | 0.99+ |
| September | DATE | 0.99+ |
| Microsoft | ORGANIZATION | 0.99+ |
| July | DATE | 0.99+ |
| US | LOCATION | 0.99+ |
| Atlas | ORGANIZATION | 0.99+ |
| Bahrain Economic Development Board | ORGANIZATION | 0.99+ |
| Kuwait | LOCATION | 0.99+ |
| Malta | LOCATION | 0.99+ |
| Hong Kong | LOCATION | 0.99+ |
| Singapore | LOCATION | 0.99+ |
| 2012 | DATE | 0.99+ |
| Gulf Cooperation Council | ORGANIZATION | 0.99+ |
| So Cal | ORGANIZATION | 0.99+ |
| VMware | ORGANIZATION | 0.99+ |
| United States | LOCATION | 0.99+ |
| Vegas | LOCATION | 0.99+ |
| John | PERSON | 0.99+ |
| New York | LOCATION | 0.99+ |