Compute Session 04
Good morning, good afternoon, and good evening to all those who are listening to this presentation. I am [inaudible] Saxena, and I manage the Platform Solutions and Third-Party Operating Systems team in the Compute Workload and Solutions group within HPE Compute. Today I'm going to discuss containers, what containers can do for you as a customer, and why you should consider HPE container solutions for transforming your business.

Let's talk about how some of the trends seen in the industry are impacting customers day in and day out, and what it is that they really need. Cloud services and containerization increase operational flexibility, agility, and speed, but non-cloud-native apps are a serious issue. These legacy apps and architectures slow development teams, making it much harder to meet competitive demands and cost pressures. IT administrators are looking for a way to quickly deploy and manage the resources their developers need. They want to release more updates, more quickly. Digital transformation has really shifted its focus from operations to applications; it's all about gaining the agility to deploy code faster. Developers want the flexibility to choose from a variety of OS or containerized app stacks, and to have fast access to the resources they need. And CEOs and line-of-business owners need visibility into cost and usage so they can optimize their spend and drive higher utilization of their resources.

So let's define container technology. Container technology is a method used to package an application and its software dependencies. It is a game changer. Let's take a closer look at a couple of examples within each area. In the area of cost savings, we achieve savings by reducing the virtualized footprint and by reducing administrative overhead through the introduction of CI/CD pipelines.
In terms of agility, this helps you become more agile by enabling workload portability. It also shortens the development life cycle while increasing the frequency of application updates. Within innovation, container platform technologies provide centralized images and source code through standard repositories, decoupling of application dependencies, and use of templates, leading to enhanced collaboration. This kick-starts your innovation. Container technology brings these benefits to enterprise IT and accelerates the transformation of the business. HPE has the proven architecture and expertise for the introduction of container technology.

Apps and data are no longer centralized in the data center. They live everywhere: at the edge, in colocations, in the cloud, and in the data center. This creates enormous complexity for application operability, performance, and security. Customers are looking for a way to simplify, speed, and scale their apps, and that's driving a rise in container adoption. Managing these distributed environments requires different skill sets, tools, and processes to manage both traditional and cloud environments. It is complex and time-consuming. All of these workloads are also very data dependent. AI, data analytics, and data modernization are the key entry points for HPE Ezmeral to intercept the transformation budget.

A study from IDC found that more than 50% of enterprises are leveraging containers to modernize legacy applications as-is, without re-architecting them. These containers are often then deployed in on-premises cloud environments using Kubernetes and Docker. Re-implementing legacy applications as cloud-native microservices has proven more difficult than expected, held back by the scarcity of experienced microservices talent to do that work.
As a result, only half of the new containers deployed leverage microservices for cloud-native apps. One key element of the HPE approach is to reduce the effort required to containerize these existing applications. One platform for non-cloud-native and cloud-native apps is the HPE Ezmeral Container Platform. HPE GreenLake brings the true cloud experience to your cloud-native and non-cloud-native apps without costly refactoring. With cloud services for containers through HPE GreenLake, containerizing non-cloud-native apps improves efficiency, increases agility, and provides application portability. Simple applications can take about three months, and complex ones up to a year, to refactor. With cloud services for containers through HPE GreenLake, customers can save that time and get the benefits of 100% open-source Kubernetes right away. With the HPE Ezmeral Container Platform, non-cloud-native, stateful enterprise apps can be deployed in containers without costly refactoring, enabling customers to bring speed and agility to non-cloud-native apps with ease. HPE GreenLake is a single platform for workloads; it helps customers avoid the cost of moving data and apps, and run workloads securely from the edge, colocations, and data centers while meeting the needs for latency, data sovereignty, and regulatory compliance. The HPE Ezmeral Container Platform provides a container management control plane with the fully integrated HPE Ezmeral Data Fabric, integrating high-performance distributed file and object storage. These turnkey, pre-configured, cloud-connected solutions are delivered in as little as 14 days and managed for you by HPE and our partners, so customers do not need to skill up on Kubernetes.
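As a rough illustration of what "deploying a non-cloud-native app in containers without refactoring" means in practice, the sketch below emits a minimal image definition that packages an existing build artifact as-is. This is not HPE tooling; the base image, artifact name, and port are hypothetical placeholders.

```python
# Sketch: containerizing an existing ("non-cloud-native") app without code
# changes just means describing its current runtime in an image definition.
# All names below (base image, jar file, port) are hypothetical.

def legacy_app_dockerfile(base_image: str, artifact: str, port: int) -> str:
    """Emit a minimal Dockerfile that packages an existing app unchanged."""
    return "\n".join([
        f"FROM {base_image}",            # the runtime the app already expects
        f"COPY {artifact} /opt/app/",    # ship the existing build artifact as-is
        f"EXPOSE {port}",                # the port the app already listens on
        f'CMD ["java", "-jar", "/opt/app/{artifact}"]',
    ])

print(legacy_app_dockerfile("eclipse-temurin:8-jre", "billing.jar", 8080))
```

The resulting image can then be handed to any Kubernetes-based platform; the application code itself is never touched, which is the point of the "lift without refactoring" approach described above.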
The key differentiators for HPE Ezmeral are that it provides a complete solution addressing a broad set of applications, and a consistent multi-cloud deployment and management platform. It solves the data integrity and application recovery issues central to business-critical on-premises applications. It maintains the commitment to open source, ensuring customers can take advantage of future developments in these distributions. And it reduces development effort and moves application development to self-service.

Now let us look at some customer success stories with HPE Ezmeral. Here is a customer who modernized their existing legacy applications. There were a lot of blind spots in the system, and utilization was just about 10%. By transitioning to containers, they were able to get up to 58 times faster ingest performance, reducing a significant portion of the cost of the customer's deployment; a significant reduction in infrastructure footprint, resulting in lower TCO; and, with HPE GreenLake, cloud agility at a fraction of the cost of the alternatives. This customer is expanding its efforts into machine learning and analytics technologies for decision support in areas of ingesting and processing large data sets. They are enabling data science and search-based applications on large, low-latency data sets using a combination of batch and streaming transformation processes. These data sets support both offline and inline machine-learning and deep-learning training and model execution. To deploy these environments at scale, and move from experimentation to production, they need to connect the dots between their DevOps teams and the data science teams working on machine learning and analytics from an infrastructure standpoint. They're using containers and Kubernetes to drive greater agility and flexibility, as well as cost savings and efficiency, as they operationalize these initiatives.
These machine-learning, deep-learning, and analytics initiatives include automated configuration of software stacks and the deployment of data pipeline builds in containers. The developers selected Kubernetes as the container orchestration engine for the enterprise and are using the HPE Ezmeral Container Platform for their machine-learning, deep-learning, and analytics workloads. This customer had a growing demand for data scientists, and their goals were to gain continuous insights into existing and new customers, develop innovative products, and get them to market faster, amongst others. The greater infrastructure utilization on premises resulted in significant cost savings, around $6 million over three years, and significantly improved environment provisioning time, from 9 to 18 months down to about 30 minutes. Along those lines, there are many more examples of customer success stories across various industries that prove transitioning to HPE Ezmeral container solutions can be a total game changer.

By the way, HPE also provides container solutions with various software vendors. This customer was eager to embrace agile app development techniques that would allow them to become more agile, scalable, and affordable, helping to deliver exceptional customer service and avoid vendor lock-in. HPE partnered with them to deploy Red Hat OpenShift running on HPE hardware, which became a new container-based DevOps platform, effectively running on bare metal for minimal resource overhead and maximum performance. The customer now had a platform that was capable of supporting their virtualization and containerization ambitions.
Now let us see how HPE GreenLake can help you reduce costs, risk, and time. You get speed and time to value with pre-integrated hardware, software, and services: the HPE Ezmeral platform to design and build container-based services, and a self-service catalog and marketplace for rapid provisioning of those services. You get lower risk to the business, with containers fully managed by HPE container experts, proactive resolution of incidents, and active capacity management to scale with demand. You can reduce costs by avoiding upfront capital expense and over-provisioning with a pay-per-use model and an intuitive dashboard for cluster costs and storage.

HPE also has a huge differentiator when it comes to security. The HPE Silicon Root of Trust secures your data at the microcode level, inside the processor itself, ensuring that your digital assets remain protected and secure. With your containerization strategy built on the world's most secure industry-standard servers, you'll be able to fully concentrate your resources on your modernization efforts. Additionally, you can enjoy benefits such as HPE firmware threat detection, along with other best-in-class innovations from HPE such as malware detection and firmware recovery. Your HPE servers are protected from silicon to software, and at every touch point in between, preventing bad actors from gaining access to containers or infrastructure.

HPE can help accelerate your transformation using three pillars. First, HPE GreenLake: you can deploy any workload as a service. With HPE GreenLake services, you can bring cloud speed, agility, and an as-a-service model to where your apps and data are today, transforming the way you do business with one experience and one operating model across your distributed clouds, for apps and data at the edge, in colocations, and in your data center.
Second, HPE Pointnext Services: with over 11,000 IT projects conducted and 1.4 million customer interactions each year, HPE Pointnext Services, its 15,000-plus experts, and its vast ecosystem of solution partners and channel partners are uniquely able to help you at every stage of your digital transformation, because we address some of the biggest areas that can slow you down. We bring together technology and expertise to help you drive your business forward.

And last but not least, HPE Financial Services. Flexible investment capacity is a key consideration for businesses driving digital transformation initiatives. In order to forge a path forward, you need access to flexible payment options that allow you to match IT costs to usage. From helping release capital from existing infrastructure, to deferring payments, to providing pre-owned tech to relieve capacity strain, HPE Financial Services unlocks the value of the customer's entire estate, from edge to cloud to end user, with multi-vendor solutions, consistently and sustainably, around the world. HPE FS makes IT investment a force multiplier, not a stumbling block.

HPE Ezmeral and HPE compute are the ideal choice for your containerization strategy, combining familiar server hardware with a container platform that has been optimized for the environment. This combination is particularly cost effective, allowing you to capitalize on existing hardware skills as you focus on developing innovative containerized solutions. HPE Ezmeral fits your existing infrastructure and provides the potential to scale as required. And with that, I conclude this session, and I hope you found this valuable. There are many resources available at hpe.com that you can use to your benefit. Thank you once again.
Inderpal Bhandari, IBM | IBM DataOps 2020
From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation.

Hi everybody, welcome to this special digital presentation, where we're covering the topic of DataOps, and specifically how IBM is operationalizing and automating the data pipeline with DataOps. With me is Inderpal Bhandari, the global chief data officer at IBM. Inderpal, it's always great to see you, thanks for coming on.

My pleasure.

You know, the standard throwaway question from guys like me is, what keeps the chief data officer up at night? Well, I know what's keeping you up at night: it's COVID-19. How are you doing?

It's keeping all of us up, yeah, for sure.

So how are you guys making out? As a leader, I'm interested in how you have responded, whether it's communications; obviously you're doing much more remotely, you're not on airplanes like you used to be. What was your first move when you actually realized this was going to require a shift?

Well, I think one of the first things I did was to test the ability of my organization to work remotely. This was well before the recommendations came in from the government, but we wanted to be sure this was something we could pull off under extreme circumstances where everybody was remote. That was one of the first things we did. Along with that, another major activity we embarked on was this: given that we had created a central data and AI platform for IBM using our hybrid multi-cloud approach, how could that be adapted very, very quickly to help with the COVID situation? Those were the two big items my team embarked on very quickly, and again, this was well before there were any recommendations from the government, or even internally within IBM. We decided that we wanted to run ahead and make sure that we were ready to operate in that fashion, and I believe a lot of my colleagues did the same.

There's a conversation going on right now around productivity hits that people may be taking because they really weren't prepared. It sounds like you're pretty comfortable with the productivity impact that you're achieving.

I'm totally comfortable with the productivity. In fact, I will tell you that as we've gone down this path, we've realized that in some cases productivity is actually going to be better when people are working from home and are able to focus a lot more on the work. This runs the gamut by the nature of the job. Somebody who basically needs to be in front of a computer, remotely taking care of operations: if they don't have to come in, their productivity is going to go up. Somebody like myself, who had a long drive into work that I would use for phone calls: now that entire time can be used in a more productive manner. So we realize there are aspects of productivity that will actually be helped by the situation, provided you're able to deliver your services with the same level of quality and satisfaction that you've always had. Now, there are certain other aspects where productivity is going to be affected. On my team there's a lot of whiteboarding that gets done, and there are lots of informal conversations that spark creativity, and those things are much harder to replicate in a remote setting. So we've got a sense of where we have to do some work together, versus where we're actually going to be more productive. But all in all, we're very comfortable that we can pull this off.

That's great. I want to stay on COVID for a moment, in the context of data and DataOps. Obviously, with a crisis like this, it increases the imperative to really have your data act together.
But I want to ask you, both specifically as it relates to COVID, why DataOps is so important, and then just generally, why at this point in time?

So, you know the journey we've been on. When I joined, our data strategy centered around cloud, data, and AI, mainly because IBM's business strategy was around that, and because there wasn't yet the notion of AI in the enterprise. Everybody understood what AI means for the consumer, but for the enterprise, people didn't really understand what it meant. So our data strategy became one of actually making IBM itself into an AI enterprise, and then using that as a showcase for our clients and customers, who look a lot like us, to make them into AI enterprises. In a nutshell, what that translated to was that one had to infuse AI into the workflow of the key business processes of the enterprise. If you think about it, that workflow is very demanding: you have to be able to deliver data and insights on time, just when they're needed, otherwise you essentially slow down the whole workflow of a major process. But to pull all that off, you need to have your own data very, very streamlined, so that a lot of it is automated and you're able to deliver those insights as the people involved in the workflow need them. So while we were making IBM into an AI enterprise and infusing AI into our key business processes, we spent a lot of time building what is essentially a DataOps pipeline that is very streamlined, which then allowed us to adapt very quickly to the COVID-19 situation.

I'll give you one specific example of how one could leverage that capability. One of the key business processes we had taken aim at was our supply chain. We're a global company, and our supply chain is critical: we have lots of suppliers, all over the globe, and we have different types of products, so there's a multiplicative factor as you go from each of those to additional suppliers. And you have events: political events, calamities. We have to be able to understand, very quickly, the risk associated with any of those events with regard to our supply chain, and make appropriate adjustments on the fly. So that was one of the key applications we built on our central data and AI platform. As part of a DataOps pipeline, the ingestion of the several hundred sources of data had to be blazingly fast and refreshed very frequently. We also had to aggregate data from external sources, data to do with weather-related events, political events, social media feeds, et cetera, and overlay that on top of our map of interest: our supply chain sites and the places they were supposed to deliver to. We also weaved in capabilities to track shipments as they flowed and have that data flow back as well, so that we would know exactly where things were. This was only possible because we had a streamlined DataOps capability and had built this central data and AI platform for IBM.

Now flip over to the COVID-19 situation. When COVID-19 emerged and we began to realize that this was going to be a significant pandemic, what we were able to do very quickly was to overlay the COVID-19 incidents on top of our sites of interest, as well as pick up what was being reported about those sites, and provide that to our business continuity team. This became an immediate exercise that we embarked on, but it wouldn't have been possible if we didn't have the foundation of the DataOps pipeline and that central data and AI platform in place to help us do it very quickly and adapt.
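The "overlay incidents on sites of interest" pattern described above can be sketched very simply. This is an illustrative toy, not IBM's implementation; the site names, regions, and incident counts are hypothetical.

```python
# Sketch of overlaying reported events on supply-chain sites of interest:
# given supplier sites and incident counts by region, flag sites whose
# region crosses a risk threshold. All data below is made up.

def flag_at_risk_sites(sites, incidents_by_region, threshold):
    """Return names of sites located in regions at or above the threshold."""
    return [s["name"] for s in sites
            if incidents_by_region.get(s["region"], 0) >= threshold]

sites = [
    {"name": "PCB plant A", "region": "Hubei"},
    {"name": "Chassis plant B", "region": "Bavaria"},
]
incidents = {"Hubei": 1200, "Bavaria": 40}

print(flag_at_risk_sites(sites, incidents, threshold=100))  # ['PCB plant A']
```

A real pipeline would continuously refresh the incident feed from external sources and join it against hundreds of data sources, but the core join-and-threshold logic is the same shape as this sketch.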
What I really like about this story, and something I want to drill into, is that a lot of organizations have a real tough time operationalizing AI and infusing it, to use your word, and the fact that you're doing it is really a good proof point that I want to explore a little bit. There were a number of aspects to what you just described. There was the data quality piece: your data quality, in theory anyway, is going to go up with more data, if you can handle it. And the other was speed, time to insight, so you can respond more quickly. Think about this COVID situation: if you're days behind, or weeks behind, which is not uncommon, sometimes even worse, you just can't respond. These things change daily, sometimes within the day. So is that right? Is that the business outcome and objective that you were after?

Yes. When you infuse AI into your business processes, the overarching outcome metric that one focuses on is end-to-end cycle time. You take that end-to-end process and you try to reduce the cycle time by several factors, several orders of magnitude. For instance, in my organization there's work that has to do with the generation of metadata, data about data, and that's usually a very time-consuming process. We've reduced that by over 95% by using AI to actually help in the metadata generation itself, and that's applied now across the board for many different business processes that IBM has. It's the same kind of principle. That foundation essentially enables you to go after that cycle-time reduction right off the bat, so when you get to a situation like COVID-19, which demands urgent action, your foundation is already geared to deliver on it.

I think we might have a graphic on that. Guys, if you could bring up the second one. I think this is, Inderpal, what you're talking about here, that 95 percent reduction. Let's take a look at it. So this is maybe not a COVID use case; here it is, that 95 percent reduction in cycle time, improving data quality, and, as we talked about, there are actually some productivity metrics. This is what you're talking about here in this metadata example, correct?

Yes. Metadata is central to everything that one does with data. It's basically data about data, and this is really the business metadata we're talking about. Once you have data in your data lake, if you don't have business metadata describing what that data is, then it's very hard for people who are trying to do things to determine whether they even have access to the right data. Typically this process has been done manually: somebody looks at the data, looks at the fields, and describes them, and it could easily take months. What we did was essentially use a deep learning and natural language processing approach: we looked at all the data we had historically at IBM, and we automated the metadata generation. So whether it was data relevant for the COVID team, or for supply chain, or for our receivables process, any one of our business processes, this is one of those fundamental steps one must go through to get your data ready for action. And if you can take the cycle time for that step and reduce it by 95%, you can imagine the acceleration.

Yeah, and I like what you were saying before about the end-to-end concept. You're applying systems thinking here, which is very important, because a lot of the folks I talk to will be so focused on one metric, maybe optimizing one component of that end-to-end, but it's really the overall outcome that you're trying to achieve. You may sometimes be optimizing one piece but not the whole, so that systems thinking is very important.
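The automated business-metadata generation Bhandari describes uses deep learning and NLP over IBM's historical data; the toy sketch below only illustrates the shape of the problem, mapping raw column names to glossary descriptions by token overlap. The glossary entries and column names are hypothetical.

```python
# Toy sketch of automated business-metadata generation: suggest a business
# description for a raw column name by token overlap with a glossary.
# IBM's actual approach is a trained deep-learning/NLP model; this is only
# an illustration of the task, with a made-up glossary.
import re

GLOSSARY = {
    "customer_id": "Unique identifier for a customer",
    "invoice_amount": "Billed amount on an invoice, in USD",
    "ship_date": "Date an order left the warehouse",
}

def tokens(name):
    """Split a field name into lowercase tokens on underscores/whitespace."""
    return set(re.split(r"[_\s]+", name.lower()))

def suggest_description(column):
    """Pick the glossary term sharing the most tokens with the column name."""
    best = max(GLOSSARY, key=lambda term: len(tokens(term) & tokens(column)))
    return GLOSSARY[best] if tokens(best) & tokens(column) else None

print(suggest_description("CUSTOMER ID"))  # Unique identifier for a customer
```

Even this crude matcher shows why automation pays off: the manual alternative is a person reading fields one by one, which is exactly the months-long step the 95% reduction targets.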
Systems thinking is extremely important overall, no matter where you're involved in the process of designing the system. But if you're the data person, it's incredibly important, because not only does it give you insight into the cycle-time reduction, it also clues you in to what standardization is necessary in the data so that you're able to support the eventual outcome. A lot of people will go down the path of data governance and the creation of data standards, and you can easily boil the ocean trying to do that. But if you start with an end-to-end view of your key processes, and by extension the outcomes associated with those processes, as well as the user experience at the end of them, and then work backwards to the standards you need for the data that feeds into all of that, that's how you arrive at a viable, practical data standards effort that you can push forward with. So there are multiple aspects where taking that end-to-end systems view helps the chief data officer.

One of the other tenets of DataOps is the ability for everybody across the organization to have visibility; communication is very key. We've got another graphic that I want to show around the organizational regime. This is a complicated situation for a lot of people, but it's imperative that organizations bring in the right stakeholders and actually identify the individuals who are going to participate, so that there's full visibility, everybody understands their roles, and they're not in silos. Guys, if you could show us that first graphic, that would be great. Talk about the organization and the right regime there, Inderpal.

Yes, I believe what you're going to show is actually my organization, but I think it's very illustrative of what one has to set up to pull off this kind of impact. So: you have that central data and AI platform driving the entire enterprise, you're infusing AI into key business processes like the supply chain, you then create applications like the operational risk insights we talked about, and then extend them to a fast-emerging, changing situation like COVID-19. You need an organization that reflects the technical aspects of that plan. You have to have the data engineering arm, and in my case there's a lot of emphasis there, because that's one of those skill-set areas that's really quite rare, but also very powerful. Those are the major technology arms. There's also the governance arm I talked about, where you have to produce a set of standards, implement them, and enforce them, so that you're able to make this end-to-end impact. But then there's also an adoption arm: a group that reports in to me, very empowered, which essentially has to convince the rest of the organization to adopt. The key to their success has been that empowerment, in the sense that they are empowered to find like-minded individuals in our key business processes who are also empowered, and if they agree, they just move forward and go ahead and do it, because we've already provided the central capabilities. By central I don't mean all in one location; we're completely global, and it's a hybrid multi-cloud setup. It's central in the sense that it's one source to come to for trusted data, as well as the expertise you need from an AI standpoint to move forward and deliver the business outcome. So when these business teams come together with the adoption team, that's where the magic happens. And then we've also got a data officer council that I chair.
has to do with the chief data officers of the individual business units that we have. They're kind of my extended team into the rest of the organization, and we leverage that both from a platform adoption standpoint and in terms of defining and enforcing standards; it helps us do both.

I want to come back to COVID and talk a little bit about business resiliency. People have probably seen the news that IBM is providing supercomputer resources to the government to fight coronavirus; you've also just announced that some RTP folks are helping first responders and nonprofits by providing capabilities at no charge, which is awesome. Look, I'm sensitive that companies like IBM don't want to appear to be ambulance-chasing in these times; however, IBM and other big tech companies are in a position to help, and that's what you're doing here. So maybe you could talk a little bit about what you're doing in this regard, and then we'll tie it up with business resiliency and the importance of data.

Right. So I'd explained the operational risk insights application that we had, which we were using internally even before COVID-19, primarily to assess the risk to our supply chain from various events and then react very quickly to those events so you could manage the situation. Well, we realized that this is something that several non-government organizations could essentially use, because they have to manage many of these situations, like natural disasters, and so we've given that same capability to the NGOs to help them streamline their planning and their thinking. By the same token, you talked about COVID-19: that same capability, with the COVID-19 data overlaid on top of it, essentially becomes a business continuity planning and resilience tool. Because, let's say I'm a supply chain person right now: I can look at the incidence of COVID-19, and I know where my suppliers are, and I can see that the incidence is going up near this supplier, so this supplier is likely to be affected; let me move ahead and start making backup plans, just in case it reaches a crisis level. On the other hand, if you're somebody in our revenue planning, on the finance side, and you know where your key clients and customers are located, again, by having that information overlaid with those sites you can make your own judgments and your own assessments. So that's how it translates into business continuity and resilience planning, too. We are internally doing that now; that's something we are actually providing to every department, because we could build rapidly on what we had already done. Once they overlay that data on their sites of interest, and this is anybody and everybody in IBM, because no matter what department they're in, there are going to be sites of interest that are affected, they have an understanding of what those sites mean in the context of the planning they're doing, and so they'll be able to make judgments. And as we gain a better understanding of what each of those departments does with that data, we will automate those capabilities more and more for each of those specific areas.

And now you're talking about a comprehensive approach, an AI approach, to business continuity and resilience planning in the context of a large, complicated organization like IBM, which obviously will be of great interest to enterprise clients and customers. Right. One of the things that we're researching now is trying to understand what about this crisis is going to be permanent. Some things won't be, but we
think many things will be, and there are a lot of learnings. Do you think that organizations will rethink business resiliency in this context? That they might sub-optimize profitability, for example, to be more prepared for crises like this, with better business resiliency? And what role would data play in that?

It's a very good and timely question, Dave. Clearly people have understood that with regard to such a pandemic, the first line of defense is not going to be so much on the medicine side, because a vaccine won't be available for a period of time; it has to go through development. So the first line of defense is actually to quarantine, like we've seen play out across the world, and that in effect results in an impact on the businesses and the economic climate. I think people have realized this now, and they will obviously factor it into how they do business. Since we're talking about how this becomes permanent, I think it's going to become one of those things that, if you're a responsible enterprise, you are going to be planning for; you're going to know how to implement this on the second go-around. So obviously you put those frameworks and structures in place, and there will be a certain cost associated with them, and one could argue that that could eat into profitability. On the other hand, what I would say is that because these are fast-emerging, fluid situations that you have to respond to very quickly, you will end up laying out a foundation, pretty much like we did, which enables you to really accelerate your pipeline. So, the DataOps pipelines we talked about: there's a lot of automation, so that you can react very quickly, ingest data very rapidly, generate the metadata, run the entire pipeline we're talking about, so that you're able to respond and very quickly bring in new data, aggregate it at the right levels, infuse it into the workflows, and then deliver it to the right people at the right time. That will become a must now. Once you do that, you could argue that there is a cost associated with doing it, but we know the cycle time reductions on things like that can be large; I gave you the example of 95 percent, and on average we see about a 70 percent end-to-end cycle time reduction where we've implemented the approach, which has been pretty pervasive within IBM across business processes. So that, in essence, actually becomes a driver for profitability. So yes, this might back people into doing it, but I would argue that it's probably something that's going to be very good long-term for the enterprises involved, and they'll be able to leverage it in their business. And I think the competitive pressure of having to do that will force everybody down that path, but I think it'll eventually be a good thing.

That end-to-end cycle time compression is huge, and I like what you're saying, because it's not just a reduction in the expected loss during a crisis; there are other residual benefits to the organization. Inderpal, thanks so much for coming on theCUBE and sharing this really interesting and deep case study. I know there's a lot more information out there, so I really appreciate your time. All right, take care. Thanks for watching; this is Dave Vellante for theCUBE, and we will see you next time.

[Music]
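The supply-chain overlay described above, external incidence data joined onto a list of supplier sites so that sites in regions with rising incidence can be flagged for backup planning, can be sketched roughly as follows. This is a minimal illustration, not IBM's implementation; the site names, regions, case counts, and the 1.5x growth threshold are all invented for the example.

```python
# Hypothetical sketch of the risk-overlay idea: join external incidence
# data (e.g. COVID-19 case counts by region) onto supplier sites and flag
# sites whose regional incidence is rising past a growth threshold.
# All names and numbers below are illustrative.

from dataclasses import dataclass


@dataclass
class Site:
    name: str
    region: str


# Supplier sites of interest (illustrative).
sites = [
    Site("Supplier A", "Lombardy"),
    Site("Supplier B", "Bavaria"),
]

# Incidence per region for the last two reporting periods (illustrative):
# region -> (previous period cases, current period cases).
incidence = {
    "Lombardy": (120, 310),
    "Bavaria": (80, 75),
}


def flag_rising(sites, incidence, growth_threshold=1.5):
    """Return names of sites whose regional incidence grew past the threshold."""
    flagged = []
    for site in sites:
        prev, curr = incidence.get(site.region, (0, 0))
        # Only flag when there is a prior baseline and growth exceeds it.
        if prev > 0 and curr / prev >= growth_threshold:
            flagged.append(site.name)
    return flagged


print(flag_rising(sites, incidence))  # → ['Supplier A']
```

In the application described in the interview, the incidence feed would come from external sources and the flagged sites would feed each department's own planning workflow, but the core join-and-threshold logic has this shape.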