Jim Wasko, IBM - Red Hat Summit 2017
>> Announcer: Live from Boston, Massachusetts, it's The Cube, covering Red Hat Summit 2017. Brought to you by Red Hat. >> Welcome back to The Cube's coverage of the Red Hat Summit, I'm your host Rebecca Knight, along with my cohost Stu Miniman. We are joined by Jim Wasko, he is the vice president of Open Systems at IBM. Thanks so much for joining us. >> Thanks for having me. >> So, before we get into the new ways in which IBM and Red Hat are working together, give us a little history on the IBM-Red Hat alliance and contextualize things for us. >> Oh sure, sure, so we started with Linux back in the very late '90s as a strategic initiative for IBM, and Red Hat was one of the key players at that time. We worked with other Linux vendors who no longer exist. Linuxcare was one of the companies we worked with, Mandrake, things along those lines. But Red Hat has been a constant through all of that. So we started in the very early days with Red Hat, and we had an x86 line at the time, as well as Power and z, and even in the very early days, we had ports of Red Hat running on all of IBM's hardware. >> And the alliance is going strong today? >> Yes it is, yes it is. So we have that long history, and then as Red Hat transformed as a company into their enterprise software, and RHEL in particular, our relationship really matured. I'm the engineering VP working with Red Hat, and we have a very strong collaborative relationship. We know how to work upstream, and they obviously work very well upstream. We've worked in the Fedora Project as a staging area for our platforms, so yeah, we've known each other very well. I've been working on Linux at IBM since November of 2000. >> Jim, so IBM has a long history with Open Source, I remember when it was the billion dollars invested in Linux. We covered on The Cube when Power became OpenPOWER. Companies like Google endorsing OpenPOWER.
Bring us up to speed as to OpenPOWER, how that fits with what you're doing with Red Hat and what you're talking about on the show here. >> Oh yeah, so OpenPOWER was really about opening up the hardware architecture as well as the operating system and firmware. And so, as that's progressed, Red Hat has also joined in that OpenPOWER initiative. If you look at when we started, just a small group of companies kicked it off, and today we're over 300 companies, including Red Hat, as part of the OpenPOWER Foundation. They're also board members, so as a key partner and strategic partner of ours, they've recognized that it's an ecosystem that is worth participating in, because it's very disruptive, and they've been very quick to join us. >> That's good, we've talked to Jim Whitehurst about how they choose, and they look for communities that are going to do good things for the industry, for the world, for the users, so it's a nice endorsement to have Red Hat participate, I would think. >> Oh, it is, they don't enter into anything lightly. And so their participation really is a signal, I think, in the marketplace, that this is a good strategic initiative for the industry. >> Where do you see the biggest opportunities for growth, going forward? >> Opportunities for growth, there's quite a few. A lot of people don't realize that Linux is really the underlying engine for so many things that we do in the technology world. It's everything from embedded in the automotive industry, if you've got an onboard computer, which most new cars do, 80% of those are Linux. If you talk about web serving, websites, front ends, it's Linux, you know. I know with my mom, she's like "What do you work on?" and I say Linux, you know, and she's like "Is that like Windows?" and I'm like "No." And then I tell her, you know, Mom, you've used it probably a dozen times today, and then I give her examples. And so, all the new innovation tends to happen on Linux.
If we look at Hyperledger, and blockchain in particular, a good example, that's one that takes a lot of collaboration, a lot of coordination, if it's going to have a meaningful impact on the world. And so it starts with Linux as a foundation to it. So, any of those new technologies, if you look at what we're doing with quantum computing for example, it takes a traditional computer to feed it, and a traditional computer for the output, and we don't have time to go into the details behind that, but it's Linux-fed, as a part of it, because really that's where the innovation is taking place. >> Jim, could you expand a little bit more on the Hyperledger and blockchain piece? A lot of people, I think they understand Bitcoin and digital currency there, but it's really some of the distributed and open source capabilities that these technologies deliver to the market that have some interesting use cases, what's the update on that? >> Oh, that's a good question. So, a lot of people think of Bitcoin, and that's a very limited use case. As we look at Hyperledger, we notice that it could be applied in so many more ways than just a financial kind of way. Where we've done it is logistics and supply chain; we've implemented it at IBM for our supply chain, and we've taken data from Weather.com, a company that we've acquired, and we use that for our logistics at end of quarter, for example. So that's something that was easier for us to implement, because it's all within our company. But then we are expanding that through partners. So that's an example where you could do supply chain logistics, you could do financials. But really, in order for that to work, 'cause it's a distributed ledger, you need everybody in the ecosystem to participate. It can't be one company, can't be two companies.
And so that's why, very early on, we recognized we should jointly start up a project at the Linux Foundation, called Hyperledger, to look at what's the best approach and how we could all collaborate, because we're all going to benefit from it, and it will be transformative. >> So what are you doing there, because as you said, these do present big challenges because there has to be buy-in from everyone? >> Yeah, so if I look at the Hyperledger project specifically at the Linux Foundation, we've got customers of ours like JPMC, for example, a founding member and participant, we've got distribution partners, we've got technology partners all there, and so we contributed early code. Stuff we'd done in research, as kind of like a building block. And then we have members, both from the research and product development sides of the house, that are constantly working in that upstream community on the source code. >> And continually contributing, and okay... >> Yeah, well, continually contributing, that's on the technology side. On the business side we're doing early proofs of concept, so we worked early with a company called Everledger that looks at the history of diamonds and tracks them beginning to end, and the ultimate goal of that is to eliminate blood diamonds from the marketplace, and so, you know, it's also a very good market to begin with because it's a limited set of players. So you can implement the technology, you can do the business processes behind it, and then demonstrate the value. So that's an early project. Most of the financial institutions are doing stuff, whether it's stock trading or what have you. And so we're doing early proofs of concept, so we're taking both technology and business, and you marry 'em together. As Jim Whitehurst said the other day, you know, what's the minimal viable product, let's get that out there, let's try it out, let's learn. >> Release early, release often.
>> Yes, and then modify quickly, don't start with something you think is overly baked, and find that you have to shelve it in order to kind of backtrack and make corrections. >> And what is it like to mesh those two cultures, the technology and the business? I mean, do you find that there is a clash? >> We have not. Now at IBM it was not a simple transition back in the late '90s. There were people that thought Open Source would be just a flash in the pan, and here we are so many years later, that's not true. And so early on, like I said, there were a lot of internal kind of debates, but that debate is long since settled, so we don't have that. And if you look across our different business divisions, even within our company, whether it's Cloud, whether it's Cognitive, whether it's the systems business, all use Open Source. Whether we contribute everything externally and we're using third-party packages, or we consume it ourselves. And we see that as happening across the industry, even with our clients. Some that you might think are very traditional, they recognize that's where the innovation is taking place. And so, you always look at balancing: is this viable, is that healthy? Or is the commercially available stuff still the better stuff? Just a quick story: I had a development team and we were doing Agile, and we needed a tool to track our sprints and everything like that. All of my developers were Open Source developers, and so that's their bias: if we're going to use software, it has to be Open Source. They went and evaluated a couple of projects and they found Open Source software that had been abandoned. They were smart enough to recognize, we also acquired a company called Rational, and Rational Team Concert does this, but it's proprietary. And so they initially resisted it, but then they looked at these Open Source projects and saw, if we picked up that code, we maintain it forever, and we're alone. That is as worthless as it can be, because there's no benefit.
Doing Open Source, where you have multiple people contributing, gives you an added benefit. So they went with our in-house stuff, Rational Team Concert. It just showed the maturity of the team: even though they think Open Source is really the best thing in life, you've got to balance the business with it. >> Jim, so we look at the adoption of Open Source, it took many years to mature. Today, you talk about things like Cognitive, it's racing so fast. Give us a little bit of a look forward, you know, what's changing in your space? What are you looking forward to? What would we expect to see from you by the time we come back next year? >> Sure, so a lot of what you've heard here at the conference, a lot of the things that we're doing, are often offered on a Cloud platform, or as a hosted service, or as a service. So, for example, we do have Blockchain as a service available today. And the back end is running on a mainframe cloud, for example, running Linux. Other examples of that: looking at new applications for quantum computing. Well, that requires cryogenic freezing in order to keep those qubits alive. And so that's a hosted thing, and we actually have that available online, people can use that today. So I think that you're going to see a lot of early access, even for commercial applications. Early access so people can try it, and then based on their business model, like we've heard from clients this week, sometimes they'll need it on-prem, for various business reasons, and other times they can do it on the cloud, and we'll be able to provide that. But we give them early access via cloud and as a service. And I think that's what you're going to see a lot of in the industry. >> And it's this hybrid mix, as you said, some on-prem, some off-prem, okay. >> Jim: Yes. >> Well Jim, thanks so much for joining us, we really appreciate you sitting down with us. >> You're welcome, and thanks for your time.
>> I'm Rebecca Knight, for Stu Miniman, we'll have more from the Red Hat Summit after this. (upbeat electronic music)
Steve Roberts, IBM - DataWorks Summit Europe 2017 #DW17 #theCUBE
>> Narrator: Covering DataWorks Summit Europe 2017, brought to you by Hortonworks. >> Welcome back to Munich, everybody. This is The Cube. We're here live at DataWorks Summit, and we are the live leader in tech coverage. Steve Roberts is here as the offering manager for big data on Power Systems for IBM. Steve, good to see you again. >> Yeah, good to see you, Dave. >> So we're here in Munich, a lot of action, good European flavor. It's my second European one, formerly Hadoop Summit, now DataWorks. What's your take on the show? >> I like it. I like the size of the venue. It's the ability to interact and talk to a lot of the different sponsors and clients and partners, the ability to network with a lot of people from a lot of different parts of the world in a short period of time. So it's been great so far, and I'm looking forward to building upon this towards the next DataWorks Summit in San Jose. >> Terri Virnig, a VP in your organization, was up this morning with a keynote presentation, so IBM got a lot of love in front of a fairly decent sized audience, talking a lot about the ecosystem that's evolving, the openness. Talk a little bit about open generally at IBM, but specifically what it means to your organization in the context of big data. >> Well, I am from the Power Systems team. So we have an initiative that we launched a couple years ago called OpenPOWER. And OpenPOWER is a foundation of participants innovating from the Power processor through all aspects: accelerators, IO, GPUs, advanced analytics packages, system integration, all to the point of being able to drive OpenPOWER capability into the market and have Power servers delivered not just through IBM, but through a whole ecosystem of partners. This complements quite well the Apache Hadoop and Spark philosophy of openness as it relates to the software stack.
So our story's really about being able to marry the benefits of an open ecosystem for OpenPOWER, as it relates to the system infrastructure technology, which drives the same time to innovation, community value, and choice for customers in a multi-vendor ecosystem, coupled with the same premise as it relates to Hadoop and Spark. And of course, IBM is making significant contributions to Spark as part of the Apache Spark community, and we're a key active member, as is Hortonworks with the ODPi organization forwarding the standards around Hadoop. So this is a one-two combo of open Hadoop and open Spark, either from Hortonworks or from IBM, sitting on the OpenPOWER platform built for big data. No other story really exists like that in the market today, open on open. >> So Terri mentioned cognitive systems. Bob Picciano has recently taken over and obviously has some cognitive chops, and some systems chops. Is this a rebranding of Power? Is it sort of a layer on top? How should we interpret this? >> No, think of it more as a layer on top. So Power will now be one of the assets, one of the member families of the cognitive systems portion of IBM. System z can also be used as another great engine for cognitive with certain clients, certain use cases where they want to run cognitive close to the data and they have a lot of data sitting on System z. So Power Systems is a server family really built for big data and machine learning, in particular our S822LC for high performance computing. This is a server which is landing very well in the deep learning, machine learning space. It offers the Tesla P100 GPU, and with the NVIDIA NVLink technology it can offer up to 2.8x bandwidth benefits CPU to GPU over what would be available through a PCIe Intel combination today. So this drives immediate value when you need to ensure that you're not just exploiting GPUs, but can also move your data quickly from the processor to the GPU.
>> So I was going to ask you actually, what makes Power so well suited for big data and cognitive applications, particularly relative to Intel alternatives. You touched on that. IBM talks a lot about Moore's Law starting to hit its peak, that innovation is going to come from other places. I love that narrative 'cause it's really combinatorial innovation that's going to lead us in the next 50 years, but can we stay on that thread for a bit? What makes Power so substantially unique, uniquely suited and qualified to run cognitive systems and big data? >> Yeah, it actually starts with the fundamentals of the Power processor. The Power processor has eight threads per core, in contrast to Intel's two threads per core. So this just means that for parallelizing your workloads, and the workloads that come up in the cognitive space, whether you're running complex queries and need to drive SQL over a lot of parallel pipes, or you're running iterative computation over the same data set as when you're doing model training, these can all benefit from highly parallelized execution, which benefits from this 4x thread advantage. But of course to do this, you also need large, fast memory, and we have six times more cache per core versus Broadwell, so this just means you have a lot of memory close to the processor, driving that throughput that you require. And then on top of that, we get to the ability to add accelerators, and unique accelerators such as, as I mentioned, the NVIDIA NVLink scenario for GPU, or using OpenCAPI as an approach to attach FPGA or flash and get processor-memory access speeds, but with an attached acceleration device. And so this is economies of scale in terms of being able to offload specialized compute processing to the right accelerator at the right time, so you can drive way more throughput.
The upper bound for driving workload through individual nodes, and being able to balance your IO and compute on an individual node, is far superior with the Power system server. >> Okay, so multi-threaded, giant memories, and this OpenCAPI gives you primitive-level access, I guess, to a memory extension, instead of having to-- >> Yeah, pluggable accelerators through this high speed memory extension. >> Instead of going through what I often call the horrible storage stack, aka SCSI. And so that's cool, some good technology discussion there. What's the business impact of all that? What are you seeing with clients? >> Well, the business impact is, not everyone is going to start with souped-up accelerated workloads, but they're going to get there. So part of the vision that clients need to understand, to begin to get more insights from their data, is that it's hard to predict where your workloads are going to go. So you want to start with a server that provides you some of that headroom for growth. You don't want to keep scaling out horizontally by having to add nodes every time you need to add storage or more compute capacity. So firstly, it's the flexibility: being able to bring versatile workloads onto a node or a small number of nodes and exploit some of these memory and acceleration advantages without necessarily having to build large scale-out clusters. Ultimately, it's about improving time to insight. So with accelerators and with large memory, running workloads on similarly configured clusters, you're simply going to get your results faster. For example, in a recent benchmark we did with a representative set of TPC-DS queries on Hortonworks running on Linux on Power servers, we were able to drive 70% more queries per hour over a comparable Intel configuration. So this is just getting more work done on what is now similarly priced infrastructure.
'Cause the Power family is a broad family that now includes 1U and 2U scale-out servers, along with our 192-core horsepower for the enterprise grade. So we can directly price-compete on a scale-out box, but we offer a lot more flexible choice as clients want to move up in the workload stack or bring accelerators to the table as they start to experiment with machine learning. >> So if I understand that right, I can turn two knobs. I can do the same amount of work for less money, a TCO play. Or, for the same amount of money, I can do more work. >> Absolutely. >> Is that fair? >> Absolutely. Now in some cases, especially in the Hadoop space, the size of your cluster is somewhat gated by how much storage you require. And if you're using the classic scale-up storage model, you're going to have so many nodes no matter what, 'cause you can only put so much storage on the node. So in that case, >> You're scaling storage. >> Your clusters can look the same, but you can put a lot more workload on that cluster, or you can bring in an IBM solution like IBM Spectrum Scale, our Elastic Storage Server, which allows you to essentially pull that storage off the nodes and put it in a storage appliance. At that point, you now have high speed access to storage, 'cause of course network bandwidth has increased to the point that the performance benefit of local storage is no longer really a driving factor in a classic Hadoop deployment. You can get that high speed access in a storage appliance mode, with the resiliency, at far less cost, 'cause you don't need 3x replication, you just have about a 30% overhead for the software erasure coding. And now with your compute nodes, you can really choose and scale those nodes just for your workload purposes. So you're not bound by "number of nodes equals total storage required divided by storage per node," which is the classic how-big-is-my-cluster calculation.
That just doesn't work if you get over 10 nodes, 'cause now you're starting to get to the point where you're wasting something, right? You're either wasting storage capacity or, typically, wasting compute capacity, 'cause you're over-provisioned on one side or the other. >> So you're able to scale compute and storage independently, tune that for the workload, and grow that resource more efficiently? >> You can right-size the compute and storage for your cluster, but also important is that you gain flexibility with that storage tier: that data plane can be used for other non-HDFS workloads. You can still have classic POSIX applications, or you may have new object-based applications, and with a single copy of the data, one virtual file system, which could also be geographically distributed, you can serve both Hadoop and non-Hadoop workloads. So you're saving additional replicas of the data from being required by being able to onboard that onto a common data layer. >> So that's a return-on-asset play. You've got an asset that's more fungible across the application portfolio. You can get more value out of it. You don't have to dedicate it to this one workload and then over-provision for another one when you've got extra capacity sitting here. >> It's a TCO play, but it's also a time saver. It's going to get you to insight faster 'cause you don't have to keep moving that data around. The time you spend copying data is time you should be spending getting insights from the data, so having a common data layer removes that delay. >> Okay, 'cause it's HDFS-ready, I don't have to essentially move data from my existing systems into this new stovepipe. >> Yeah, we just present it through the HDFS API as it lands in the file system from the original application. >> So now, all this talk about flexibility, agility, etc., what about cloud? How does cloud fit into this strategy?
What are you guys doing with your colleagues and cohorts at Bluemix, aka SoftLayer? You don't use that term anymore, but we do. When we get our bill it says SoftLayer still, but at any rate, you know what I'm talking about. The cloud with IBM, how does it relate to what you guys are doing in Power Systems? >> Well, the born-on-the-cloud philosophy of the IBM software analytics team is still very much the motto. So as you see in the Data Science Experience, which was launched last year, born in the cloud, all our analytics packages, whether it be our BigInsights software or our business intelligence software like Cognos, our future generations are landing first in the cloud. And of course we have our whole arsenal of Watson-based analytics and APIs available through the cloud. So what we're now seeing as well is we're taking those born-in-the-cloud offerings, but now also offering a lot of them in an on-premise model. So they can also participate in the hybrid model: Data Science Experience is now coming on premise, we're showing it at the booth here today. Bluemix has an on-premise version as well, and the same software library, BigInsights, Cognos, SPSS, is all available for on-prem deployment. So Power is still an ideal place for hosting your on-prem data and running your analytics close to the data, and now we can federate that through hybrid access to these elements running in the cloud. So the focus is really the cloud applications being able to leverage the Power and System z based data through high speed connectors, and being able to build hybrid configurations where you're running your analytics where they make the most sense based upon your performance requirements, data security, and compliance requirements. And a lot of companies, of course, are still not comfortable putting all their jewels in the cloud, so typically there's going to be a mix and match.
We are expanding the footprint for cloud-based offerings, both in terms of Power servers offered through SoftLayer, but also through other cloud providers. Nimbix is a partner we're working with right now who is actually offering our PowerAI package. PowerAI is a set of open source deep learning frameworks, packaged by IBM, optimized for Power, in an easily deployed package with IBM support available. And that could be deployed on premise on a Power server, but it's also available on a pay-per-drink basis through the Nimbix cloud. >> All right, we covered a lot of ground here. We talked strategy, we talked strategic fit, which I guess is sort of an adjunct to strategy, we talked a little bit about the competition and where you differentiate, some of the deployment models, like cloud, other bits and pieces of your portfolio. Can we talk specifically about the announcements that you have here at this event, just maybe summarize for us? >> Yeah, no, absolutely. As it relates to IBM, and Hadoop, and Spark, we really have the full stack support, the rich analytics capabilities that I was mentioning: deep insights, prescriptive insights, streaming analytics with IBM Streams, Cognos Business Intelligence. So this set of technologies is available for both IBM's Hadoop stack and Hortonworks' Hadoop stack today. Our BigInsights and IOP offering is now out for tech preview; the next release, the 4.3 release, is available for technical preview and will be available for both Linux on Intel and Linux on Power towards the end of this month, so that's one piece of new Hadoop news at the analytics layer. As it relates to Power Systems, as Hortonworks announced this morning, HDP 2.6 is now available for Linux on Power, so we've been partnering closely with Hortonworks to ensure that we have an optimized story for HDP running on Power system servers, as the data point I shared earlier with the 70% improved queries per hour shows.
At the storage layer, we have a work in progress with Hortonworks to certify the Spectrum Scale file system, which really unlocks the ability to offer this converged storage alternative to the classic Hadoop model. Spectrum Scale actually supports and provides advantages in both a classic Hadoop model with local storage, or it can provide the flexibility of offering the same sort of multi-application support in a scale-out model for storage. It also has the ability to form part of a storage appliance that we call Elastic Storage Server, which is a combination of Power servers and high density storage enclosures, SSD, spinning disk, or flash, depending on the configuration, and that certification will now make that available as a storage appliance which could underpin either IBM Open Platform or HDP as a Hadoop data lake. But as I mentioned, not just for Hadoop, really for building a common data plane behind mixed analytics workloads that reduces your TCO through a converged storage footprint, but more importantly, provides you that flexibility of not having to create data copies to support multiple applications. >> Excellent, IBM opening up its portfolio to the open source ecosystem. You guys have always had, well not always, but in the last 20 years, major, major investments in open source. They continue on, we're seeing it here. Steve, people are filing in. The evening festivities are about to begin. >> Steve: Yeah, yeah, the party will begin shortly. >> Really appreciate you coming on The Cube, thanks very much. >> Thanks a lot, Dave. >> You're welcome. >> Great to talk to you. >> All right, keep it right there everybody. John and I will be back with a wrap-up right after this short break, right back.
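The sizing trade-off Steve describes, 3x replication versus roughly 30% software erasure-coding overhead, can be sketched with some back-of-the-envelope arithmetic. The two overhead figures come from the interview; the function names, the 1 PB data set, and the 96 TB-per-node capacity below are illustrative assumptions, not IBM numbers.

```python
import math

# Hypothetical sketch of the classic "how big is my cluster" calculation.
# Overhead factors are from the interview; everything else is made up for
# illustration.

REPLICATION_3X = 3.0   # classic HDFS: three full copies of the data
ERASURE_CODING = 1.3   # software erasure coding: ~30% overhead

def raw_capacity_tb(usable_tb, overhead_factor):
    """Raw disk needed to hold `usable_tb` of data under a given overhead."""
    return usable_tb * overhead_factor

def nodes_needed(usable_tb, overhead_factor, tb_per_node):
    """Node count when the cluster size is gated purely by storage."""
    return math.ceil(raw_capacity_tb(usable_tb, overhead_factor) / tb_per_node)

# 1 PB (1000 TB) of usable data, assuming 96 TB of disk per node:
print(nodes_needed(1000, REPLICATION_3X, 96))  # 32 nodes just to hold the data
print(nodes_needed(1000, ERASURE_CODING, 96))  # 14 nodes with erasure coding
```

This is the imbalance he points to: with replication, storage demand alone dictates a large node count and the compute on those nodes is often wasted; once the storage moves into an appliance with erasure coding, compute nodes can be chosen and scaled for the workload instead.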
Jamie Thomas, IBM - IBM Interconnect 2017 - #ibminterconnect - #theCUBE
>> Announcer: Live from Las Vegas, it's the Cube. Covering InterConnect 2017. Brought to you by IBM. >> Okay, welcome back everyone, we're here live in Las Vegas for IBM InterConnect 2017. This is the Cube's coverage of IBM's cloud and data show. I'm John Furrier, with my cohost Dave Vellante, and our next guest is Jamie Thomas, general manager of systems development and strategy at IBM, and a Cube alum. Great to see you, welcome back. >> Thank you, great to see you guys as usual. >> So, huge crowds here. This is, I think, the biggest show I've been to for IBM. It's got lines around the corner, a ton of traffic online, great event. But it's the cloud show, but it's a little bit different. What's the twist here today at InterConnect? >> Well, if you saw the keynote, I think we've demonstrated that while we're focused on a differentiating experience on the cloud through cloud-native services, we're also interested in bridging clients' existing IT investments into that environment. So, supporting hybrid cloud scenarios, understanding how we can provide connective fabric solutions, if you will, to enable clients to run mobile applications on the cloud and take advantage of the investments they've made in their existing transactional infrastructure over a period of time. And the keynote really featured that combination of capabilities and what we're doing to bring those solution areas to clients and allow them to be productive. >> And the hybrid cloud is front and center, obviously. IOT on the data side, you've seen a lot of traction there. AI and machine learning kind of powering and lifting this up; it's a systems world now, I mean, this is the area that you're in, because you have the component pieces, the composability of that. How are you guys facilitating the hybrid cloud journey for customers?
Because now it's not just one thing; I might have a little bit of this and a little bit of that, so you have this componentization and composability that app developers are accustomed to, yet the enterprises want that workload flexibility. What do you guys do to facilitate that? >> Well, we absolutely believe that infrastructure innovation is critical on this hybrid cloud journey. And we're really focused on three main areas when we think about that innovation: integration, security, and support for cognitive workloads. When we look at things like integration, we're focused on developers as key stakeholders. We have to support the open communities and frameworks that they're leveraging, we have to support APIs and allow them to tap into our infrastructure and those investments once again, and we also have to ensure that data and workloads can be flexibly moved around in the future, because that allows better characteristics for developers in terms of how they design their applications as they move forward with this journey. >> And the insider threat, though, is a big thing too. >> Yes. >> I mean, security is not only table stakes, it's a highly sensitive area. >> It's a given. And as you said, it's not just about protecting from outside threats, it's about protecting from internal threats, even from those who may have privileged access to the systems. That's why, with our systems infrastructure, we have protection from the chip all the way through the levels of hardware into the software layer. You heard us talk about some of that today with the shipment of secure service containers that allow us to protect the system both at install time and run time, and support the applications and the data appropriately. The systems that run our high security Blockchain services, LinuxONE, have the highest certification in the industry, EAL 5+, and we're supporting FIPS 140-2 Level 4 cryptography.
So it's about protecting at all layers of the system, because our perspective is that there's no traditional barrier anymore; data is the new perimeter of security. So you've got to protect the data at rest, in motion, and across its life cycle. >> Let's go back to integration for a second. Give us an example of some of the integrations that you're doing that are high profile. >> Well, one of the key integrations is that a lot of clients are creating new mobile applications. They're tapping back into the transactions that reside in the mainframe environment, so we've invested in z/OS Connect and its set of API capabilities to allow clients to do that. It's very prevalent in many different industries, whether it's retail banking or the retail sector; we have a lot of examples of that. It's allowing them to create new services as well. So it's not just about extending the system, but being able to create entirely new solutions. Credit card services are a good example of an area where some organizations are doing that. And it allows for developer productivity.
Because you have security, again, end to end; you're protecting the data as it moves around, so it's not just in storage, it's everywhere, in flight, as they say. But now you've got ecosystem partners, because in the API economy you're dealing with no perimeter, and you also have relationships with technology partners. >> Yes, well, the ecosystem is really important. If we think about it from a developer perspective, obviously supporting these open frameworks is critical, so supporting Linux and Docker and Spark and all of those things. But also, to be able to innovate at the rate and pace we need, particularly for things like cognitive workloads, that's why we created the Open Power Foundation. We have more than 300 partners that we're able to innovate with, and that allows us to create the solutions we think we'll need for these cognitive workloads. >> What is a cognitive workload? >> A cognitive workload is what I would call an extremely data-hungry workload. The example we can all think of is that when we experience the world around us, we expect services to be brought to us, right? The digital economy understands our desires and wants and reacts immediately. That expectation is driving this growth in artificial intelligence, machine learning, and deep learning type algorithms. Depending on what industry you're in, they take on a different persona, but there are so many different problems that can be solved by this, whether it's "I need more insight into the retail offers I provide to an end consumer" or "I need to be able to do fraud analytics because I'm in the financial services industry." There are so many examples of these cognitive applications. The key factors are a tremendous amount of data and a constrained amount of time to get business insight back to someone.
>> When you do these integrations and you talk about the security investments that you're making, how do you balance the resource allocation between, say, IBM platforms, mainframe, Power, and the OSes that power those, and Linux, for example, which is such a mainstay of what you guys are doing? Are you doing those integrations on the open side as well, in Linux, going deep into the core, or is it mostly focused on, sort of, IBM-owned technology? >> It really depends on what problem we're trying to solve. For instance, if we're trying to solve a problem where we're marrying data insight with a transaction, we're going to implement a lot of that capability on z/OS, because we want to make sure we're reducing data latency in how we execute the processing, if you will. If we're looking at new workloads and the evolution of new workloads, where new things are being created, that's more naturally fit for purpose from a Linux perspective. So we have to use judgment: a lot of the new programming, the new applications, are naturally going to be done on a Linux platform, because once again that's the platform of choice for the developer community. So we have to think about whether we're trying to leverage existing transactions with speed, or whether we're enabling developers to create new assets, and that's a key factor in what we look at. >> Jamie, your role is somewhat unique inside of IBM, with the title of GM of systems development and strategy. So what's your scope, specifically? >> I'm responsible for the systems development involved in our processors, mainframes, power systems, and storage. And of course, as the strategy person for a unit like that, I have responsibility for thinking about these hybrid scenarios and what we need to do to make our clients successful on this journey. How do we take advantage of the tremendous investments they've made with us over the years?
We have strong responsibility for those investments and for making sure the clients get value, and also for understanding where they need to go in the future and evolving our architecture and our strategic decisions along those lines. >> So you influence development? >> Jamie: Yes. >> In a big way, obviously. It's a lot of roadmap work. >> Jamie: Yes. >> A lot of working with clients to figure out requirements? >> Well, I have client support too, so I have to make sure things run. >> What about quantum computing? This has been a big topic. What does the roadmap look like? What does the evolution of that look like? Talk about that initiative. >> Well, if I gave you the full roadmap they'd take me out of this chair with a hook. >> You're too good for that, damn, almost got it from you. >> But we did announce the industry's first commercial universal quantum computing project a few weeks ago. It's called IBM Q, so we had some clever branding help, because Q makes me think of the character in the James Bond movies who was always involved in the latest R&D activity. And it really is the culmination of decades of research between IBM researchers and researchers around the world to create a system that can hopefully solve problems that are unsolvable today with classical computers, problems in areas like materials science and chemistry. Last year we announced the Quantum Experience, which provides online access to quantum capabilities in our Yorktown research laboratory. Over the last year, we've had more than 40,000 users access this capability, and they've executed a tremendous number of experiments. So we've learned from that, and now we're on the next leg of the journey. We see a world where IBM Q could work together with our classical computers to solve really tough problems. >> And that computing is driving a lot of the IOT, whether that's health care or industrial, and everything in between.
>> Well, we're in the early stages of quantum, to be fair, but there are a lot of unique problems that we believe it will solve. We do not believe that everything, of course, will move from classical to quantum. It will be a combination, an evolution, of the capabilities working together. But it's a very different system, and it will have unique properties that allow us to do things differently. >> So, what are the basics? Why quantum computing? I presume it's performance, scale, cost, but it's not traditional binary computing, is that right? >> Yes. It's very, very different. In fact, if... >> Oh, we just got the two-minute sign. >> It's a very different computing model, a very different physical computing model, right? It's built on a unit called a qubit, and the interesting thing about a qubit is that it can be both a zero and a one at the same time. So it kind of twists our minds a little bit. But because of those properties, it can solve very unique problems. We're at the early part of the journey, so this year our goal is to work with some organizations and learn from the commercialization of the first systems, which will run in a cloud-hosted model. And then we'll go from there. But it's very promising. >> And the timeframe for commercial systems, have you guys released that? >> Well, this year we'll start the commercial journey, but within the next few years we plan to have a quantum computer that would outstrip the power of the largest supercomputers we have today in the industry. Over the next few years we'll be evolving to that level, because eventually that's the goal, right? To solve the problems that we can't solve with today's classical computers. >> Talk real quickly, in the last couple minutes, about Blockchain and where that's going, because you have a lot of banks and financial institutions looking at this as part of the messaging and the announcements here.
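The "both a zero and a one at the same time" property described above can be sketched with a few lines of linear algebra. This is a minimal simulation, assuming nothing about IBM Q's actual tooling: a qubit is modeled as a pair of complex amplitudes, and the Hadamard gate turns the |0> state into an equal superposition:

```python
import math

# A qubit state is a pair of amplitudes (alpha, beta) for |0> and |1>.
zero = (1.0, 0.0)  # the classical-like |0> state

def hadamard(state):
    """Apply the Hadamard gate: maps |0> to an equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities are the squared magnitudes of the amplitudes."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

superposition = hadamard(zero)
p0, p1 = probabilities(superposition)
# p0 and p1 are each 0.5: until measured, the qubit is 'both' 0 and 1.
```

Simulating n qubits this way takes 2^n amplitudes, which is exactly why classical machines can't keep up with larger quantum systems.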
>> Well, Blockchain is of course one of those workloads that we're optimizing with a lot of the security work I talked about earlier. The target of our high security Blockchain services is LinuxONE, and it's driving a lot of our encryption strategy. This week, in fact, we've seen a number of examples of Blockchain. One was talked about this morning, around diamond provenance, from the Everledger organization, a very clever implementation of Blockchain. We've had a number of financial institutions using Blockchain. And I also showed an interesting example today: Plastic Bank, an organization that's using Blockchain to improve our planet, if you will, by allowing communities to exchange recyclable plastic for currency. So it's really about enabling plastic to be turned into currency through the use of Blockchain, a very novel example of a foundational research organization improving the environment and allowing communities to take advantage of that. >> Jamie, thanks for stopping by the Cube, we really appreciate you giving the update and insight into quantum, the Q project, and all the hard work going into the hybrid cloud; the security piece is super important, thanks for sharing. >> It's good to see you. >> Okay, we're live here in Mandalay Bay for IBM InterConnect 2017. Stay with us for more live coverage after this short break.
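The provenance use cases discussed above (diamonds, recyclable plastic) rest on the basic blockchain property that each record commits to the hash of the record before it, so history can't be quietly rewritten. A toy hash-chain sketch, nothing like IBM's actual Blockchain stack (which is based on Hyperledger Fabric); the record fields are hypothetical:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record that commits to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Re-derive every hash; a tampered record breaks every link after it."""
    prev = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

ledger = []
add_block(ledger, {"diamond": "D-001", "event": "mined"})
add_block(ledger, {"diamond": "D-001", "event": "cut and certified"})
assert verify(ledger)  # intact chain verifies

# Tampering with history is detectable: rewriting an old record
# invalidates its stored hash, so verify() now fails.
ledger[0]["record"]["event"] = "synthetic"
```

In a real permissioned blockchain, the chain is additionally replicated across independent parties and extended by consensus, so no single participant can rewrite even the latest block.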