Day One Kickoff | PentahoWorld 2017
>> Narrator: Live from Orlando, Florida, it's theCUBE, covering PentahoWorld 2017. Brought to you by Hitachi Vantara.
>> We are kicking off day one of PentahoWorld, brought to you, of course, by Hitachi Vantara. I'm your host, Rebecca Knight, along with my co-hosts. We have Dave Vellante and James Kobielus. Guys, I'm thrilled to be here in Orlando, Florida, kicking off PentahoWorld with theCUBE.
>> Hey Rebecca, twice in one week.
>> I know, this is very exciting, very exciting. So we were just listening to the keynotes. We heard a lot about the big three, the power of the big three, which is internet of things, predictive analytics, big data. So the question for you both is, where is Hitachi Vantara in this marketplace? And are they doing what they need to do to win?
>> Well, so the first big question everyone is asking is, what the heck is Hitachi Vantara? (laughing) What is that?
>> Maybe we should have started there.
>> We joke, some people say it sounds like an SUV, Japanese company, blah blah blah. When we talked to Brian--
>> Jim: A well engineered SUV.
>> So Brian Householder told us, well, you know, it really is about vantage and vantage points. And when you listen to their angles on insights and data, anywhere and however you want it. So they're trying to give their customers an advantage and a vantage point on data and insights. So that's kind of interesting and cool branding. The second big point, I think, is that Hitachi has undergone a massive transformation itself. Certainly Hitachi America, which is really not a brand they use anymore, but Hitachi Data Systems. Brian Householder talked in his keynote about how, when he came in 14 years ago, Hitachi was 80 percent hardware, and infrastructure, and storage. And they've transformed that. They were about 50/50 last year, in terms of infrastructure versus software and services. But what they've done, in my view, is now taken the next step. I think Hitachi has said, alright listen, storage is going to the cloud, Dell and EMC are knocking each other's heads off, China is coming into play. Do we really want to try and dominate that business? Rather, why don't we play from our strengths? Which is devices, internet of things, the industrial internet. So they buy Pentaho two years ago, and we're going to talk more about that, bring in an analytics platform. And this is sort of marrying IT and OT, information technology and operational technology, together to go attack what is a trillion-dollar marketplace.
>> That's it, so Pentaho was a very strategic acquisition for Hitachi. Of course, Hitachi Data Systems plus the Hitachi Insight Group, plus Pentaho, equals Hitachi Vantara. Pentaho was one of the pioneering vendors more than a decade ago in the whole open-source analytics arena. If you cast your mind back to the middle of the last decade, open source was starting to come into its own. Of course, we already had Linux and so forth, but in terms of the data world, we're talking about the pre-Hadoop era, the pre-Spark era. We're talking about the pre-TensorFlow era. Pentaho at that time, which is, by the way, now a product group within Hitachi Vantara, not a standalone company, established itself as the spearhead for open-source predictive analytics and data mining. They picked up something called Weka, which is an open-source data mining toolkit that was actually developed initially in New Zealand.
Weka became the core of their offering to market. In many ways they became very much a core player in terms of analytics as a service and so forth, but they very much established themselves, Pentaho, as an up-and-coming solution provider taking a more or less by-the-book open-source approach to delivering solutions to market. But they were entering a market that was already fairly mature in terms of data mining, because you are talking about the mid-2000s. You already had SAS, and SPSS, and some of the others that had been in that space and done quite well for a long time. And so cut ahead to the present day. Pentaho had evolved to incorporate some fairly robust data integration, data transformation, all the ETL capabilities into their portfolio. They had become a big data player in their own right, with a strong focus on embedded analytics, as the keynoters indicated this morning. There was a certain point in this decade where it became clear that they couldn't go any further, in terms of differentiating themselves in this space, a space dominated by Hadoop and Spark, and AI things like TensorFlow, unless they were part of a more diversified solution provider that offered, especially, I think the critical thing was the edge orientation of the industrial internet of things. Which is really where many of the opportunities are now, for a variety of new markets that are opening up, including autonomous vehicles, which was the focus here all--
>> Let's clarify some things a little bit. So Pentaho actually started before the whole Hadoop movement.
>> Yeah, yeah.
>> That's kind of interesting. You know, they were a young company when Hadoop just started to take off. And they said, alright, we can adopt these techniques and processes as well. So they weren't true legacy, right?
>> Jim: No.
>> So they were able to ride that sort of modern wave. But essentially they're in the business of data, I call it data management. And maybe that's not the right term. They do ingest, they're doing ETL, transformation anyway. They're embedding, they've got analytics, they're embedding analytics. Like you said, they're building on top of Weka.
>> James: In the first flush of BI as a hot topic in the market in the mid-2000s, they became a fairly substantial BI player. That actually helped them to grow in terms of revenue and customers.
>> So they're one of those companies that touches on a lot of different areas.
>> Yes.
>> So who do we sort of compare them to? Obviously, you think of guys like Informatica.
>> Yeah, yeah.
>> Who do heavy ETL.
>> Yes. You mentioned BI before. Like, guys like SAS. What about Tableau?
>> Well, BI would be, there's Tableau, and QlikView and so forth. But there's also very much--
>> Talend.
>> Cognos under IBM. And, of course, there's the BusinessObjects portfolio under SAP.
>> David: Right. And Talend would be?
>> In fact, I think Talend in many ways is the closest analog
>> Right.
>> to Pentaho, in terms of a predominantly open-source, go-to-market approach that involves both the robust data integration and cleansing and so forth on the back end, and also a deep dive of open-source analytics on the front end.
>> So their differentiation, they sort of claim, is their sort of end-to-end integration.
>> Jim: Yeah.
>> Which is something we've been talking about at Wikibon for a while. And George is doing some work there, you probably are too. It's an age-old thing in software. Do you do best-of-breed or do you do sort of an integrated suite?
Now the interesting thing about Pentaho is, they don't own their own cloud. Hitachi Vantara doesn't own their own cloud. So they do a lot of, it's an integrated pipeline, but it doesn't include its own database and other tooling.
>> Jim: Yeah.
>> Right, and so there is an interesting dynamic occurring, that we want to talk to Donna Prlich about obviously, which is how they position relative to roll-your-own. And then how they position, sort of, in the cloud world.
>> And we should also ask how they are positioning now in the world of deep learning frameworks. I mean, they don't provide, near as I know, their own deep learning frameworks to compete with the likes of TensorFlow, or MXNet, or CNTK, or so forth. So where are they going in that regard? I'd like to know. I mean, there are some others that are big players in this space, like IBM, who don't offer their own deep learning framework, but support more than one of the existing frameworks in a portfolio that includes much of the other componentry. So in other words, what I'm saying is you don't need to have your own deep learning framework, or even your own open-source deep learning code base, to compete in this new marketplace. And perhaps Pentaho, or Hitachi Vantara, in their roadmapping, maybe they'll take an IBM-like approach, where they'll bundle support, or incorporate support, for two or more of these third-party tools, or open-source code bases, into their solution. Weka is not theirs either. It's open source. I mean, Weka is an open-source tool that they've supported from the get-go. And they've done very well by it.
>> It's just kind of like early-days machine learning.
>> David: Yeah.
>> Okay, so we've heard about Hitachi's transformation internally. And then their messaging today was, of course--
>> Exactly, that's where I really wanted to go next. We're talking about it from the product and the technology standpoint, but one of the things we kept hearing about today was this idea of the double bottom line. And this is how Hitachi Vantara is really approaching the marketplace, by really focusing on better business, better outcomes, for their customers. And obviously for Hitachi Vantara, too, but also for bettering society. And that's what we're going to see on theCUBE today. We're going to have a lot of guests who will come on and talk about how they're using Pentaho to solve problems in healthcare data, in keeping kids from dropping out of college, in getting computing and other kinds of internet power to underserved areas. I think that's another really important approach that Hitachi Vantara is taking in its model.
>> The fact that the Pentaho solution, now under Hitachi Vantara, has been on the market for so long, and they have such a wide range of reference customers all over the world, in many verticals.
>> Rebecca: That's a great point.
>> In most verticals. Willing to go on camera and speak at some length about how they're using it inside their business and so forth. That speaks volumes about a solution provider. Meaning, they do good work. They provide good offerings. They're offerings that companies have invested a lot of money in, and are willing to vouch for. That says a lot.
>> Rebecca: Right.
>> And so the acquisition was in 2015. I don't believe it was a public number. It's Hitachi Limited, I don't think they had to report it, but the number I heard was about a half a billion.
>> Jim: Uh-hm.
>> Which, for a company with the potential of Pentaho, is actually pretty cheap, believe it or not. You see a lot of unicorns, billion-dollar-plus companies.
But the more important thing is it allows Hitachi to further its transformation and really go after this trillion-dollar business. Which is going to be really interesting to see how it unfolds. Because while Hitachi has a long-term view, it always takes a long-term view, you've still got to make money. It's fuzzy how you make money in IoT these days. Obviously, you can make money selling devices.
>> How do you make money in open source, in anything? You know, so yeah.
>> But they're sort of open source with a hybrid model, right?
>> Yeah.
>> And we talked to Brian about this. There's a proprietary component in there so they can make their margin. At Wikibon, we see this three-tier model emerging. A data model where you've got the edge and some analytics, real-time analytics at the edge, and maybe you persist some of that data, but they're low-cost devices. And then there's a sort of aggregation point, or a hub. I think Pentaho today called it a gateway. Maybe it was Brian from Forrester. A gateway where you're sort of aggregating data, and then ultimately the third tier is the cloud. And that cloud, I think, vectors into two areas. One is on-prem and one is public cloud. What's interesting is that Brian from Forrester basically said that puts the nail in the coffin of on-prem analytics and on-prem big data.
>> Uh-hm.
>> I don't buy that.
>> I don't buy that either.
>> No, I think the cloud is going to go to your data, wherever the data lives. The cloud model of self-service and agile and elastic is going to go to your data.
>> A couple of weeks ago, of course, we at Wikibon did a webinar for our customers all around the notion of a true private cloud. And Dave, of course, and Peter Burris were on it, explaining that in hybrid clouds, of course, public and private play together, but the cloud experience migrates to where the data is. In other words, that data will be both in public and in private clouds, but you will have the same reliability, high availability, scalability, ease of programming, and so forth, wherever you happen to put your data assets. In other words, many companies we talk to do this. They combine a zonal architecture. They'll put some of their resources, like some of their analytics, in the private cloud for good reason. The data needs to stay there for security and so forth. But much of it goes in the public cloud, where it's way cheaper quite often. Also, they can improve service levels for important things. What I'm getting at is that the whole notion of a true private cloud is critically important, to understand that it's all data-centric. It's all gravitating to where the data is. And really, analytics are gravitating to where the data is. And increasingly the data is on the edge itself. It's on those devices where it's being persisted, much of it. Because there's no need to bring much of the raw data to the gateway or to the cloud, if you can do the predominant bulk of the inferencing on that data at the edge devices. And more and more the inferencing, to drive things like face recognition on your Apple phone, is happening on the edge. Most of the data will live there, and most of the analytics will be developed centrally, and then trained centrally, and pushed to those edge devices. That's the way it's working.
>> Well, it is going to be an exciting conference. I can't wait to hear more from all of our guests, and both of you, Dave Vellante and Jim Kobielus.
I'm Rebecca Knight. We'll have more from theCUBE's live coverage of PentahoWorld, brought to you by Hitachi Vantara, just after this.
John Gossman, Microsoft Azure - DockerCon 2017 - #DockerCon - #theCUBE
>> Announcer: Live from Austin, Texas, it's theCUBE, covering DockerCon 2017. Brought to you by Docker and support from its ecosystem partners.
>> Welcome back to theCUBE here in Austin, Texas at DockerCon 2017. I'm Stu Miniman with my cohost for the two days of live broadcast, Jim Kobielus. Happy to welcome back to the program John Gossman, who is the lead architect with Microsoft Azure, and was also part of the keynote this morning. John, I had the pleasure of interviewing you two years ago. We went through the obligatory, wait, Microsoft, open source, Linux and Windows and everything living together, it's like cats and dogs. But thanks so much for joining us again.
>> Yeah, well, as I was saying, that's 14 years in cloud years. So there's been a lot of change in that time, but thanks for having me again.
>> Yeah. Absolutely. You said it was three years that Microsoft and Docker have been working together. 21 years in dog or cloud years, if you will. I think Docker is more whales and turtles, as opposed to dogs. But enough about the cartoons and the animals. Why don't you give our audience just a synopsis of kind of the key messages you were trying to get across in the keynote this morning.
>> Okay, well, the very simple message is that what we enabled with this new technology, Hyper-V isolation for Linux containers, is the ability to run Linux containers just seamlessly on Windows using the normal Docker experience. It's just docker run busybox, or docker run mysql, or whatever it is, and it just works. And of course, if you know a little more technical detail about containers, you realize that one of the reasons that containers are the way they are is that all the containers on a box normally share a kernel. And so you can run a Canonical Ubuntu user space on a Red Hat kernel, or vice versa. But Windows and Linux kernels are too different. So if you want to run a Windows container, it's not going to run easily on Linux, and vice versa. And you can still get this effect, if you want it, by also using a virtual machine. But then you've got the management overhead of managing the virtual machine, managing the containers, all the complexity that that involves. You have to get a VHD or AMI or something like that, as well as a container image, and you lose a lot of that sort of experience.
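As a rough, illustrative sketch of that "it just works" flow, and not something shown in the interview, the same pull-and-run experience can also be driven through Docker's Python SDK rather than the CLI. The image names, the environment values, and the hyperv isolation option below are assumptions for illustration; the isolation setting only matters when the Docker engine is running on a Windows host where Hyper-V isolation is available.

```python
# Illustrative sketch, not from the interview: running ordinary Linux images
# through the standard Docker API on a host that supports Hyper-V isolation.
# Requires the `docker` Python package and a running Docker engine.
import docker

client = docker.from_env()

# A throwaway Linux container: runs one command, prints its output, then is removed.
output = client.containers.run(
    "busybox",                                  # example image name
    ["echo", "hello from a Linux container"],
    isolation="hyperv",                         # assumption: Docker for Windows with Hyper-V isolation
    remove=True,
)
print(output.decode())

# A longer-lived service container, e.g. MySQL, started in the background.
db = client.containers.run(
    "mysql:5.7",                                # example image and tag
    detach=True,
    environment={"MYSQL_ROOT_PASSWORD": "example"},
    isolation="hyperv",
)
print(db.status)
```

On a Linux host the same two calls work unchanged once the isolation argument is dropped, which is the portability point being made above.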
>> John, first of all, I have to say congratulations to Microsoft. When the announcement was made that Windows containers were going to be developed, I have to say that I and most of my peers were a little bit skeptical as to how fast that would work; the development cycle. Probably because we have lots of experience and it's always, okay, we understand how many man-years this usually takes, but you guys hit it and were delivering, got through the betas. So can you speak to us about where we are with Windows containers? And one of the things people want to kind of understand is, compared to, like, Linux containers, how do you expect the adoption of that now that it's generally available to roll out? Do I have to wait for the next server refresh, OS refresh, how do you expect your customers to adopt and embrace?
>> And Windows already had a lot of those kernel primitives, the same sort of similar kind of kernel primitives, built in. They had job objects, I think, back in Windows 2000. And so it was kind of the same experience. We took the Docker engine, so we got the API, we were using the open-source project, so we have complete compatibility. And then we just had to write basically a new back-end, and that's why it was able to come up rather quickly. And now we're in a mode, you know, where Windows Server updates things more incrementally than we did in the past. So this will just keep on improving as time goes on.
>> Okay, one of the other big announcements in the keynote this morning was LinuxKit. And it was an open-source project, we actually saw Solomon move it to open source during the keynote, when they laid out the ecosystem for it, like IBM, HPE, Intel and Microsoft. So what does that mean for Microsoft? You are now a provider of Linux? How are we supposed to look at this?
>> Yeah. So we're working with all the Linux vendors. So if you saw our blog about the work we did today, we also have announcements from SUSE and Red Hat and Canonical, and the usual people. And one of the things I said in that blog is, look, the new model is that you can choose both the Linux container that you want and the kernel that you want to run it on. And we're open to all sorts of things. But we have been working with Docker for a long time on making sure that there was a great experience for running Docker for Linux on Windows, this thing called Docker for Windows, which they developed, and we have been helping out. And that's basically an earlier generation of this same Linux technology. So it's just the next step on that journey.
>> Microsoft is pretty well recognized to have a robust solution for hybrid cloud. Because of course you've got your Azure Stack that you're putting on premises. There's Azure itself, and it's really the cloud-first methodology that you've been rolling through and you offer as a service. Containers really go anywhere in your environment, baked in anywhere? How should we be thinking about this going forward?
>> Yeah, absolutely. I mean, one of the points of containers in general, one of the attractive parts of containers, is that they run everywhere. Including from your laptop, to the various clouds, to bare metal, to virtualized environments. And so we have both things. We want Windows containers, where we're the vendor of the container. We want those to work everywhere. And we also, as the vendors of Azure and Azure Stack, and Windows Server, System Center, and other older enterprise technologies, want containers to work on all those things. So both directions. I mean, that's kind of the world we're in now, where everything works everywhere.
>> Can you square your container strategy, as reflected in your partnership with Docker, with your serverless compute strategy for Azure Functions? I'm trying to get a sense for Microsoft's overall approach to running containers as it relates to the Azure strategy.
>> In some ways, you can think of the serverless Functions model as a step even further. You used to deploy a hardware machine and install everything on it. Next thing, you'd have a virtual machine and you install everything on that. And then you put your code and all its dependencies into the container. And with serverless, with Azure Functions, it's like, well, why do any of that? Just write a function. Now at the same time, we think there's lots of reasons.
Under the covers, all of these PaaS systems, going all the way back, that's how Docker started, run a container underneath the covers. In the same way, it's not literally a Docker container, but the same thing down in Functions has that sort of a capability. And we're certainly thinking about how Docker can work in that serverless model in the future.
>> So one of my core focus areas for Wikibon as an analyst is looking at developers going more deeply into deep learning and machine learning. To what extent is Microsoft already taking its core tools in that area and containerizing them, and enabling access to that functionality through serverless APIs and functions and so forth in Azure?
>> On the serverless stuff, I'm not on the serverless team, so I'm not really qualified to explain everything on their end. I do know that the CNTK team has a Docker container that they put the bits in. There's the Azure Machine Learning team, who's been working on a lot of these sorts of technologies. I'm just not the right guy to answer that question.
>> As you talk to your customers, where does this fit into the whole discussion? Do containers just happen in the background? Is it helping them with some of their application modernization? Does it help Microsoft change the way we architect things? What's kind of the practitioner, the ultimate end-user, viewpoint on this?
>> Well, cloud adoption is at all points on the curve simultaneously, even inside individual companies. So everybody's in it in a kind of different place. The two models that I think people have really concentrated on are, on one end, IaaS, infrastructure, where you just bring your existing applications, and on the other, PaaS, where you rewrite the application for a more modern architecture, a more cloud-centric architecture. And containers fit kind of squarely in the middle of that, in some respects. Because in many ways, and primarily, I see Docker containers as a better form of infrastructure. It is an easier, more portable way to get all your dependencies together and run them everywhere. So a lot of the lift-and-shift work is in there, but once you're in containers, it is also easier to break the components apart and put them back together into a more microservice-oriented, cloud-native model.
>> I think that's a great point, because we've been having this discussion about, okay, there are applications that I'm rewriting, but then I've got this huge amount of applications that I need some way to have as the bridge to the future, if you will. Because, I don't know, there's one analyst firm that calls it bimodal, but the customers we talk to in general don't segment everything they do. I have applications and infrastructure and I need to be able to live across multiple environments. Wrapping versus refactoring.
>> And they do both. But I always prefer to, you know, some people come in and they talk about legacy, and they're developers. I'm a developer, right? Developers, we always want to rewrite everything. And there's a time and a place for doing that. But those legacy applications are still required to work. And if you don't need to refactor that thing, if you can get it into a container or virtual machine or however, and get it into that more modern environment, and then work around it, re-architect it, it's a whole different set of approaches. It's a good conversation to have with a customer, to understand.
I've seen people go both too slow, and I've seen people refactor their whole thing and then try to figure out how to get it to work again.
>> So Microsoft has a gigantic user base. What kind of things are you doing to help educate and help the people that had certifications or jobs running Exchange to move towards this new kind of world, and cloud in general, and containers specifically, maybe?
>> Well, we have a ton of stuff. I'm not familiar with the certification programs myself, but we certainly have our Developer Evangelism team out going out and training people. We've been trying to improve our documentation. And we have a bunch of guidance on cloud migration and things like that. There is a real challenge, and it's the same problem for our customers and anybody looking at cloud, which is to re-educate people who have been working in some previous model. Which is another reason, again, why the lift-and-shift stuff helps: you can make it more like it is on premises, or more like it is on your laptop. It makes that journey a little easier. But we're definitely at one of those points where the industry is changing so fast, I personally have to spend a lot of time on, what's going on? What happened this day? What's new? Today, coming to the conference, I learn new things.
>> You bring up a huge challenge that we see. Kind of like how Docker has their two delivery models. They've got the Community Edition, CE, and the Enterprise Edition, EE. And EE feels more like traditional software. It's packaged, it's on the regular release cycle. CE is, Solomon talked this morning about the edge pieces. Can I keep up with every six months, or can I have stuff flying at me? People inside of Docker can't keep up with the pace of change that much. What do you see, I mean, I think back to the major Windows operating system releases that we used to have, like the Intel tick-tock on releases. The pace of change is tough for everyone, so how are you helping, you know, with your product development, and customers, you know, take advantage of things and try to keep up with this rapidly changing ecosystem?
>> This is a constant challenge with basically all software now. We can't afford to only ever ship things every three years, and at the same time there's stability. So with the major products like Windows, we have these stable branches, where things are pretty much the same going along. And then there's an active branch where things are coming down and the changes and the updates are coming. The one biggest difference, I'd say, and you know I've been in this industry for a long time, so say between the '90s and now, is that so much of it is actually coming off servers, where when something crashes, we get a crash dump and we can debug the thing. And so, going out in the field, we have much more capability in finding what's going on in the customer base than we did 20 years ago. But other than that, it's just a really hard challenge to satisfy both people that can't have anything change, and everything changing.
>> John, you've been watching this for a number of years. What do we still have left to do? We come back to DockerCon next year, you know, we'll have more people, it'll be a bigger event, but you know, what's the progression, what kind of things are you looking forward to the ecosystem and yourself and Docker knocking down and moving customers forward with?
>> The first year was kind of like, what is this thing?
The second year was, now the individual Docker container is there, how do you orchestrate them? And the next step is, how do we network these things? And there's an initiative now to standardize on storage, for storage systems and Docker containers. Monitoring. There are a lot of things that are still to do. We have a long way to go. On the other side, I think there's this other track, which we talked about today, which is that virtualization and containers are going to blur and blend. And I don't think that seven years from now we're going to be talking about containers or virtual machines. We're just going to be saying it's some unit of compute, and then there are so many knobs and tweaks: you want it a little more isolated, you want it a little less isolated, you trade off some performance for something else.
>> Business capability, in other words the enterprise architecture framework of business capabilities, will be paramount in terms of composing applications or microservices. From what I understand you're saying.
>> Yeah, I think where we're really going to get to is a model where we get past these basics of storage and networking and start working up to the next level. So things like Helm, or the DC/OS Universe, or Swarm stacks, where you can describe more of an application, it just keeps moving up. And so I think in seven years, we won't be talking so much about this, it'll be some other disruption, right? But we won't be talking about this virtualization layer as much as building apps again.
>> On the visual composition of microservices, what is Microsoft doing? You said that you long ago entered Microsoft during the Visio acquisition. What's Microsoft doing to enable more visual composition across these functions, across orchestrated team-like environments, going forward?
>> I think there is some work going on. It's not my area, again, visual composition, despite the fact that I came from Visio. I kind of got away from that space.
>> Well, I'm betraying my age. I remember that period.
>> All right. Well, John, always a pleasure catching up with you, and thank you so much for joining us for this segment. Look forward to watching Microsoft going forward.
>> Thanks. Thank you for having me.
>> We'll be back with lots more coverage here from DockerCon 2017. You're watching theCUBE.