Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of HPE Discover, the virtual experience for 2020, getting to talk to HPE executives, their partners, the ecosystem, where they are around the globe. This session we're going to be digging in about artificial intelligence, obviously a super important topic these days. And to help me do that, I've got two guests from Nvidia sitting in the window next to me. We have Paresh Kharya, he's director of product marketing, and sitting next to him in the virtual environment is Kevin Deierling, who is the senior vice president of marketing, as I mentioned, both with Nvidia. Thank you both so much for joining us. >> Thank you, so great to be here. >> Great to be here. >> All right, so Paresh, why don't you set the stage for us? AI, obviously, one of those mega trends we talk about, but just give us the state of where Nvidia sits, where the market is, and your customers today, as they think about AI. >> Yeah, so we are basically witnessing massive changes that are happening across every industry. And it's basically the confluence of three things. One is of course AI, the second is 5G and IOT, and the third is the ability to process all of the data that we have, that's now possible. For AI we are now seeing really advanced models, from computer vision, to understanding natural language, to the ability to speak in conversational terms. In terms of IOT and 5G, there are billions of devices that are sensing and inferring information. And now we have the ability to act, make decisions in various industries, and finally, with all of the processing capabilities that we have today, at the data center and in the cloud, as well as at the edge with the GPUs, as well as advanced networking that's available, we can now make sense of all of this data to help industrial transformation. >> Yeah, Kevin, you know it's interesting when you look at some of these waves of technology and we say, "Okay, there's a lot of new pieces here." You talk about 5G, it's the next generation, but architecturally some of these things remind us of the past. So when I look at some of these architectures, I think about what we've done for high performance computing for a long time, obviously, you know, Mellanox, where you came from through NVIDIA's acquisition, a strong play in that environment. So maybe give us a little bit of compare and contrast: what's the same and what's different about this highly distributed edge compute, AI, IOT environment, and what's the same with what we were doing with HPC in the past. >> Yeah, so Mellanox has now been a part of Nvidia for a little over a month and it's great to be part of that. We were both focused on accelerated computing and high performance computing. And to do that, what it means is the scale and the type of problems that we're trying to solve are just simply too large to fit into a single computer. So if that's the case, then you connect a lot of computers. And Jensen talked about this recently at the GTC keynote, where he said that the new unit of computing is really the data center. So it's no longer the box that sits on your desk or even in a rack, it's the entire data center, because that's the scale of the types of problems that we're solving. And so with the notion of scale up and scale out, the network becomes really, really critical. And we've been doing high-performance networking for a long time.
When you move to the edge, instead of having a single data center with 10,000 computers, you have 10,000 data centers, each of which has a small number of servers that is processing all of that information that's coming in. But in a sense, the problems are very, very similar, whether you're at the edge or you're doing massive HPC, scientific computing or cloud computing. And so we're excited to be part of bringing together the AI and the networking, because we're really optimizing at the data center scale across the entire stack. >> All right, so it's interesting. You mentioned Nvidia CEO Jensen. I believe, if I saw right in there, he actually coined a term which I had not run across: the data processing unit, or DPU, in that data center, as you talked about. Help us wrap our heads around this a little bit. I know my CPU, when I think about GPUs, I obviously think of Nvidia. TPUs in the cloud and everything we're doing. So, what is a DPU? Is this just some new AI thing, or is this kind of a new architectural model? >> Yeah. I think what Jensen highlighted is that there's three key elements of this accelerated, disaggregated infrastructure that the data center is becoming. And so that's the CPU, which is doing traditional single threaded workloads, but for all of the accelerated workloads, you need the GPU. And that does massive parallelism, deals with massive amounts of data, but to get that data into the GPU and also into the CPU, you really need intelligent data processing, because the scale and scope of GPUs and CPUs today, these are not single core entities. These are hundreds or even thousands of cores in a big system. And you need to steer the traffic exactly to the right place. You need to do it securely. You need to do it virtualized. You need to do it with containers, and to do all of that, you need a programmable data processing unit. So we have something called our BlueField, which combines our latest, greatest 100 gig and 200 gig network connectivity with Arm processors and a whole bunch of accelerators for security, for virtualization, for storage. And all of those things then feed these giant parallel engines, which are the GPUs. And of course the CPU, which is really the workload at the application layer for non-accelerated workloads. >> Great, so Paresh, Kevin talked about needing similar types of services wherever the data is. I was wondering if you could really help expand for us a little bit the implications of AI at the edge. >> Sure, yeah, so AI is basically not just one workload.
AI is many different types of models, and AI also means training as well as inference, which are very different workloads. For AI training, for example, we are seeing the models growing exponentially. Think of an AI model like the brain of a computer, or like a brain solving a particular use case. For simple use cases like computer vision, we have models that are smaller, but advanced models like natural language processing, they require larger brains, or larger models. So on one hand we are seeing the size of the AI models increasing tremendously, and in order to train these models, you need to look at computing at the scale of the data center, many processors, many different servers working together to train a single model. On the other hand, because these AI models are so accurate today, from understanding languages to speaking languages, to providing the right recommendations, whether it's for products or for content that you may want to consume, or advertisements and so on, these models are so effective and efficient that applications are being powered by AI today, and each application requires a small amount of acceleration, so you need the ability to scale out and support many different applications. So with our newly launched Ampere architecture, which Jensen announced just a couple of weeks ago in the virtual keynote, for the first time we are now able to provide both scale up and scale out, both training, data analytics, as well as inference, on a single architecture, and that's very exciting. >> Yeah, so look at that. The other thing that's interesting is you're talking about at the edge and scale out versus scale up, the networking is critical for both of those. And there's a lot of different workloads. And as Paresh was describing, you've got different workloads that require different amounts of GPU or storage or networking. And so part of that vision of this data center as the computer is that the DPU lets you scale everything independently. So you can compose, you disaggregate into DPUs and storage and CPUs, and then you compose exactly the computer that you need on the fly, in a container, right, to solve the problem that you're solving right now. So this new way of programming is programming the entire data center at once, and you'll go grab all of it, and it'll run for a few hundred milliseconds even, and then it'll come back down and recompose itself. And to do that, you need this very highly efficient networking infrastructure. And the good news is we're here at HPE Discover. We've got a great partner with HPE. You know, they have our M series switches that use the Mellanox hundred gig, and now even 200 and 400 gig, ethernet switches, we have all of our adapters, and they have great platforms. The Apollo platform, for example, is great for HPC, and they have other great platforms that we're looking at with the new telco work that we're doing for 5G and accelerating that. >> Yeah, and on the edge computing side, there's the Edgeline set of products, which are very interesting. The other sort of aspect that I wanted to touch upon is the whole software stack that's needed for the edge. So edge is different in the sense that it's not centrally managed, the edge computing devices are distributed across remote locations. And so managing the workflow of running and updating software on them is important and needs to be done in a very secure manner.
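To make that last point concrete, here is a minimal sketch of the verify-before-apply step involved in pushing software to a large, distributed edge fleet. It is purely illustrative: the shared-secret HMAC scheme, key handling, and payload format are assumptions for the sketch, not how NVIDIA or HPE actually manage EGX or Edgeline updates, and real fleets typically use public-key signing behind an orchestration service.

```python
import hashlib
import hmac

SITE_KEY = b"per-site-provisioning-secret"  # hypothetical key provisioned to one edge site

def sign_update(payload: bytes) -> str:
    """Control plane side: sign the update payload before shipping it out."""
    return hmac.new(SITE_KEY, payload, hashlib.sha256).hexdigest()

def apply_at_edge(payload: bytes, signature: str) -> str:
    """Edge site side: refuse to run anything whose signature does not check out."""
    expected = hmac.new(SITE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise RuntimeError("rejecting unsigned or tampered update")
    return "update applied"

manifest = b"container-image: retail-inference, digest: sha256:..."
print(apply_at_edge(manifest, sign_update(manifest)))
```

The point is simply that each of those thousands of unattended sites verifies before it applies, which is what managing edge software "in a very secure manner" has to mean at that scale.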
The second thing that's very different again for the edge is that these devices are going to require connectivity. As Kevin was pointing out, the importance of networking; so we also announced, a couple of weeks ago at our GTC, our EGX product that combines the Mellanox NIC and our GPUs into a single processor. The Mellanox NIC provides the fast connectivity, security, as well as the encryption and decryption capabilities, and the GPUs provide acceleration to run the advanced AI models that are required for applications at the edge. >> Okay, and if I understood that right, you've got these throughout the HPE product line. HPE's got a long history of making flexible configurations. I remember when they first came out with a blade server, it was different form factors, different connectivity options, and they pushed heavily into composable infrastructure. So it sounds like this is just kind of extending, you know, what HP has been doing for a couple of decades. >> Yeah, I think HP is a great partner there, and these new platforms, the EGX, for example, that was just announced, a great workload there is 5G telco. So we'll be working with our friends at HPE to take that to market as well. And, you know, really, there's a lot of different workloads, and they've got a great portfolio of products across the spectrum, from regular servers in 1U, 2U, and then all the way up to their big Apollo platform. >> Well, I'm glad you brought up telco. I'm curious, are there any specific applications or workloads that are the low hanging fruit, or the kind of first targets that you see for AI acceleration? >> Yeah, so you know, the 5G workload is just awesome. We introduced with the EGX a new platform called Aerial, which is a programming framework, and there were lots of partners that were part of that, including folks like Ericsson. And the idea there is that you have a software defined, hardware accelerated radio access network, so a cloud RAN, and it really has all of the right attributes of the cloud. And what's nice there is now you can change, on the fly, the algorithms that you're using for the baseband codecs without having to go climb a radio tower and change the actual physical infrastructure. So that's a critical part. Our role in that, on the networking side: we introduced the technology that's part of EGX in our ConnectX Dx adapter, it's called 5T for 5G. And one of the things that happens is you need this time triggered transport for telco technology. That's the 5T for 5G. And the reason is because you're doing distributed baseband units, distributed radio processing, and the timing between each of those server nodes needs to be super precise, 20 nanoseconds. It's something that simply can't be done in software. And so we did that in hardware. So instead of having an expensive FPGA to try to synchronize all of these boxes together, we put it into our NIC, and now we put that into industry standard servers. HP has some fantastic servers. And then with the EGX platform, with that we can build really scale-out, software-defined cloud RAN. >> Awesome. Paresh, anything else on the application side you'd like to add, just about what Kevin spoke about? >> Oh yeah, so from an application perspective, every industry has applications that touch on the edge. If you take a look at retail, for example, there is, you know, everything from supply chain to inventory management, to keeping the right stock units on the shelves, making sure there is no slippage or shrinkage.
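To put the 20 nanosecond figure Kevin mentions above in perspective, here is a quick back-of-the-envelope calculation. Only the 20 ns requirement comes from the conversation; the link rate and the software-jitter figure are assumed ballparks for illustration.

```python
# Rough arithmetic around the 20 ns synchronization requirement discussed above.
sync_window_s = 20e-9      # 20 nanoseconds, the figure from the conversation
line_rate_bps = 200e9      # assume a 200 Gb/s link, like the adapters mentioned earlier

bits = line_rate_bps * sync_window_s
print(f"20 ns at 200 Gb/s is ~{bits:.0f} bits (~{bits / 8:.0f} bytes) on the wire")

# Kernel/software timestamping jitter is assumed here to sit in the microsecond range,
# i.e. hundreds of these windows, which is the case for doing the timing in NIC hardware.
assumed_sw_jitter_s = 5e-6
print(f"An assumed 5 us of software jitter spans {assumed_sw_jitter_s / sync_window_s:.0f} such windows")
```

Under these assumptions, software timing is off by a couple of orders of magnitude, which is the argument for pushing synchronization down into the adapter.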
To telecom, to healthcare, where we are looking at constantly monitoring patients and taking actions for the best outcomes, to manufacturing, where we are looking to automate production, detecting failures much earlier in the production cycle, and so on. Every industry has different applications, but they all use AI. They can all leverage the computing capabilities and high-speed networking at the edge to transform their business processes. >> All right, well, it's interesting, almost every time we've talked about AI, networking has come up. So, you know, Kevin, I think that probably clears up a little bit why Nvidia spent around $7 billion for the acquisition of Mellanox, and not only was it the Mellanox acquisition, but Cumulus Networks, very well known in the network space for a software defined, really, operating system for networking. But give us, strategically, does this change the direction of Nvidia? How should we be thinking about Nvidia in the overall network? >> Yeah, I think the way to think about it is going back to that data center as the computer. And if you're thinking about the data center as the computer, then networking becomes the backplane, if you will, of that data center computer, and having a high performance network is really critical. And Mellanox has been a leader in that for 20 years now, with our InfiniBand and our Ethernet products. But beyond that, you need a programmatic interface, because one of the things that's really important in the cloud is that everything is software defined and it's containerized now, and there is no better company in the world than Cumulus, really the pioneer in building Cumulus Linux, taking the Linux operating system and running that on multiple platforms. So not just hardware from Mellanox, but hardware from other people as well. And so that whole notion of an open networking platform that we're committed to, you need to support that, and now you have a programmatic interface that you can drop containers on top of. Cumulus has been the leader in Linux FRR, that's Free Range Routing, which is the core routing algorithm. And that really is at the heart of other open source network operating systems like SONiC and DENT, so we see a lot of synergy here, all the analytics that Cumulus is bringing to bear with NetQ. So it's really great that they're going to be part here of the Nvidia team. >> Excellent, well, thank you both so much. Want to give you the final word: what should HPE customers and their ecosystem know about the Nvidia and HPE partnership? >> Yeah, so I'll start. You know, I think HPE has been a longtime partner and a customer of ours. If you have accelerated workloads, you need to connect those together, and the HPE server portfolio is an ideal place. We can combine some of the work we're doing with our new Ampere and existing GPUs, and then also connect those together with the M series, which is their ethernet switches that are based on our Spectrum switch platforms, and then all of the HPC related activities on InfiniBand, where they're a great partner there. And so all of that, pulling it together, and now at the edge, as edge becomes more and more important, security becomes more and more important, and you have to go to this zero trust model. If you plug in a camera that somebody has at the edge, even if it's on a car, you can't trust it. So everything has to become validated, authenticated, all the data needs to be encrypted. And so they're going to be a great partner, because they've been a leader in building the most secure platforms in the world.
>> Yeah, and on the data center server portfolio side, we really work very closely with HP on various different lines of products, really fantastic servers, from the Apollo line of scale up servers, to the Synergy and ProLiant lines, as well as the Edgeline for the edge, and on the supercomputing side with the Cray side of things. So we really work across the full spectrum of solutions with HP. We also work on the software side, where a lot of these servers are also certified to run a full stack under a program that we call NGC-Ready, so customers get phenomenal value right off the bat. They're guaranteed to have accelerated workloads work well when they choose these servers. >> Awesome, well, thank you both for giving us the updates, lots happening, obviously, in the AI space. Appreciate all the updates. >> Thanks Stu, great to talk to you, stay well. >> Thanks Stu, take care. >> All right, stay with us for lots more from HPE Discover Virtual Experience 2020. I'm Stu Miniman, and thank you for watching theCUBE. (bright upbeat music)
Scott Raynovich, Futuriom | Future Proof Your Enterprise 2020
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. (smooth music) >> Hi, I'm Stu Miniman, and welcome to this special exclusive presentation from theCUBE. We're digging into Pensando and their Future Proof Your Enterprise event. To help kick things off, welcoming in a friend of the program, Scott Raynovich. He is the principal analyst at Futuriom, coming to us from Montana. I believe it's the first time we've had a guest on the program in the state of Montana, so Scott, thanks so much for joining us. >> Thanks, Stu, happy to be here. >> All right, so we're going to dig a lot into Pensando. They've got their announcement with Hewlett Packard Enterprise. Might help if we give a little bit of background, and definitely I want Scott and I to talk a little bit about where things are in the industry, especially what's happening in networking, and how some of the startups are helping to impact what's happening on the market. So for those that aren't familiar with Pensando, if you follow networking I'm sure you are familiar with the team that started them. They are known, for those of us that watch the industry, as MPLS, which are four people, not to be confused with the protocol MPLS, but they had very successfully done multiple spin-ins for Cisco: Andiamo, Nuova and Insieme, which created Fibre Channel switches, the Cisco UCS, and the ACI product line, so multiple generations of the Nexus, and Pensando is their company. They talk about Future Proof Your Enterprise, which is the proof point that they have today talking about the new edge. John Chambers, the former CEO of Cisco, is the chairman of Pensando. Hewlett Packard Enterprise is not only an investor, but also a customer and OEM of this solution, so a very interesting piece, and Scott, I want to pull you into the discussion. The waves of technology, I think, the last 10, 15 years in networking, a lot of it has been: can Cisco be disrupted? So software-defined networking was, let's get away from hardware and drive towards more software. Lots of things happening. So I'd love your commentary. Just some of the macro trends you're seeing, Cisco's position in the marketplace, how the startups are impacting them. >> Sure, Stu. I think it's very exciting times right now in networking, because we're just at the point where we kind of have this long battle of software-defined networking, like you said, really pushed by the startups, and there's been a lot of skepticism along the way, but you're starting to see some success, and the way I describe it is we're really on the third generation of software-defined networking. You have the first generation, which was really one company, Nicira, which VMware bought and turned into their successful NSX product, which is a virtualized networking solution, if you will, and then you had another round of startups, people like Big Switch and Cumulus Networks, all of which were acquired in the last year. Big Switch went to Arista, and Cumulus just got purchased by... Who were they purchased by, Stu? >> Purchased by Nvidia, who interestingly enough, they just picked up Mellanox, so watching Nvidia build out their stack. >> Sorry, I was having a senior moment. It happens to us analysts. (chuckling) But yeah, so Nvidia's kind of rolling up these data center and networking plays, which is interesting because Nvidia is not a traditional networking hardware vendor. It's a chip company.
So what you're seeing is kind of this vision of what they call in the industry disaggregation. Having the different components sold separately. And then of course Cisco announced the plan to roll out their own chip, and so that's disaggregated from the network as well. When Cisco did that, they acknowledged that this is successful, basically. They acknowledged that disaggregation is happening. It was originally driven by the large public cloud providers like Microsoft Azure and Amazon, which started the whole disaggregation trend by acquiring different components and then melding it all together with software. So it's definitely the future, and so there's a lot of startups in this area to watch. I'm watching many of them. They include ArcOS, which is an exciting new routing vendor. DriveNets, which is another virtualized routing vendor. This company Alkira, which is going to do routing fully in the cloud, multi-cloud networking. Aviatrix, which is doing multi-cloud networking. All these are basically software companies. They're not pitching hardware as part of their value add, or their integrated package, if you will. So it's a different business model, and it's going to be super interesting to watch, because I think the third generation is the one that's really going to break this all apart. >> Yeah, you brought up a lot of really interesting points there, Scott. That disaggregation, and some of the changing landscape. Of course that more than $1 billion acquisition of Nicira by VMware caused a lot of tension between VMware and Cisco. Interesting. I think back to when Cisco created the UCS platform, it created a ripple effect in the networking world also. HP was a huge partner of Cisco's before UCS launched, and not long after UCS launched, HP stopped selling Cisco gear. They got heavier into the networking component, and then here, many years later, we see who does the MPLS team partner with when they're no longer part of Cisco, and Chambers is no longer the CEO? Well, it's HPE front and center there. You're going to see John Chambers at HPE Discover, so it was a long relationship and change. And from the chip companies, Intel, of course, has built a sizeable networking business. We talked a bit about Mellanox and the acquisitions they've done. One you didn't mention, but which caused a huge impact in the industry and is something that Pensando's responding to, is Amazon, with Annapurna Labs. Annapurna Labs, a small Israeli company, is really driving a lot of the innovation when it comes to compute and networking at Amazon. Graviton compute and Nitro are what power their Outposts solutions, so if you look at Amazon, they buy lots of pieces. It's that mixture of hardware and software. In the early days, people thought that they just bought kind of off-the-shelf white boxes and did it cheap, but really we see Amazon hyper optimizes what they're doing. So Scott, let's talk a little bit about Pensando if we can. Amazon, with the Nitro solutions, built out Outposts, which is their hybrid solution, so the same stack that they put in Amazon they can now put in customers' data centers. What Pensando's positioning is, well, other cloud providers and enterprises, rather than having to buy something from Amazon, we're going to enable that. So what do you think about what you've seen and heard from Pensando, and what's the need in the market for these types of solutions? >> Yes, okay. So I'm glad you brought up Outposts, because I should've mentioned this next trend.
We have, if you will, the disaggregated, open, software-based networking which is going on. It started in the public cloud, but then you have another trend taking hold, which is the so-called edge of the network, which is going to be driven by the emergence of 5G, and the technology called CBRS, and different wireless technologies that are emerging at the so-called edge of the network. And the purpose of the edge, remember, is to get closer to the customer, get larger bandwidth, and compute, and storage closer to the customer, and there's a lot of people excited about this, including the public cloud providers. Amazon's building out their Outposts, Microsoft has an Edge stack, the Azure Edge Stack that they've built. They've acquired a couple companies for $1 billion. They acquired Metaswitch, they acquired Affirmed Networks, and so all these public cloud providers are pushing their cloud out to the edge with this infrastructure, a combination of software and hardware, and that's the opportunity that Pensando is going after with this Outposts theme. And it's very interesting, Stu, because the coopetition is very tenuous. A lot of players are trying to occupy this edge. If you think about what Amazon did with public cloud, they sucked up all of this IT compute power and services applications, and everything moved from these enterprise private clouds to the public cloud, and Amazon's market cap exploded, right, because they were basically sucking up all the money for IT spending. So now, if this moves to the edge, we have this arms race of people that want to be on the edge. The way to visualize it is a mini cloud. Whether this mini cloud is at the edge of Costco, so that when Stu's shopping at Costco there's AI that follows you in the store, knows everything you're going to do, and predicts you're going to buy this cereal, and "We're going to give you a deal today. Here's a coupon." This kind of big brother-ish AI tracking thing, which is happening whether you like it or not. Or autonomous vehicles that need to connect to the edge, and have self-driving, and have very low latency services very close to them, whether that's on the edge of the highway or wherever you're going in the car. You might not have time to go back to the public cloud to get the data, so it's about pushing these compute and data services closer to the customers at the edge, and having very low latency, and having lots of resources there, compute, storage, and networking. And that's the opportunity that Pensando's going after, and of course HPE is going after that, too, and HPE, as we know, is competing with its other big mega competitors, primarily Dell, the Dell/VMware combo, and the Cisco... The Cisco machine. At the same time, the service providers are interested as well. By the way, they have infrastructure. They have central offices all over the world, so they are thinking that can be an edge. Then you have the data center people, the Equinixes of the world, who also own real estate and data centers that are closer to the customers in the metro areas, so you really have this very interesting dynamic of all these big players going after this opportunity, putting in money, resources, and trying to acquire the right technology. Pensando is right in the middle of this. They're going after this opportunity using the P4 networking language, and a specialized ASIC, and a NIC that they think is going to accelerate processing and networking at the edge.
As you said, the first incarnation of this, it's a NIC, and boy, I think back to years ago. It's like, well, we tried to make the NIC really simple, or do we build intelligence in it? How much? The hardware versus software discussion. What I found interesting is if you look at this team, they were really good, they made a chip. It's a switch, it's an ASIC, it became compute, and if you look at the technology available now, they're building a lot of your networking just in a really small form factor. You talked about P4. It's highly programmable, so the theme of Future Proof Your Enterprise. With anything you say, "Ah, what is it?" It's a piece of hardware. Well, it's highly programmable, so today they position it for security, telemetry, observability, but if there's other services that I need to get to edge, so you laid out really well a couple of those edge use cases and if something comes up and I need that in the future, well, just like we've been talking about for years with software-defined networking, and network function virtualization, I don't want a dedicated appliance. It's going to be in software, and a form factor like Pensando does, I can put that in lots of places. They're positioning they have a cloud business, which they sell direct, and expect to have a couple of the cloud providers using this solution here in 2020, and then the enterprise business, and obviously a huge opportunity with HPE's position in the marketplace to take that to a broad customer base. So interesting opportunity, so many different pieces. Flexibility of software, as you relayed, Scott. It's a complicated coopetition out there, so I guess what would you want to see from the market, and what is success from Pensando and HPE, if they make this generally available this month, it's available on ProLiant, it's available on GreenLake. What would you want to be hearing from customers or from the market for you to say further down the road that this has been highly successful? >> Well, I want to see that it works, and I want to see that people are buying it. So it's not that complicated. I mean I'm being a little superficial there. It's hard sometimes to look in these technologies. They're very sophisticated, and sometimes it comes down to whether they perform, they deliver on the expectation, but I think there are also questions about the edge, the pace of investment. We're obviously in a recession, and we're in a very strange environment with the pandemic, which has accelerated spending in some areas, but also throttled back spending in other areas, and 5G is one of the areas that it appears to have been throttled back a little bit, this big explosion of technology at the edge. Nobody's quite sure how it's going to play out, when it's going to play out. Also who's going to buy this stuff? Personally, I think it's going to be big enterprises. It's going to start with the big box retailers, the Walmarts, the Costcos of the world. By the way, Walmart's in a big competition with Amazon, and I think one of the news items you've seen in the pandemic is all these online digital ecommerce sales have skyrocketed, obviously, because people are staying at home more. They need that intelligence at the edge. They need that infrastructure. And one of the things that I've heard is the thing that's held it back so far is the price. They don't know how much it's going to cost. We actually ran a survey recently targeting enterprises buying 5G, and that was one of the number one concerns. 
How much does this infrastructure cost? So I don't actually know how much Pensando costs, but they're going to have to deliver the right ROI. If it's a very expensive proprietary NIC, who pays for that, and does it deliver the ROI that they need? So we're going to have to see that in the marketplace, and by the way, Cisco's going to have the same challenge, and Dell's going to have the same challenge. They're all racing to supply this edge stack, if you will, packaged with hardware, but it's going to come down to how is it priced, what's the ROI, and are these customers going to justify the investment. That is the trick. >> Absolutely, Scott. Really good points there, too. Of course the HPE announcement, big move for Pensando. Doesn't mean that they can't work with the other server vendors. They absolutely are talking to all of them, and we will see if there are alternatives to Pensando that come up, or if they end up signing with them. All right, so what we have here is I've actually got quite a few interviews with the Pensando team, starting with, I talked about MPLS. We have Prem Jain and Soni Jiandani, who are the P and the S in MPLS, as part of it. Both co-founders, Prem is the CEO. We have Silvano Gai, who, anybody that's followed this group knows, writes the book on it. If you've watched all the way this far and want to learn even more about it, I actually have a few copies of Silvano's book, so if you reach out to me, the easiest way is on Twitter. Just hit me up at @Stu. I've got a few copies of the book about Pensando, which you can go through for all those details about how it works, the programmability, what changes, and everything like that. We've also, of course, got Hewlett Packard Enterprise, and while we don't have any customers for this segment, Scott mentioned many of the retail ones. Goldman Sachs is kind of the marquee early customer, so we did talk with them. I have Randy Pond, who's the CFO, talking about how they've actually seen an increase beyond what they expected at this point of being out of stealth, only a little over six months, even more, which is important considering that it's tough times for many startups coming out in the middle of a pandemic. So watch those interviews. Please hit us up with any other questions. Scott Raynovich, thank you so much for joining us to help talk about the industry and this Pensando partnership extending with HPE. >> Thanks, Stu. Always a pleasure to join theCUBE team. >> All right, check out thecube.net for all the upcoming events, as well as, if you just search "Pensando" on there, you can see everything we had on there. I'm Stu Miniman, and thank you for watching theCUBE. (smooth music)
Joseph Jacks, OSS Capital | CUBEConversation, October 2018
(bright symphony music) >> Hello, I'm John Furrier, the founder of SiliconANGLE Media and co-host of theCUBE. We're here in Palo Alto at our studio. I'm joined by Joseph Jacks, the founder and general partner of OSS Capital. Open Source Software Capital is what OSS stands for. He's also the founder of KubeCon, which now is part of the CNCF. It's a huge conference around Kubernetes. He's a cloud guy. He knows open source. Very well respected in the industry, and also a great guest and friend of theCUBE, a CUBE alumni. Joseph, great to see you. Also known as JJ. JJ, good to see you. >> Thank you for having me on again, John. >> Hey, great to have you come on. I know we've talked many times on theCUBE, but you've got some exciting news. You got a new firm, OSS Capital. Open Source Software, not operational support like a telco, but this is an investment opportunity where you're making investments. Congratulations. >> Thank you. >> So I know you can't talk about some of the specifics on the fund size, but you are actually going to go out, talk to entrepreneurs, make some equity investments. Around open source software. What's the thesis? How did you get here, why did you do it? What's motivating you, and what's the thesis? >> A lot of questions in there. Yeah, I mean, this is a really profoundly huge year for open source software. On a bunch of different levels. I think the biggest kind of thing everyone anchors towards is GitHub being acquired by Microsoft. Just a couple of weeks ago, we had the two huge Hadoop vendors join forces. That, I think, surprised a lot of people. MuleSoft, which is a big open source middleware company, getting acquired by Salesforce just a year after going public. Just a huge outcome. I think one observation, just to sort of summarize the year 2018, is actually, starting in January, almost on sort of a monthly basis, we've observed a major sort of open source software company outcome. And sort of kicking off the year, we had CoreOS getting acquired by Red Hat. Brandon and Alex, the founders over there, built a really interesting company in the Kubernetes ecosystem. And I think in February, Alfresco, which is an open source content portal, taking a privatization outcome from a private equity firm. I believe in March we had Magento getting acquired by Adobe, which is an open source based CMS, a PHP CMS. So just a lot of activity for significant outcomes. Multibillion dollar outcomes of commercial open source companies. And open source software is something like 20 years old. 20 years in the making. And this year in particular, I've just seen a huge amount of large scale outcomes that have been many years in the making, from companies that have taken lots of venture funding. And in a lot of cases, sort of partially focused funding from different investors that have an affinity for open source software and sort of understand the uniqueness of the open source model when it's applied to business, when it's applied to company building. But more sort of opportunistic and sort of affinity oriented, as opposed to a pure focus. So that's kind of been part of the motivation. I'd say the more authentically compelling motivation for doing this is that it just needs to exist. This is sort of a model that is happening by necessity. We're seeing more and more software companies be open source software companies. So open source first. They're built in a distributed way. They're leveraging engineers and talent around the world.
They're just part of this open source kind of philosophy. And they are fundamentally kind of commercial open source software companies. We felt that if you had a firm basically designed in a way to exclusively focus on those kinds of companies, and where the firm were actually backed and supported by the founders of the largest commercial open source companies in the world over sort of the last decade, that could actually deliver a lot of value. So we've been sort of blogging a little bit about this. >> And you wrote a great post on it. I read about open source monetization. But I think one of the things I'm seeing as well that supports your thesis, and I like to get your reaction to it, because I think this is something that's not really talked about, but open source is still young. I mean, you go back. I remember the days when we used to have to hide in the shadows to get licenses and pirate stuff and do all that crazy stuff. But now, it's only a couple decades away. The leaders that were investing were usually entrepreneurs that've been successful. The Rob Bearns, the Amar Wadhwa, the guy that did Spring. All these different open source. Linux, obviously, a great success story. But there hasn't been any institutional... Yeah, you got Benchmark, other things, done some investments. A discipline around open source. Where open source is now table stakes in all software development. Cloud is scaling, scaling out globally. There's no real foc- There's never been a firm that's been focused on just open source from a commercial standpoint, while maintaining the purity and ethos of open source. I mean, is that... >> You agree? >> That's true. >> 100%, yeah. That's been the big part of creating the firm, aligning and solving for a pure focused structure. And I think what I'll say abstractly is this sort of venture capital, venture style approach to funding enterprise technology companies, software companies in general, has been to kind of find great entrepreneurs, in an abstract way, that can build great technology companies. Can bring them to market, can sell them, and can scale them, and so on. And either create categories, or dominate existing categories, and disrupt incumbents, and so on. And I think while that has worked for quite a while, in the venture industry overall, in the 50, 60 years of the venture industry, lots of successful firms, I think what we're starting to see is a necessary shift toward accounting for the fundamental differences of open source software as it relates to new technology getting created and new software companies kind of coming into market. So we actually fundamentally believe that commercial open source software companies are fundamentally different, functionally, in almost every way, as compared to proprietary closed source software companies of the last 30 years. And the way we've sort of designed our firm, and we'll be about ten people pretty soon. We're just about a month in. We're growing the team quickly, but we're sort of a small, focused team. >> Ten is focused, small. I mean, I know venture firms that have two billion under management that don't have more than 20 people. >> Well, we have portfolio partners that are focused in different functional areas where commercial open source software companies have really fundamental differences. If you were to sort of stack rank, by function, where commercial open source software companies are really fundamentally different, sort of top to bottom, legal would probably be the very top of the list.
Right, in terms of license compliance management, structuring all the sort of protections and provisions around how intellectual property is actually shipped to and sold to customers. The legal licensing aspects. The commercial software licensing. This is quite a polarizing hot topic these days. The second big functional area where we have a portfolio partner focused on this is finance. Finance is another area where commercial open source software companies have to sort of behaviorally orient and apply that function very, very differently as compared to proprietary software companies. So we're crazy honored and excited to have world experts and very respected leaders in those different areas sort of helping to provide different pillars of wisdom to our portfolio companies, our portfolio founders, in those different functional areas. And we provide a really focused kind of structure for them. >> Well, I want to ask you the kind of question that bridges the old way and the new way, 'cause I definitely see you guys being new and different, which is good. Or as Andy Jassy would say, you can be misunderstood for a while, but as you become successful, people will start understanding what you do. And that's a great example of Amazon. The pattern with success is traditionally the same. If we kind of encapsulate the difference between open source old and new, it is that you have something of value, and you're disrupting the market and collecting rents from it. Or revenue, or profit. So that's commercial, that's how businesses run. How are you guys going to disrupt, with open source software, the next generation of value creation? We know how value's created, certainly in software: open source has shown a path on how to create value in writing software, if code is value and functionality's value. But to commercialize and create revenue, which is people paying something for something, that's a little bit different kind of value extraction from the value creation. So open source software can create value in functionality and value product. Now you bring it to the market, you get paid for it, you have to disrupt somebody, you have to create something. How are you looking at that? What's the vision of the creation, the extraction of value, who's disrupted, is it greenfield new opportunities? What's your vision? >> A lot of nuance and complexity in that question. What I would say is- >> Well, open source is creating products. >> Well, open source is the basis for creating products in a different kind of way. I'll go back to your question around, let's just sort of maybe simplify it as the value creation and the value capture dynamics, right? We've sort of written a few posts about this, and it's subtle, but it's easy to understand if you look at it from a fundamental kind of perspective. We actually believe, and we'll be publishing research on this, and maybe even sort of more principled, scientific, perhaps, ways of looking at it, and then blog posts and research. We believe that open source software will always generate or create orders of magnitude more value than any constituent can capture. Right, and that's a fundamental way of looking at it. So if you see how cloud providers are capturing value that open source creates, whether it's Elasticsearch, or Postgres, or MySQL or Hadoop.
And then commercial open source software companies that capture value that open source software creates, whether it's companies like Confluent around Kafka, or Cloudera around Hadoop, or Databricks around Apache Spark. Or whether it's the creators of those projects. The creators of Spark and Hadoop and Elasticsearch, sometimes many of them are the founders of those companies I mentioned, and sometimes they're not. We just believe, regardless of how that sort of value is captured by the cloud providers, the commercial vendors, or the creators, the value created relative to the value captured will always be orders and orders of magnitude greater. And this is expressed in another way, which may be easier to understand, sort of reinforcing this kind of assertion that there's orders of magnitude more value created than what can be captured. If you were to do a survey, which we're currently in the process of doing, and I'm happy to sort of say that publicly for the first time here, of all the commercial open source software companies that have projects with large, significant adoption, whether, say for example, it's Docker, with millions of users, or Apache Hadoop, how many Hadoop deployments there are, how many customer companies there are running Hadoop deployments, or maybe even MySQL, how many MySQL installations are there. And then you were to sort of survey those companies and see how many end users are there relative to how many customers are paying for the usage of the project. It would probably be something like, if there were a million users of a given project, the company behind that project, or the cloud provider, or say the end user, the developer behind the project, is unlikely to convert more than, say, 1% or a couple percent of those end users to companies, to paying companies, to paying customers. And many times, that's high. Many times, 1% to 2% is very high. Often, what we've seen actually anecdotally, and we're doing principled research around this, and we'll have data here across a large number of companies, many times it's a fraction of 1%. Which is just sort of maybe sometimes 10% of 1%, or even smaller. >> So the practitioners will be making more money than the actual vendors? >> Absolutely right. End users and practitioners always stand to benefit far greater, because of the fundamental nature of open source. It's permissionless, it's disaggregated, the value creation dynamics are untethered, and it is fundamentally freely available to use, freely available to contribute to, with different constraints based on the license. However, all those things are sort of disaggregating the creation of technology into sort of an unbounded network. And that's really, really incredible.
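To make that "fraction of 1%" point concrete, here is a quick illustrative calculation. The user count and contract value are hypothetical; only the conversion-rate range comes from the conversation.

```python
# Illustrative only: hypothetical user base and deal size, conversion rates from the discussion.
users = 1_000_000                       # assumed free users of an open source project
assumed_acv = 25_000                    # hypothetical average annual contract value, in dollars

for conversion in (0.001, 0.01, 0.02):  # 0.1%, 1%, 2% of users becoming paying customers
    customers = int(users * conversion)
    revenue = customers * assumed_acv
    print(f"{conversion:.1%} conversion -> {customers:>6,} customers, ~${revenue:>11,} ARR")
```

Even at the generous end, the captured revenue is a sliver of whatever a million deployments are worth to the people running them, which is the asymmetry JJ is describing.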
Then the traditional net present value cash flow metric of the value of the firm, not your firm, but, like, if I'm an open source firm, I'm only one portion of the extraction. I'm a supplier, and I'm an enabler, the valuation on cash flow might not be as great as the real impact. So the question I have for you, have you thought about the valuation? 'Cause now you're thinking about bigger construct community network effects. These are new dynamics. I don't think anyone's actually crunched a valuation model around this. So if someone knew that, say for example, an open source project created all this value, and they weren't necessarily harvesting it from a cash flow perspective, there might be other ways to monetize it. Have you though about that, and what's your reaction to that concept? 'Cause capitalism would kind of shake down the system. 'Cause why would someone be motivated to participate if they're not capturing any value? So if the value shifts, are they still going to be able to participate? You follow the logic I'm trying to- >> I definitely do. I think what I would say to that is we expect and we encourage and we will absolutely heavily invest in more business model innovation in the area of open source. So what I mean by that is, and it's important to sort of qualify a few things there. There's a huge amount of polarization and lack of consensus, lack of industry consensus on what it actually means to have or implement an open source based business model. In fact there's a lot of people who just sort of point blankedly assert that an opensource business model does not exist. We believe that many business models for monetizing and commercializing open source exist. We've blogged and written about a few of them. Their services and training and support. There's open core, which is very effective in sort of a spectrum of ways to implement open core. Around the core, you can have a thin crust or a thick crust. There's SAS. There are hardware based distribution models, things like Sourcefire, and Cumulus Networks. And there are also network based approaches. For example, project called Storj or Stor-J. Being developed and run now by Ben Golub, who's the former CEO of Docker. >> CUBE alumni. >> Ben's really great open source veteran. This is a network, kind of decentralized network based approach of sort of right sizing the production and consumption of the resource of a storage based open source project in a decentralized network. So those are sort of four or five ways to commercializing value, however, four or five ways of commercializing value, however what we believe is that there will be more business model innovation. There will be more developments around how you can better capture more, or in different ways, the value that open source creates. However, what I will say though, is it is unrealistic to expect two things. It is unrealistic and, in fact, unfair to expect that any of those constituents will contribute back to open source proportional to the value that they received from it, or the benefit, and I'm actually paraphrasing Doug Cutting there, who tweeted this a couple of years ago. Very profoundly deep, wise tweet, which I very strongly agree with. And it is also unrealistic to expect a second thing, which is that any of those constituents can capture a material portion of the value that open source creates, which I would assert is many trillions of dollars, perhaps tens of trillions of dollars. It's really hard to quantify that. 
And it's not just dollars in an economic sense, it's dollars in productivity, time saved, new markets, new areas, and so on. >> Yeah, I think this is interesting, and I think that we'll be an open book at that. But I will say that what I've observed in looking through all these CUBE interviews, I think that business model innovation absolutely is something that is IP. >> We need it. >> Well, it's now intellectual property. The business model isn't, hey, I went to business school, learned this at Babson or Harvard, I learned this business model. We're going to do SaaS premium. Okay, I get that. There's going to be very interesting new innovations coming, and I think that's the new IP. 'Cause open source, if it's community based, there's going to be formulas. So that's going to be really inter- Okay, so now let's get back to the actual funding itself. You guys are doing early stage. Can you take us through the approach? >> We're very focused on early stage investing, and backing teams that are just sort of welcoming the idea of a commercial entity around their open source project. Or building a business fundamentally dependent on an open source project, or maybe even more than one. The reason for that is this is really where there's a lot of structural inefficiency in supporting and backing those types of founders. >> I think one of the things with ... is with that acquisition. They were pure on the open source side, doing a great job, didn't want to push the business model too hard, because with open source, let's face it, you got people like, eh, I don't want to get caught on the business side, and get revenue, perverse incentives might come up, or fear of incentives that might be different or not aligned. Was a great value. >> I think so. >> So Red Hat got a steal on that one. But as you go forward, there's going to be certainly a lot more stuff. We're seeing a lot of it now in CNCF, for instance. I want to get your thoughts on this because, being the co-founder of KubeCon, and donating it to the CNCF, Kubernetes is the hottest thing on the planet, as we talked about many years ago. What's your take on that now? I see exciting things happening. What is the impact of Kubernetes, in your opinion, on the world, and where do you see that evolving rapidly, and where is the focus here that people should be paying attention to? >> I think that Kubernetes replaces EC2. Kubernetes is a disaggregated API for distributed computing anywhere. And it happens to be portable and able to run on any kind of computer infrastructure, which sort of makes it like a liquid, disaggregated, EC2-like API. Which a lot of people have been sort of chasing and trying to implement for many years with things like OpenStack or Eucalyptus. But interestingly, Kubernetes is sort of the right abstraction for distributed computing, because it meets people where they are architecturally. It's sort of aligned with this current movement around distributed systems first designs. Microservices, packaging things in small compartmentalized units. >> Good for integrating with existing stuff. >> Absolutely, and it's very composable, un-opinionated architecturally. So you can sort of take an application and structure it in any given way, and as long as it has this sort of isolation boundary of a container, you can run it on Kubernetes without needing to sort of retrofit the architecture, which is really awesome.
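A minimal sketch of that "one API surface anywhere" idea, using the official Kubernetes Python client. It assumes `pip install kubernetes` and a kubeconfig pointing at some conformant cluster; the same few lines work whether that cluster sits in a public cloud, on-prem, or at the edge.

```python
# Minimal sketch: the same Kubernetes API surface regardless of where the cluster runs.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```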
I think Kubernetes is a foundational part of the next kind of computing paradigm, in the same way that Linux was foundational to the computing paradigm that gave rise to the internet. We had commodity hardware meeting open source based sort of cost reduction and efficiency, which really Linux enabled, and the movement toward scale-out data center infrastructure that supported the internet's sort of maturity and infrastructure. I think we're starting to see the same type of repeat effect thanks to Kubernetes basically being really well received by engineers, by the cloud providers. It's now the universal sort of standard for running container based applications on the different cloud providers. >> And I think having that un-opinionated posture, as you said, architectural posture, allows it to be compatible with a new kind of heterogeneity. >> Heterogeneity is critical. >> Heterogeneity is key, 'cause it's not just within the environment, it's also within each vendor, or customer, that has more heterogeneity. So, okay, now that's key. So multi cloud, I want to get your thoughts on multi cloud, because now this goes into some of the things that might build on top of it if Kubernetes continues to go down the road that you say it does. Then the next question is, stateful applications, service meshes. >> A lot of buzz words. A lot of buzz words in there. Stateful applications are real because at a certain point in time, you have a maturity curve with critical infrastructure that starts to become appealing for stateful, mission critical storage systems, which is typically where you have all the crown jewels of a given company's infrastructure, whether it's a transactional system, or reading and writing core customer or financial service information, or whatever it is. So Kubernetes is starting to hit this maturity curve where people are migrating really serious, mission critical storage workloads onto that platform. And obviously we're going to start to see even more critical workloads. We're starting to see Edge workloads, because Kubernetes is a pretty low footprint system, so you can run it on Edge devices, you can even run it on microcontrollers. We're sort of past the experimental, you know, fun and games with Raspberry Pi sort of towers, and people are actually legitimately doing real world Edge kind of deployments with Kubernetes. We're absolutely starting to see multi-geo, multi-replication, multi-cloud sort of style architectures becoming real, as well. Because Kubernetes is this API that the industry's agreeing upon sufficiently. We actually have agreement around this sort of surface area for distributed system style computing that, if cloud providers can actually standardize on it in a way that lets application specific vendors or new types of application deployment models innovate further, then we can really unlock this sort of tight coupling of proprietary services inside cloud providers and disaggregate it. Which is really exciting, and, I forget, the Netscape, Jim Barksdale line. Bundling, un-bundling. We're starting to see the un-bundling of proprietary cloud computing service APIs. Things like Kinesis, and ALB and ELB, and proprietary storage services, and these other sticky services get un-bundled because of two big things. Open source, obviously, we have open source alternative data paths. And then we have Kubernetes, which allows us to sort of disaggregate things out pretty easily.
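One concrete way to see the "open source alternative data paths" un-bundling JJ describes: because open source stores speak the same wire API, the application's storage calls stop being tied to one provider. A minimal sketch, not from the interview, assuming boto3 and a self-hosted S3-compatible store such as MinIO at a placeholder endpoint with placeholder credentials:

```python
# Minimal sketch: the same S3 client code pointed either at the proprietary service
# or at a self-hosted, API-compatible open source store (e.g. MinIO).
# Endpoint and credentials are placeholders.
import boto3

def make_store(endpoint_url=None):
    # endpoint_url=None -> the provider's default endpoint;
    # any other value -> an S3-compatible open source alternative.
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="PLACEHOLDER",
        aws_secret_access_key="PLACEHOLDER",
    )

proprietary = make_store()                              # Amazon S3
self_hosted = make_store("http://minio.internal:9000")  # assumed MinIO endpoint

for store in (proprietary, self_hosted):
    store.put_object(Bucket="demo", Key="hello.txt",
                     Body=b"same application code, different backend")
```

The application code is identical in both cases; only configuration decides whether the data path lands on the proprietary service or the open source alternative.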
>> I want to hear your thoughts, one final concept, before we break, 'cause I was having a private conversation with three people besides myself. A big time CIO of a company that, if I said the name, everyone would go, oh my god, that guy is huge, he's seen it all, going back many, many years. Currently doing a lot of innovation. A hardcore network chip guy who knows networking, old school infrastructure. And then a cloud native application founder who knows a lot about software development and is state-of-the-art cloud native. So cloud native, all experienced, old-school, kind of about my age, a cloud native app developer, a big time CIO, and a chip networking kind of infrastructure guy. And we're talking, and one thing that came out, I want to get your thoughts on this, he says, so what's going on with DevOps, how do you see this service mesh, is a stay for (mumbles) on top of the stack, no stacks, horizontally scalable. And the comment that came out was storage and networking have had this relationship with everything since day one. Network moves a packet from point A to point B, and nothing happens in between, maybe some inspection. And storage goes from now to then, because you store it. He goes, that premise moves up the stack, so then the cloud native guy goes, well that's what's happening up at the top, there's a lot of moving things around, workloads and or services, provisioning services, and then from now to then, state. In real time. And what dawned on us in the next conversation, the CIO goes, well this is exactly our challenge. We have under the hood infrastructure being programmable, >> We're having some trouble with the connection. Please try again. >> My phone's calling me. >> Programmable connections. >> So you got the programmable on the top of the stack too, so the CIO said, that's exactly the problem we're trying to solve. We're trying to solve some of these network storage concepts now at an application level. Your thoughts on that. >> Well, I think, if I could tease apart everything you just said, which is a profound synthesis of a lot of different things, I think we've started to see application logic leak out of application code itself into dedicated layers that are really good at doing one specific thing. So traditionally we had some CRUD style kind of behavioral semantics implemented around business logic. And then, inside of that, you also had libraries for doing connectivity and lookups and service discovery and locking and key management and encryption and coordination with other types of applications. And all that stuff was sort of shoved into the single big application binary. And now we're starting to see all those language runtime specific parts of application code sort of crack or leak out into these dedicated, highly scalable, Unix philosophy oriented sort of layers. So things like Envoy are really just built for the sort of nervous system layer of application communication fabric, up and down the layer two through layer seven sort of protocol transport stack, which is really profound. We're seeing things like Vault from HashiCorp handle secure key storage, persistence of application authentication, authorization metadata, and information to sort of access different systems and endpoints. And that's a dedicated sort of stateful layer that you can sort of fragment out and delegate sort of application specific functionality to, which is really great for scalability reasons. And on, and on, and on.
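A minimal sketch of the Vault pattern JJ mentions, where secret handling leaks out of the application binary into a dedicated layer the app queries at runtime. This is an illustration, not from the interview; it assumes the hvac client, a KV v2 secrets engine at the default "secret/" mount, and placeholder address, token, and secret names.

```python
# Minimal sketch: the application asks a dedicated secrets layer (Vault) at runtime
# instead of compiling credentials into its own binary. Address, token, path, and
# key name are placeholders; assumes a KV v2 engine at the default "secret/" mount.
import os
import hvac

vault = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ["VAULT_TOKEN"],
)

# Key management has "leaked out" of the app into Vault; the app only reads.
resp = vault.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = resp["data"]["data"]["password"]   # assumes a "password" key was stored

print("fetched database credential from the dedicated secrets layer")
```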
So we've started to see that, and I think one way of looking at that is it's a cycle. It's the sort of bundling and un-bundling aspect. >> Some of the granular level services are getting a really low level- >> Yeah, it's sort of like bundling and un-bundling, and so we've got all this un-bundling happening out of application code to these dedicated layers. The bundling back may happen. I've actually seen a few Bay Area companies go like, we're going back to the monolith, 'cause it actually gives us lots of efficiencies in things that we thought were trade-offs before. We're actually comfortable with a big monorepo, and one or two core languages, and we're going to build everything into these big binaries, and everyone's going to sort of live in the same source code repository and break things out through folders or whatever. There's a lot of really interesting things. I don't want to say we're sort of clear on where this bundling, un-bundling is happening, but I do think that there's a lot of un-bundling happening right now. And there's a lot of opportunity there. >> And the open source, obviously, driving it. So final question for you, how many deals have you done? Can you talk a little bit about the firm? And exciting things and plans that you have going forward. >> Yeah, we're going to be making a lot of announcements over the next few months, and we're, I guess, extremely thrilled. I don't want to say overwhelmed, 'cause we're able to handle all of the volume and inquiries and inbound interest. We're really honored and thrilled by the reception over the last couple weeks since announcing the firm on the first of October, sort of before the Hortonworks Cloudera merger, the JFrog funding announcement that week, the Elastic IPO. Just a lot of really awesome things happened that week. This is obviously before Microsoft open sourced all their patents. We'll be announcing more investments that we've made. We announced our first one on the first of October as well, with the announcement of the firm. We've made a good number of investments. We're not able to talk too much about our first initiative, but you'll hear more about that in the near future. >> Well, we're excited. I think the timing's perfect. I know you've been working on this kind of vision for a while, and I think it's really great timing. Congratulations, JJ. >> Thank you so much. Thanks for having me on. >> Joseph Jacks, also known as JJ, founder and general partner of OSS Capital, Open Source Software Capital, co-founder of KubeCon, which is now part of the CNCF. A real great player in the community and the ecosystem, great to have him on theCUBE, thanks for coming in. I'm John Furrier, thanks for watching. >> Thanks, John. (bright symphony music)
Andrius Benokraitis, Red Hat - Red Hat Summit 2017
>> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2017. Brought to you by Red Hat. >> Welcome back to theCUBE's coverage, I'm Rebecca Knight, your host, here with Stu Miniman. Our guest now is Andrius Benokraitis, he is the Principal Product Manager at Ansible, Red Hat Network Automation, thanks so much, Andrius. >> Thanks for having me, I appreciate it. >> This is your first time on the program. >> Andrius: First time. >> We're nice, >> Really nervous, so, okay. >> We don't bite. >> Start a little bit with, you're new to the company relatively, >> Andrius: Relatively. >> a networking guy by background, can you give us a little bit about your background. >> Sure, I mean, I actually started at Red Hat in 2003, and then did about four or five jobs there for about 11 years. And then jumped, went to a startup named Cumulus Networks for about two years. Great crew, and then, now I'm at Ansible, been there since about December, working on the network automation use case for Ansible. >> Alright, so networking has a little bit of coverage here, I remember, you know, something like the OpenDaylight stuff, and actually there are a couple of Red Hatters that I interviewed at one show who ended up forming a company that got bought by Docker, so you know, there's definitely networking people, but maybe give us a broad view of where networking fits into this stuff that you're working on specifically. >> Yeah, sure thing. I think it's interesting to point out that as everything started on the compute side, and everything started to get disaggregated, the networking side has come along for the ride, per se. It's been a little bit behind. When we talk about networking, a lot of people just automatically think that's SDN. And we're actually trying to think a little bit lower level, so layer one, layer two, layer three, so switching, routing, firewalls, load balancers, all those things are still required in the data center. And when people started using Ansible, it started five years ago on the compute side, a lot of the people started saying, I need to run the whole rack, and I'm not a CCIE, and I don't really know what to do there, but I've been thrown in to do something, I'm a cloud admin, the new title, right. I have to run the network, so what do I do? I don't know anything about networking, I'm just trying to be good enough. Well, I know Ansible, so why don't I just treat switches like servers, and just treat them like what I know, they just have a lot more interfaces, but just treat them that way. So a lot of the expertise came from the ground up with the open source model and said, this is the new use case. >> Well, JR Rivers, the founder of Cumulus, it's like networking will just be a Linux operating model, you know, extended to the network, which is always like, hey, sounds like a company like Red Hat should be doing that kind of stuff. >> Exactly, it's interesting to see a Bash prompt in the networking world, right, it's familiar to a lot of people in the devops space, absolutely. >> So it's a very rapidly changing time, as we know, in this digital computing age. The theme of this conference is the power of the individual, celebrating that individual, the developer, empowering the developers to take risks, be able to fail, make changes, modify. You're not a developer, but you manage developers, you lead developers, how do you work on creating that context that Jim Whitehurst talked about today?
>> I think it starts with the true empowerment. The majority of the networking platforms are still proprietary and walled off, walled gardens, they're black boxes, you can't really do much with them, but you still have the ability to SSH into them, you have familiar terms and concepts from the server side on the networking side. So as long as you have SSH into the box and you know your CLI commands to make changes, you can utilize that as part of Ansible to generate larger abstractions, to use the playbooks in order to build out your data center, with the terms and the lexicon of YAML, the language of Ansible, things that you already know, and utilizing that and going further. >> Can you speak to us a little bit about customers, you know, what's holding them back, how are you guys moving them forward to the more agile development space? >> Our customers are mostly brownfield, they're trying to extend what they already have. They have all their gear, they have everything that they need, but they're trying to do things better. >> I don't find greenfield customers when it comes to the network side of the house, I mean, we've all got what we have, and we know that IT's always additive, so, I mean, that's got to be a challenge. >> It's a huge challenge. >> Something you can help with, right? >> It's a huge challenge, and I think from the network operators and network engineers, a lot of them are saying, again, they're looking at their friends on the compute side, and they can spin up VMs and provision hardware instantaneously, but why does it have to take four to six weeks to provision a VLAN or get a VLAN added to a network switch? That sounds ridiculous, so a lot of the network engineers and operators are saying, well, I think I can be as agile as you, so we can actually work together, using a common framework, a common language with Ansible, and we can get things done, and we can get all of this stuff I hate doing done, and we don't have to do that anymore, we can worry about more important things in our network, like designing the next big thing, if you want to do BGP, design your BGP infrastructure, or you want to move from a layer two to a layer three or an SDN solution. >> I love that you talk about everybody, kind of the software wave and breaking down silos, network and storage people are like, oh my God, you're taking my job away. >> Exactly, completely, no, we're not taking your job. We are augmenting what you already have. We're giving you more tools in your tool belt to do better at your job, and that's truly it. People can be smarter, so if you want to add a VLAN, that can be a code snippet created by the sysadmin, it can be in Git, and then the network engineer can say, oh yeah, that looks good, and then I just say, submit. What we see today with some of the customers is, yeah, I want to automate, I really want to automate, and you say, great, let's automate. But then you peel back the onion, and you start seeing that, well, how are you managing your inventory, how are you managing your endpoints? And they're like, I have a spreadsheet? And you're like, as a networking guy I guess you, (excited clamoring) >> Networking is scary for a lot, >> It's super scary, yeah. >> So how do you break that down? >> You do what you can, you do it in small pieces, we're not trying to change the world, we're not trying to say you're going to go 100% devops in the network.
Start small, start with something, like again, that you really hate doing, if you want to change something really low risk, things you really hate doing, just start small, low risk things. And then you can propagate that, and as you start getting confidence, and you start getting the knowledge, and the teams, and everyone starts, everyone has to be bought in, by the way. This is not something you just go in and say, go do it. You have to have everyone on board, the entire organization, it can't be bottom up, it can't be top down, everyone has to be on board. >> And Andrius, when I talk to people in the networking space, risk is the number one thing they're worried about. They buy on risk, they build on risk, and the problem we have with the networks is there are too many things that are manual. So if I'm typing in some, you know, 16 digit hexadecimal code >> From Notepad, manually, you're copying and pasting >> from like a spreadsheet. Copying and pasting, or gosh, so things like that, the room for error is too high. So these are the things that we need to be able to automate, so that we don't have somebody that's tired or just, wait, was that a one or an L or an I? I don't know. So we understand that it actually should be able to reduce risk, increase security, all the things that the business is telling you. >> All these network vendors have virtual instances. You can do all your testing and deployment, all your testing in your infrastructure, and you can do everything in Jenkins and have all your networking switches virtually, you can have your whole data center in a virtual environment if you want. So if you talk about lower risk, instead of just copying and pasting, and, oh, was that a slash 24 or a slash 16, oops, I mean, that looked right, but it was wrong, but did it go through test? It probably didn't. And then someone's going to get paged at three in the morning, and a router's down, an edge router's down, and you're toast. So enabling the full devops cycle of continuous integration, so bringing in the same concepts that you have on the compute side, testing, changes, in a full cycle, and then doing that. >> You talked about the importance of buy in and also the difficulties of getting buy in. How much of that is an impediment to the innovation process, but one of the things we've been talking about is, can big companies innovate? What are the challenges that you see, and how do you overcome them? >> That is the number one, that is the biggest issue right now in the network space, is getting buy in. Whether it's someone who has done it on their own, someone can just install Ansible and do something, and then deploy a switch, but if they leave the company and there's no remediation, if it's not in the MOP, if it's not in the Method of Procedure, no one knows about it. So it has to be part of your, you want to keep all the things you have, all the good things you have today with your checks and balances in the networking, and the CIOs and the people at the top have to understand, you can keep all that stuff, but you have to buy in to the automation framework, and everyone has to be on board to understand how it fits in, in order to go from where you are today to where you want to be. >> At the show here, what's exciting your customers? You know, give us a little bit of a viewpoint for people that are checking out your stuff, what to expect. >> Well, I think the one thing is they're not used to seeing, they think it's black magic, they think it's just magic.
They're like, I can use the same things for everything? I say, yeah, you can. The development processes, the innovation in the community, you know, for example, if you want a Cisco ACI module, it's in GitHub, it's in Cisco's GitHub, you can just go ahead and do that. Now we're starting to migrate those things into core. So the more that we get innovation in the community, and that we have the vendors and the partners driving it, and you're seeing that today, you know, we have F5 here, we have Cisco, we have Juniper, we have Avi, all those people, you know, they have certified platforms with Ansible, Ansible Core, which is going to be integrated with Ansible Tower, we have full buy in from them. They want to meet with us and say, how can we do better? How can we innovate with you to drive the next-gen data centers with our products? >> You talked about yourself as a boomerang employee, what is the value in that, and are you seeing a lot of colleagues who are bouncing around and then coming back from ... >> Absolutely, I think pre-acquisition Ansible, the vast majority of the people, I believe, were ex-Red Hatters that went to Ansible. So it's really nice to come back home and understand, the people that left, that came back, to understand already what the, >> And people feel that way, it's a coming home? >> Yeah, it's a coming home, it really is. They understand, you know, they came back, they understood the values of open source and the culture. Again, I started at Red Hat in 2003, I see the great things, I see new people getting hired, and I see the same things I saw back then, 2003, 2004, with all the great things that people are doing, and the culture. You know, Jim's done a great job at keeping the culture how it is, even way back then when there were only 400 people when I started. >> Andrius, extend that culture, I think about the network community and open source, and you know, you talk about, there's risk there, and you know, you think about, I grew up with kind of an enterprise, infrastructure mentality, it's like, don't touch it, don't play with it. We always joked, I got everything there, really don't walk by it, and definitely, you know, some zip tie or duct tape's going to come apart. Are we getting better, is networking embracing this? >> Yes, for sure. I think the nice thing is you start seeing these communities pop up. You're starting to see network operators and engineers, they've been historically, if they don't know the answer, they won't go find it. They kind of may be shy, shy to ask for help, per se. >> If it wasn't on their certification, >> Exactly. >> They weren't going to do it. >> If it wasn't there, I'm not going to go. We're bringing them in, so we have, whether it's a Slack instance, there are networking communities, network automation communities, just for network automation. And there's one, there's an Ansible channel on the Network to Code Slack, that has almost 800 people on it. So they're coming, and now they have a place, they have a safe place to ask questions. They don't have to kind of guess or say, you know what, I'm not going to do that. And now they have a safe place for network engineers to get into the net devops space.
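To ground the "treat switches like servers" and "a VLAN change is a code snippet in Git" ideas from earlier in the conversation, here is a minimal Python sketch using Netmiko. It is not Ansible itself (Ansible's network modules wrap roughly this same kind of SSH/CLI session), and the platform, hostname, credentials, and VLAN values are all hypothetical.

```python
# Minimal sketch of "treat switches like servers": SSH in and push the same CLI
# lines you would otherwise paste by hand, with the snippet kept under version
# control. Platform, host, credentials, and VLAN values are hypothetical.
from netmiko import ConnectHandler

SWITCH = {
    "device_type": "cisco_ios",       # assumed platform
    "host": "switch01.example.net",   # placeholder hostname
    "username": "netops",
    "password": "PLACEHOLDER",
}

VLAN_SNIPPET = [                      # the reviewed, Git-tracked change
    "vlan 20",
    "name web-servers",
]

conn = ConnectHandler(**SWITCH)
print(conn.send_config_set(VLAN_SNIPPET))   # echo what the device accepted
conn.disconnect()
```

The snippet itself is what lives in Git and gets reviewed by the network engineer; the run is the low-risk, repeatable part.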
So the data can be the actual playbooks themselves, the actual, the golden master images, so you can pull configs from switches, and you can store them and you can use them for continuous compliance. You can say, you know, a rogue engineer might make a change, you know, configuration drift happens. But you need to be able to make those comparisons to the other versions. So we're utilizing things like Git, so you're data strategy can be in the cloud, it can be similar on your side, you can do Stash locally. For part of the operations piece, you can use that. A second piece is, log aggregation is a big piece of the Ansible. So when you actually want to make sure that a change happens, that it's been successful, and that you want to ensure continuous compliance, all that data has to go somewhere, right? So you can utilize Ansible Tower as an aggregator, you can go off using the integrations like Splunk and some other log aggregation connectors with Ansible Tower to help utilize your data strategy with the partners that are really the driving, the people that know data and data structures, so we can use them. >> And one of the other issues is the building the confidence to make decisions with all the data, are you working on that too with your team? >> Yes, we are working with that, and that's part of the larger tower organization, so it goes beyond networking. So, whatever networking gets, everyone else gets. When we started developing Ansible Core and the community and Ansible Tower in-house, we think about networking and we think about Windows, that's a huge opportunity there, you know, we're talking about AWS in the cloud. So cloud instances, these are all endpoints that Ansible can manage, and it's not just networking, so we have to make sure that all of the pieces, all of the endpoints can be managed directly. Everyone benefits from that. >> Andrius thank you so much for your time we appreciate it. >> Thanks again for having me. >> I'm Rebecca Knight for Stu Miniman, thank you very much for joining us. We'll be back after this.