
Search Results for Open Compute Project conference:

Sumit Puri, Liqid | CUBEConversation, March 2019


 

(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at our Palo Alto studios having a CUBE Conversation, we're just about ready for the madness of the conference season to start in a few months, so it's nice to have some time to have things a little calmer in the studio, and we're excited to have a new company, I guess they're not that new, but they're relatively new, they've been working on a really interesting technology around infrastructure, and we welcome to the studio, first time, I think, Sumit Puri, CEO and co-founder of Liqid, welcome. >> Thank you guys, very very happy to be here. >> And joined by our big brain, David Floyer, of course, the CTO and co-founder of Wikibon and knows all things infrastructure. Dave, always good to see you. >> It's so good to see you. >> All right, so let's jump into this, Sumit, give us the basic overview of Liqid, what are you guys all about, little bit of the company background, how long you've been around. >> No, absolutely, absolutely, Liqid is a software-defined infrastructure company, the technology that we've developed is referred to as composable infrastructure, think, dynamic infrastructure, and what we do, is we go and we turn data center resources from statically-configured boxes to dynamic, agile infrastructure. Our core technology is two-part. Number 1, we have a fabric layer, that allows you to interconnect off-the-shelf hardware, but more importantly, we have a software layer, that allows you to orchestrate, or dynamically configure servers, at the bare metal. >> So, who are you selling these solutions to? What's your market, what's the business case for this solution? >> Absolutely, so first, I guess, let me explain a little bit about what we mean by composable infrastructure. Rather than building servers by plugging devices into the sockets of the motherboard, with composability it's all about pools, or trays, of resources. A tray of CPUs, a tray of SSDs, a tray of GPUs, a tray of networking devices, instead of plugging those into a motherboard, we connect those into a fabric switch, and then we come in with our software, and we orchestrate, or recompose, at the bare metal. Grab this CPU, grab those four SSDs, these eight GPUs, and build me a server, just like you were plugging devices into the motherboard, except you're defining it in software, on the other side, you're getting delivered infrastructure of any size, shape, or ratio that you want. Except that infrastructure is dynamic, when we need another GPU in our server, we don't send a guy with a cart to plug the device in, we reprogram the fabric and add or remove devices as required by the application. We give you all the flexibility that you would get from public cloud, on the infrastructure that you are forced to own. And now, to answer your question of where we find a natural fit for our solution, one primary area is obviously cloud. If you're building a cloud environment, whether you're providing cloud as a service or whether you're providing cloud to your internal customers, building a more dynamic, agile cloud is what we enable.
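To make the composition flow Puri describes a little more concrete, here is a minimal Python sketch of a hypothetical fabric manager: claim devices from pools, bind them into a bare-metal server, hot-add a GPU, and return everything to the general pool. The `FabricManager` class, its method names, and the device labels are all assumptions for illustration only; this is not Liqid's actual API.

```python
# Hypothetical composition flow over a disaggregated PCIe fabric.
# The FabricManager class and its methods are illustrative only --
# they are not Liqid's actual product interface.

from dataclasses import dataclass, field


@dataclass
class FabricManager:
    """Tracks free devices in resource pools and which server owns what."""
    pools: dict                               # e.g. {"cpu": [...], "gpu": [...]}
    servers: dict = field(default_factory=dict)

    def compose(self, name, cpu=1, ssd=0, gpu=0, nic=0):
        """Claim devices from the pools and bind them into a bare-metal server."""
        want = {"cpu": cpu, "ssd": ssd, "gpu": gpu, "nic": nic}
        claimed = {}
        for kind, count in want.items():
            pool = self.pools.get(kind, [])
            if len(pool) < count:
                raise RuntimeError(f"not enough free {kind} devices for {name}")
            claimed[kind] = [pool.pop() for _ in range(count)]
        self.servers[name] = claimed          # "reprogram the fabric"
        return claimed

    def add_gpu(self, name):
        """Hot-add one more GPU to an existing composed server."""
        gpu = self.pools["gpu"].pop()
        self.servers[name]["gpu"].append(gpu)
        return gpu

    def decompose(self, name):
        """Return a server's devices to the free pools."""
        for kind, devices in self.servers.pop(name).items():
            self.pools[kind].extend(devices)


# Example: compose a server with 1 CPU, 4 SSDs and 8 GPUs, then grow it.
fabric = FabricManager(pools={
    "cpu": [f"cpu{i}" for i in range(16)],
    "ssd": [f"ssd{i}" for i in range(24)],
    "gpu": [f"gpu{i}" for i in range(16)],
    "nic": [f"nic{i}" for i in range(8)],
})
fabric.compose("ai-train-01", cpu=1, ssd=4, gpu=8, nic=1)
fabric.add_gpu("ai-train-01")     # nobody walks a cart to the rack
print(len(fabric.servers["ai-train-01"]["gpu"]), "GPUs attached")
fabric.decompose("ai-train-01")   # resources go back to the general pool
```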
>> So, is the use case more just to use your available resources and reconfigure them into something that basically runs that way for a while, or are customers more using it to dynamically reconfigure those resources based on, say, a temporary workload, is kind of a classic cloud example, where you need a bunch of something now, but not necessarily forever. >> Sure. The way we look at the world is very much around resource utilization. I'm buying this very expensive hardware, I'm deploying it into my data center, typical resource utilization is very low, below 20%, right? So what we enable is the ability to get better resource utilization out of the hardware that you're deploying inside your data center. If we can take a resource that's utilized 20% of the time because it's deployed as a static element inside of a box and we can raise the utilization to 40%, does that mean we are buying less hardware inside of our data center? Our argument is yes, if we can take rack scale efficiency from 20% to 40%, our belief is we can do the same amount of work with less hardware. >> So it's a fairly simple business case, then. To do that. So who's your competition in this area? Is it people like HP or Intel, or, >> That's a great question, I think both of those are interesting companies, I think HPE is the 800-pound gorilla in this term called composability and we take a slightly different approach than the way that those guys take it, I think first and foremost, the way that we're different is because we're disaggregated, right? When we sell you trays of resources, we'll sell you a tray of SSDs or a tray of GPUs, where HP takes a converged solution, right? Every time I'm buying resources for my composable rack, I'm paying for CPUs, SSDs, GPUs, all of those devices as a converged resource, so they are converged, we are disaggregated. We are bare metal, we have a PCIe-based fabric up and down the rack, they are an Ethernet-based fabric, there are no Ethernet SSDs, there are no Ethernet GPUs, at least today, so by using Ethernet as your fabric, they're forced to do virtualization and protocol translation, so they are not truly bare metal. We are bare metal; we view them more as a virtualized solution. We're an open ecosystem, we're hardware-agnostic, right? We allow our customers to use whatever hardware that they're using in their environment today. Once you've kind of gone down that HP route, it's very much a closed environment. >> So what about some of the customers that you've got? Which sort of industries, which sort of customers, I presume this is for the larger types of customers, in general, but say a little bit about where you're making a difference. >> No, absolutely, right? So, obviously at scale, composability has even more benefit than in smaller deployments, I'll give you just a couple of use case examples. Number one, we're working with a transportation company, and what happens with them at 5 p.m. is actually very different than what happens at 2 a.m., and the model that they have today is a bunch of static boxes and they're playing a game of workload matching. If the workload that comes in fits the appropriate box, then the world is good. If the workload that comes in ends up on a machine that's oversized, then resources are being wasted, and what they said was, "We want to take a new approach. We want to study the workload as it comes in, dynamically spin up small, medium, large, depending on what that workload requires, and as soon as that workload is done, free the resources back into the general pool."
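Puri's utilization argument a few lines up is simple arithmetic, and a quick back-of-the-envelope sketch makes the TCO claim concrete. The server count and cost below are invented for illustration; only the 20% and 40% utilization figures come from the conversation.

```python
# Back-of-the-envelope version of the utilization argument:
# 100 servers at 20% utilization deliver the same useful work as
# 50 servers at 40%, so doubling utilization roughly halves the
# hardware needed for a fixed workload. Server count and cost are
# illustrative assumptions, not figures from the interview.

servers_static = 100
cost_per_server = 25_000          # assumed, for illustration only

work_done = servers_static * 0.20            # useful work at 20% utilization
servers_composable = work_done / 0.40        # servers needed at 40% utilization

print(f"static rack:     {servers_static} servers, "
      f"${servers_static * cost_per_server:,}")
print(f"composable rack: {servers_composable:.0f} servers, "
      f"${servers_composable * cost_per_server:,.0f}")
# static rack:     100 servers, $2,500,000
# composable rack: 50 servers, $1,250,000
```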
"We want to study the workload as it comes in, "dynamically spin up small, medium, large, "depending on what that workload requires, "and as soon as that workload is done, "free the resources back into the general pool." Right, so that's one customer, by taking a dynamic approach, they're changing the TCO argument inside of their environment. And for them, it's not a matter of am I going dynamic or am I going static, everyone knows dynamic infrastructure is better, no one says, "Give me the static stuff." For them, it's am I going public cloud, or am I going on prem. That's really the question, so what we provide is public cloud is very easy, but when you start thinking about next-generation workloads, things that leverage GPUs and FPGAs, those instantiations on public cloud are just not very cheap. So we give you all of that flexibility that you're getting on public cloud, but we save you money by giving you that capability on prem. So that's use case number one. Another use case is very exciting for us, we're working with a studio down in southern California, and they leverage these NVIDIA V100 GPUs. During the daytime, they give those GPUs to their AI engineers, when the AI engineers go home at night, they reprogram the fabric and they use those same GPUs for rendering workloads. They've taken $50,000 worth of hardware and they've doubled the utilization of that hardware. >> The other use case we talked about before we turned the cameras on there, was pretty interesting, was kind of multiple workloads against the same data set, over a series of time where you want to apply different resources. I wonder if you can unpack that a little bit because I think that's a really interesting one that we don't hear a lot about. So, we would say about 60 plus to 70% of our deployments in one way or another touch the realm of AI. AI is actually not an event, AI is a workflow, what do we do? First we ingest data, that's very networking-centric. Then we scrub and we clean the data, that's actually CPU-centric. Then we're running inference, and then we're running training, that's GPU-centric. Data has gravity, right? It's very difficult to move petabytes of data around, so what we enable is the composable AI platform, leave data at the center of the universe, reorchestrate your compute, networking, GPU resources around the data. That's the way that we believe that AI is approached. >> So we're looking forward in the future. What are you seeing where you can make a difference in this? I mean, a lot of changes happening, there's Gen 4 coming out in PCIe, there's GPUs which are moving down to the edge, how do see, where do you see you're going to make a difference, over the next few years. >> That's a great question. So I think there's 2 parts to look at, right? Number one is the physical layer, right? Today we build or we compose based upon PCIe Gen 3 because for the first time in the data center, everything is speaking a common language. When SSDs moved to NVMe, you had SSDs, network cards, GPUs, CPUs, all speaking a common language which was PCIe. So that's why we've chosen to build our fabric on this common interconnect, because that's how we enable bare metal orchestration without translation and virtualization, right? Today, it's PCIe Gen 3, as the industry moves forward, Gen 4 is coming. Gen 4 is here. We've actually announced our first PCIe Gen 4 products already, and by the end of this year, Gen 4 will become extremely relevant into the market. 
Our software has been architected from the beginning to be physical layer-agnostic, so whether we're talking PCIe Gen 3, PCIe Gen 4, in the future something referred to as Gen Z, (laughing) it doesn't matter for us, we will support all of those physical layers. For us it's about the software orchestration. >> I would imagine, too, like TPUs and other physical units that are going to be introduced in the system, too, you're architected to be able to take those, new-- >> Today, today we're doing CPUs, GPUs, NVMe devices and we're doing NICs. We just made an announcement, now we're orchestrating Optane memory with Intel. We've made an announcement with Xilinx where we're orchestrating FPGAs with Xilinx. So this will continue, we'll continue to find more and more of the resources that we'll be able to orchestrate for a very simple reason, everything has a common interconnect, and that common interconnect is PCIe. >> So this is an exciting time in your existence. Where are you? I mean, how far along are you to becoming the standard in this industry? >> Yeah, no, that's a great question, and I think what we get asked a lot is what company we're most similar to, or most like, at the early stage. And what we say is we, a lot of the time, compare ourselves to VMware, right? VMware is the hypervisor for the virtualization layer. We view ourselves as that physical hypervisor, right? We do for physical infrastructure what VMware is doing for virtualized environments. And just like VMware has enabled many of the market players to get virtualized, our hope is we're going to enable many of the market players to become composable. We're very excited about our partnership with Inspur, just recently we've announced, they're the number three server vendor in the world, we've announced an AI-centric rack, which leverages the servers and the storage solutions from Inspur tied to our fabric to deliver a composable AI platform. >> That's great. >> Yeah, and it seems like the market for cloud service providers, 'cause we always talk about the big ones, but there's a lot of them, all over the world, is a perfect use case for you, because now they can actually offer the benefits of cloud flexibility by leveraging your infrastructure to get more miles out of their investments into their backend. >> Absolutely, cloud, cloud service providers, and private cloud, that's a big market and opportunity for us, and we're not necessarily chasing the big seven hyperscalers, right? We'd love to partner with them, but for us, there's 300 other companies out there that can use the benefit of our technology. So they don't necessarily have the R&D dollars available that some of the big guys have, so we come in with our technology and we enable those cloud service providers to be more agile, to be more competitive. >> All right, Sumit, before we let you go, season's coming up, we were just at RSA yesterday, big shows comin' up in May, where you guys, are we going to cross paths over the next several weeks or months?
>> No, absolutely, we got a handful of shows coming up, very exciting season for us, we're going to be at the OCP, the Open Compute Project conference, actually next week, and then right after that, we're going to be at the NVIDIA GPU Technology Conference, we're going to have a booth at both of those shows, and we're going to be doing live demos of our composable platform, and then at the end of April, we're going to be at the Dell Technology World conference in Las Vegas, where we're going to have a large booth and we're going to be doing some very exciting demos with the Dell team. >> Sumit, thanks for taking a few minutes out of your day to tell us a story, it's pretty exciting stuff, 'cause this whole flexibility is such an important piece of the whole cloud value proposition, and you guys are delivering it all over the place. >> Well, thank you guys for making the time today, I was excited to be here, thank you. >> All right, David, always good to see you, >> Good to see you. >> Smart man, alright, I'm Jeff Frick, you're watching theCUBE from theCUBE studios in Palo Alto, thanks for watching, we'll see you next time. (upbeat music)

Published Date : Mar 8 2019

SUMMARY :

Jeff Frick and Wikibon CTO David Floyer sit down with Sumit Puri, CEO and co-founder of Liqid, to talk composable infrastructure: pools of CPUs, SSDs, GPUs, and NICs connected over a PCIe fabric and composed into bare-metal servers in software. Puri contrasts Liqid's disaggregated, bare-metal, hardware-agnostic approach with HPE's converged, Ethernet-based composability, and argues that raising resource utilization from roughly 20% to 40% lets customers do the same work with less hardware. Use cases include a transportation company that spins infrastructure up and down with demand, a southern California studio that shares NVIDIA V100 GPUs between AI engineers by day and rendering by night, composable AI pipelines, and an AI-centric rack with Inspur. Liqid will be at OCP, the NVIDIA GPU Technology Conference, and Dell Technology World in the weeks ahead.


Day 3 Open | Red Hat Summit 2017


 

>> (upbeat music) Live from Boston, Massachusetts. It's theCube! Covering Red Hat Summit 2017. Brought to you by Red Hat. >> It is day three of the Red Hat Summit, here in Boston, Massachusetts. I'm Rebecca Knight. Along with Stu Miniman. We are wrapping up this conference, Stu. We just had the final keynote of the morning. Before the cameras were rolling, you were teasing me a little bit that you have more scoop on the AWS deal. I'm interested to hear what you learned. >> (Stu) Yeah, Rebecca. First of all, may the fourth be with you. >> (Rebecca) Well, thank you. Of course, yes. And also with you. >> (Stu) Always. >> Yeah. (giggles) >> (Stu) So, day three of the keynote. They started out with a little bit of fun. They gave out some "May The Fourth Be With You" t-shirts. They had a little Star Wars duel that I was Periscoping this morning. So, love their geeking out. I've got my Millennium Falcon cuff links on. >> (Rebecca) You're into it. >> I saw a bunch of guys wearing t-shirts >> (Rebecca) Princess Leia was walking around! >> Princess Leia was walking around. There were storm troopers there. >> (Rebecca) Which is a little sad to see, but yes. >> (Stu) Uh, yeah. Carrie Fisher. >> Yes. >> Absolutely, but the Amazon stuff. Sure, I think this is the biggest news coming out of the show. I've said this a number of times. And we're still kind of teasing out exactly what it is. 'Cause, partially really this is still being built out. It's not going to be shipping until later this year. So things like how pricing works. We're still going to get there. But there's some people that were like "Oh wait!" "OpenShift can be in AWS, that's great!" "But then I can do AWS services on premises." Well, what that doesn't mean, of course is that I don't have everything that Amazon does packaged up into a nice little container. We understand how computer coding works. And even with open-source and how we can make things server-less. And it's not like I can take everything that everybody says and shove it in my data center. It's just not feasible. What that means though, is it is the same applications that I can run. It's running in OpenShift. And really, there's the hooks and the APIs to make sure that I can leverage services that are used in AWS. Of course, from my standpoint I'm like "OK!" So, tell me a little bit about what latency there's going to be between those services. But it will be well understood as we build these what it's going to be used for. Certain use cases. We already talked to Optim. I was really excited about how they could do this for their environment. So, it's something we expect to be talking about throughout the rest of the year. And by the time we get to AWS Reinvent the week after Thanksgiving, I expect we'll have a lot more detail. So, looking forward to that. >> (Rebecca) And it will be rolled out too. So we'll have a really good sense of how it's working in the marketplace. >> So other thoughts on the keynote. I mean, one of the things that really struck me was talking about open-source. The history of open-source. It started because of a need to license existing technologies in a cheaper way. But then, really, the point that was made is that open-source taught tech how to collaborate. And then tech taught the world how to collaborate. Because it really was the model for what we're seeing with crowdsourcing solutions to problems facing education, climate change, the developing world.
So I think that that is really something that Red Hat has done really well. In terms of highlighting how open-source is attacking many of the world's most pressing problems. >> (Stu) Yeah, Rebecca I agree. We talked with Jim Whitehurst and watched him in the keynotes in previous days. And talked about communities and innovation and how that works. And in a lot of tech conferences it's like "Okay, what are the business outcomes?" And here it's, "Well, how are we helping the greater good?" "How are we helping education?" It was great to see kids that are coding and doing some cool things. And they're like, "Oh yeah, I've done Java and all these other things." And the Red Hat guys were like, "Hey >> (Rebecca) We're hiring. Yeah. (giggles) >> can we go hire this seventh grader?" Had the open-source hardware initiative that they were talking about. And how they can do that. Everything from healthcare, to a device that used to be $10,000 to be able to put together the genome. Now I can buy it on Amazon for, what was it, like six, seven hundred dollars, and put it together myself. So, open-source and hardware are something we've been keeping an eye on. We've been at the Open Compute Project event. Which Facebook launched. But, these other initiatives. They had... It was funny, she said like, "There's the internet of things." And they have the thing called "The Thing" that you can tie into other pieces. There was another one that weaved this into fabric. And we can add sensors and do that. We know healthcare, of course. Lots of open-source initiatives. So, lots of places where open-source communities and projects are helping proliferate and make greater good and make the world a greater place. Flattening the world in many cases too. So, it was exciting to see. >> And the woman from the Open-Source Association. She made this great point. And she wasn't trying to be flip. But she said one of our questions is: Are you emotionally ready to be part of this community? And I thought that that was so interesting because it is such a different perspective. Particularly from the product side. Where, "This is my IP. This is our idea. This is our lifeblood. And this is how we're going to make money." But this idea of, no, you need to be willing to share. You need to be willing to be copied. And this is about how we build ideas and build the next great things. >> (Stu) Yeah, if you look at the history of the internet, there was always this question, right: is this something where I have to share information, or do we build collaboration? You know, back to the old bulletin board days. Through the homebrew computing clubs. Some of the great progress that we've made in technology and then technology enabling beyond have been because we can work in a group. We can work... Build on what everyone else has done. And that's always how science is done. And open-source is just trying to take us to the next level. >> Right. Right. Right. And in terms of one of the last... One of the last things that they featured in the keynote was what's going on at the MIT media lab. Changing the face of agriculture. And how they are coding climate. And how they are coding plant nutrition. And really this is just going to have such a big change in how we consume food and where food is grown. The nutrients we derive from fruit. I was really blown away by the fact that the average apple we eat in the grocery store has been around for 14 months. Ew, ew! (laughs) So, I mean, I'm just excited about what they're doing. >> Yeah, absolutely right.
If we can help make sure people get clean water. Make sure people have availability of food. Shorten those cycles. >> (Rebecca) Right, right. Exactly. >> The amount of information, data. The whole Farm to Table Initiative. A lot of times data is involved in that. >> (Rebecca) Yeah. It's not necessarily just the stuff that, you know, is grown on the roof next door. Or in the farm a block away. Look at a local food chain that's everywhere, like Chipotle. You know? >> (Rebecca) Right. >> They use data to be able to work with local farmers. Get what they can. Try to help change some of the culture pieces to bring that in. And then they ended the keynote talking more about innovation award winners. You and I have had the chance to interview a bunch of them. It's a program I really like. And talking to some of the Red Hatters there actually was some focus to work with... Talk to governments. Talk to a lot of internationals. Because when they started the program a few years ago, it started out very U.S.-centric. So, they said "Yeah." It was a little bit of a coincidence that this year it's all international. Except for RackSpace. But, we should be blind when we think about who has great ideas and good innovation. And at this conference, I bumped into a lot of people internationally. Talked to a few people coming back from the Red Sox game. And it was like, "How was it?" And they were like, "Well, I got a hotdog and I understood this. But that whole ball and thing flying around, I don't get it." And things like that. >> So, they're learning about code but also baseball. So this is >> (Stu) Yeah, what's your take on the global community that you've seen at the show this week? >> (Rebecca) Well, as you've said, there are representatives from 70 countries here. So this really does feel like the United Nations of open-source. I think what is fascinating is that we're here in the states. And so we think about these hotbeds of technological innovation. We're here in Boston. Of course there's Silicon Valley. Then there's North Carolina, where Red Hat's based. Atlanta, Austin, Seattle, of course. So all these places where we see so much innovation and technological progress taking place here in the states. And so, it can be easy to forget that there are also pockets all over Europe. All over South America. In Africa, doing cool things with technology. And I think that that is also... When we get back to one of the sub themes of this conference... I mean, it's not a sub theme. It is the theme. About how we work today. How we share ideas. How we collaborate. And how we manage and inspire people to do their best work. I think that that is what I'd like to dig into a little today. If we can. And see how it is different in these various countries. >> Yeah, and this show, what I like is, it's the 13th year of the show, it started out going to a few locations. Now it's very stable. Next year, they'll be back in San Francisco. The year after, they'll be back here in Boston. They've got the new Boston office opening up within walking distance of where we are. Here GE is opening up their big building. I just heard there's lots of startups when I've been walking around the area. Every time I come down to the Sea Port District. It's like, "Wow, look at all the tech." It's like, Log Me In is right down the road. There's this hot little storage company called Wasabi. That's like two blocks away. Really excited but, one last thing back on the international piece. Next week's OpenStack Summit.
I'll be here, doing theCube. And some of the feedback I've been getting this week is like, "Look, the misperception on OpenStack." One of the reasons why people are like, "Oh, the project's floundering, and it's not doing great," is because of the two big use cases. One, the telecommunication space. Which is a small segment of the global population. And two, it's gaining a lot of traction in Europe and in Asia. Whereas, in North America public cloud has kind of pushed it aside a little bit. So, unfortunately the global tech press tends to be very much, "Oh wait, if it's seventy-five percent adoption in North America, that's what we expect. If it's seventy-five percent overseas, it's not happening." So (giggles) it's kind of interesting. >> (Rebecca) Right. And that myopia is really a problem because these are the trends that are shaping our future. >> (Stu) Yeah, yeah. >> So today, I'm also going to be talking to the Women In Tech winners. That's very exciting. One of the women was talking about how she got her idea. Or really, her idea became more formulated, more crystallized, at the Grace Hopper Conference. We, of course, have a great partnership with the Grace Hopper Conference. So, I'm excited to talk to her more about that today too. >> (Stu) Yeah, good lineup. We have a few more partners. Another customer, EasiER AG, who did the keynote yesterday. Looking forward to digging in. Kind of wrapping up all of this. And Rebecca it's been fun doing it with you this week. >> And I'm with you. And may the force... May the fourth be with you. >> And with you. >> (giggles) Thank you, we'll have more today later. From the Red Hat Summit. Here in Boston, I'm Rebecca Knight for Stu Miniman. (upbeat music)

Published Date : May 4 2017

SUMMARY :

Rebecca Knight and Stu Miniman open day three of Red Hat Summit 2017 in Boston. They unpack the Red Hat-AWS announcement, which brings AWS services to OpenShift on premises through hooks and APIs rather than packaging Amazon's services into a container, and revisit keynote themes: how open source taught tech, and then the world, to collaborate; open-source hardware and low-cost genomics devices; and the MIT Media Lab's work on agriculture and food. They also discuss the international makeup of the show, with attendees from 70 countries, the innovation award winners, next week's OpenStack Summit, and the Women in Tech winners.


Ihab Tarazi, Equinix - Open Networking Summit 2017 - #ONS2017 - #theCUBE


 

>> Narrator: Live from Santa Clara, California it's theCUBE. Covering Open Networking Summit 2017. Brought to you by the Linux Foundation. >> Hey welcome back everybody, Jeff Frick here with theCUBE. We're in Santa Clara at the Open Networking Summit 2017. We haven't been here for a couple years. Obviously Open is everywhere. It's in hardware, it's in compute, it's in store, and it's certainly in networking as well. And we're excited to be joined first off by Scott Raynovich who will be co-hosting for the next couple of days. Good to see you again, Scott. >> Good to see you. >> And our next guest is Ihab Tarazi. He's the EVP and CTO of Equinix. Last time we saw Ihab was at Open Compute Project last year, so great to see you again. >> Yeah, thank you very much, good to be here. I really enjoyed the interview last year so thanks for having me again. >> Now you set it at the high bar, so hopefully we can pull it off again. >> We can do it. >> So first off for folks that aren't familiar with Equinix, give them kind of an overview. Because you don't have quite the profile of Amazon and Google and the other cloud providers, but you're a pretty important piece of the infrastructure. >> Ihab: Yeah absolutely. While we're nowhere close to the size of those players, the place we play in the universe is very significant. We are the edge of the cloud, I would say. We enable all these players, they're all our biggest customers. As well, all the networks are our biggest customers. We have over 2,000 clouds in our data centers and over 1,400 networks. We have one of the largest global data center networks. We have 150 data centers and four eMarkets around the world. And that number is going to get a little bigger. Now we've announced the acquisition of Verizon data center assets. So we'll have more data centers and a few more markets. >> I heard about the Verizon acquisition, so congratulations, just adding more infrastructure. But let's unpack it a little bit. Two things I want to dig into. One is you said you have clouds in your data centers. So what do you mean by that? >> Yeah the way the cloud architecture is deployed is that the big cloud providers will have these big data centers where they build them themselves and it hosts the applications. And then they work with an edge for the cloud. Either a caching edge or compute edge, or even a network edge in data centers like ours where they connect to all their enterprise customers and all the networks. So we have a significant number of edges, we have 21 markets around the world. We have just about the big list of names, edges, that you can connect to automatically. From AWS, Google, Microsoft, Salesforce.com, Oracle, anybody else you think of. >> So this is kind of an extension of what we heard back a long time ago with you guys and like Amazon specifically on this direct connect. So you are the edge between somebody else's data center and these giant cloud providers. >> Absolutely. And since the last time we talked, we've added a lot more density. More edge nodes and more markets and more new cloud providers. Everywhere from the SaaS to the infrastructure-as-a-service providers. >> And why should customers care? What's the benefit to your customers for that? >> Yeah the benefit is really significant. These guys want direct access to the cloud for high performance and security. So everybody wants to build the hybrid cloud. Now it's very clear the hybrid cloud is the architecture of choice.
You want to build a hybrid cloud, then you want to deploy in a data center and connect to the cloud. And the second thing that's happening, nobody's using just one cloud. Everybody's doing a multi-cloud. So if you want 40, 50 clouds like most companies do, most CIOs, then you're going to want to be in a data center that has as many as possible. If you're going to go global, connect to multi-cloud and have that proximity, you're going to have a hard time finding somebody like Equinix out there. >> Yeah but I've got a question. You mentioned the Verizon deal. There was a trend for a while where all these big service providers were buying data centers, including AT&T, CenturyLink, and now the trend appears to have reversed. Now they're selling the data centers that they bought. I'd love your insight on that. Why that just wasn't their core competency? Why are they selling them back to people like Equinix? >> Yeah that's a good question. What's happened over time as the cloud materialized, is that the data centers are much more valuable if they're neutral. If you can come in and connect to all the clouds and all the networks, customers are much more likely to come in. And therefore if a data center is owned by a single network, customers are not as likely to want to use it because they want to use all the networks and all the clouds. And our model of neutrality and how we set up exchanges, and how we provide interconnection, and the whole way we do customer service, are the kinds of things people are looking for. >> So you're the Switzerland of the cloud. >> And so the same assets become much more valuable in this new model. >> And I don't know if people understand quite how much direct connection and peer-to-peer, and how much of that's going on, especially in a business-to-business context to provide a much better experience. Versus you know the wild wooly internet of days of old where you're hopping all over the place, Lord knows how many hops you're taking. A lot of that's really been locked down. >> I think the most important step people can think about is by 2020 90% of all the internet, or at least 80 to 90, will be home to the top 10 clouds. Therefore the days of the wild internet, while that continues to be significant, the cloud access and interconnection is very critical, and continues to be even bigger. >> Go ahead. >> So tell us what the logistics are of managing the growth, like, how many data centers a year are you opening, and how much equipment are you moving into these data centers. >> We spend over a billion dollars a year on upgrading, adding capacity, and building new data centers. We usually announce five, six, new ones a year. We usually have 20 plus projects, if not more, active at any time. So we have a very focused process and people across the globe manage this thing. We don't want to go dark in any of our key markets like Washington DC, the D.C. market, or let's say the San Jose, Silicon Valley, etc. Because customers want to come in and continue to add and continue to bring people. And that means not only expanding the existing data centers, but buying land and building more data centers beside it, and continue to expand where we need to. And then every year or so we go into one or two more emerging markets. We went into Dubai a while ago and we continue to develop it. And those become long term investments to continue to build our global infrastructure.
The last few years we've made massive acquisitions between Telecity in Europe, Bit-isle in Japan, and now the Verizon assets that expanded our footprint significantly into new markets, Eastern Europe, gave us bigger markets in places like Tokyo, which helped us get to where we are today. >> One of the themes in networking and cloud in general is that the speed of light is just too damn slow. At the end of the day, stuff's got to travel and it actually takes longer than you would think. So does having all these, increased presence, increased edges, increased physical locations, help you address some of that? Because you've got so many more points kind of into this private network if you will. >> Oh yeah absolutely. The content has become more and more localized by market. And the more you have things like IoT and devices pulling in more data, not all the data needs to go all over the globe. And also there is now jurisdiction and laws that require some of the content to stay. So the market approach that we have is becoming the center of mass for where the data resides. And once the data gets into our data center, the value of the data is how you exchange it with other pieces of information, and increasingly how you make immediate decisions on it, you know with automation and machine learning. So when you go to that environment you need massive capacity, very low latency, to many data warehouses or data lakes, and you want to connect that to the software that can make decisions. So that's how we see the world is evolving now. One thing we see though is that complementing that will be a new edge that will form. A lot of people in this conference were talking about that. A lot of the discussion about the open networks here is how we support the 5G, all the explosion of devices, and what we see that connecting to that dense market approach that we have where the data is housed. >> That's interesting, you just mentioned all the devices, which was going to be my next question. So the internet of things, how will this change the data center edge, as you refer to it? >> Yeah that's the biggest question in the industry, especially for networks. And the same discussion happened at Mobile World Congress here a little while ago. People now believe that there'll be this compute edge, that the network will be a compute edge. Because you want to be able to put compute, keep pushing it out all the way to the edge. And that edge needs to support today's technologies but also all the open wireless spectrum, all the low powered networks, open R which is one of the frequencies for the millimeter frequencies, and also the 5G as you know. So when you add all that up you're going to need this edge to support all the different wireless options, plus some amount of compute, and that problem is very hard to solve without an open source model, which is where a lot of people are here looking for solutions. >> It's interesting because your definition of the edge feels like it's kind of closer to the cloud where there's a lot of conversation, we do a lot of stuff with GE about the edge, which is you know right out there on the device and the sensor. Because as you said depending on the application, depending on the optimization, depending on what you're trying to do, the device is some level of compute and store that's going to be done locally, and some of it will go upstream and get processed and come downstream. But you're talking about a different edge.
Or, you know, do you see you guys extending all the way down to that edge? >> We don't see ourselves extending at this time but definitely it's something we're spending a lot of time analyzing to see what happens. I would say a couple of big stats is that today our edge is maybe 100 milliseconds from devices in a market or a lot less in some cases. The new technology will make that even shorter. So with the new technology like you said, you can't beat the speed of light, but with more direct connections you'll get to 40, 50 milliseconds, which is fantastic for the vast majority of applications people want. There'll be very few applications that need much lower latency all the way down to the sub-10 millisecond. For those somebody like a network would need to put compute at the edge to do some of it. So that world of both types will continue. But even the ones that need the very low latency, for some of the data it still needs to compare it to other sources of data and connect to clouds and networks but some of the data will still come back to our data centers. So I think this is how we see the world evolving but it's early days and a lot of brain power will be spent on that. >> So as you look forward to 2017, what are some of the big items on your plate that you're trying to take down for this calendar year? >> The biggest thing I want on our list is that we have an explosion of software model. Everybody who was a software company now has a software platform. When we were at OCP for example you saw NetApp, they showed their software as an open source. Every single company from security to storage, even networking, are now creating their platform available as software. Well those platforms have no place to go today. They have no deployment model. So one of the things we are working on is how we create a deployment model for this as a service model. And most of them are open source, so it needs decoupling of software and hardware. So we are really actively working with all these to create an open source software and just software in general, ecosystem plus this whole open source hardware. >> So do you guys have a pretty aggressive software division inside Equinix, especially in these open source projects? Or how do you kind of interact with them? >> Our model is to enable the industry. So we have some of our tools but mostly for enabling customers and customer service, as well as some of the basic interconnection we do. The vast majority of all the stuff is our partners, and these are our customers. So our model is to enable them and to connect them to everybody else they need in the ecosystem to succeed and help them set up as a service model. And as the enterprise customers come to our data center, how do they connect to them. So I would say that's one of the most sought after missions when we go to conferences like this. Everybody who announced today is talking to us about how they enable the announcements they make and given our place in the universe, we would be a very key player in enabling that ecosystem. >> Do you have like a special lab where you test these new technologies? Or how do you do that? >> Yeah that's the plan. And we connect this effort to also what we're doing with OCP and Telecom Infrastructure Project where we have a leadership position and highly engaged.
We are creating a lab environment where people can come in and test not only the hardware from TIP and OCP, but also the software from open networking, and many other open source software in general under the Linux Foundation or others. In our situation not only can they test it against each other, but they can test the performance against the entire world. How does this work with the internet, the cloud? And that leads us to deployment and go-to-market models that people are looking for. >> Alright sounds pretty exciting. Equinix, a company that probably handles more of your internet traffic than you ever thought. >> Ihab: That's very true. >> Well thanks again for stopping by. We'll look for you at our next open source show. >> Thank you very much. >> Ihab Tarazi from Equinix. He's Scott Raynovich, I'm Jeff Frick, you're watching theCube from Open Networking Summit 2017, see you next time after this short break. (techno music)

Published Date : Apr 4 2017

SUMMARY :

Jeff Frick and Scott Raynovich talk with Ihab Tarazi, EVP and CTO of Equinix, at Open Networking Summit 2017. Tarazi describes Equinix as the neutral edge of the cloud, with more than 2,000 clouds and 1,400 networks across roughly 150 data centers, and explains why hybrid and multi-cloud architectures, direct cloud connections, and the Verizon data center acquisition matter to customers. He covers the company's growth logistics, the coming compute edge for IoT and 5G, latency expectations as direct connections shorten paths, and Equinix's work with open source efforts like OCP and the Telecom Infrastructure Project, including a lab for testing open hardware and software together.


Raejeanne Skillern | Google Cloud Next 2017


 

>> Hey welcome back everybody. Jeff Frick here with theCUBE, we are on the ground in downtown San Francisco at the Google Next 17 Conference. It's this crazy conference week, and arguably this is the center of all the action. Cloud is big, Google Cloud Platform is really coming out with a major enterprise shift and focus, which they've always had, but now they're really getting behind it. And I think this conference is over 14,000 people, has grown quite a bit from a few years back, and we're really excited to have one of the powerhouse partners with Google, who's driving to the enterprise, and that's Intel, and I'm really excited to be joined by Raejeanne Skillern, she's the VP and GM of the Cloud Platform Group, Raejeanne, great to see you. >> Thank you, thanks for having me. >> Yeah absolutely. So when we got this scheduled, I was thinking, wow, last time I saw you was at the Open Compute Project 2015, and we were just down there yesterday. >> Yesterday. And we missed each other yesterday, but here we are today. >> So it's interesting, there's kind of the guts of the cloud, because cloud is somebody else's computer that they're running, but there is actually a computer back there. Here, it's really kind of the front end and the business delivery to people to have the elastic capability of the cloud, the dynamic flexibility of cloud, and you guys are a big part of this. So first off, give us a quick update, I'm sure you had some good announcements here at the show, what's going on with Intel and Google Cloud Platform? >> We did, and we love it all, from the silicon ingredients up to the services and solutions, this is where we invest, so it's great to be a part of yesterday and today. I was on stage earlier today with Urs Holzle talking about the Google and Intel Strategic Alliance, we actually announced this alliance last November, between Diane Green and Diane Bryant of Intel. And we had a history, a decade plus long of collaborating on CPU level optimization and technology optimization for Google's infrastructure. We've actually expanded that collaboration to cover hybrid cloud orchestration, security, IOT edge to cloud, and of course, artificial intelligence, machine learning, and deep learning. So we still do a lot of custom work with Google, making sure our technologies run their infrastructure the best, and we're working beyond the infrastructure to the software and solutions with them to make sure that those software and solutions run best on our architecture. >> Right cause it's a very interesting play, with Google and Facebook and a lot of the big cloud providers, they custom built their solutions based on their application needs and so I would presume that the microprocessor needs are very specific versus say, a typical PC microprocessor, which has a more kind of generic across the board type of demand. So what are some of the special demands that cloud demands from the microprocessor specifically? >> So what we've seen, right now, about half the volume we ship in the public cloud segment is customized in some way. And really the driving force is always performance per dollar TCO improvement. How to get the best performance and the lowest cost to pay for that performance. And what we've found is that by working with the top, not just the Super Seven, we call them, but the Top 100, closely, understanding their infrastructure at scale, is that they benefit from more powerful servers, with performance efficiency, more capability, more richly configured platforms. 
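Skillern's "performance per dollar TCO" yardstick can be made concrete with a toy comparison. Every number in the sketch below is an assumption invented for illustration; none come from Intel or Google. The point is only the metric itself: delivered performance divided by lifetime cost, where a richly configured custom SKU can win even at a higher purchase price.

```python
# Toy performance-per-TCO-dollar comparison between a stock server and a
# richly configured custom SKU. All figures are illustrative assumptions.

def perf_per_tco_dollar(perf, capex, watts, years=4,
                        dollars_per_kwh=0.08, pue=1.2):
    """Relative performance divided by lifetime cost (capex + energy)."""
    hours = years * 365 * 24
    energy_cost = watts / 1000 * hours * dollars_per_kwh * pue
    return perf / (capex + energy_cost)

stock  = perf_per_tco_dollar(perf=1.00, capex=8_000,  watts=350)
custom = perf_per_tco_dollar(perf=1.35, capex=10_000, watts=400)

print(f"stock SKU : {stock:.6f} perf per TCO dollar")
print(f"custom SKU: {custom:.6f} perf per TCO dollar")
print(f"custom trails stock by {1 - custom / stock:.0%}"
      if custom < stock else
      f"custom wins by {custom / stock - 1:.0%}")
```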
So a lot of what we've done, these cloud service providers have actually in some cases pushed us off of our roadmap in terms of what we can provide in terms of performance and scalability and agility in their infrastructure. So we do a lot of tweaks around that. And then of course, as I mentioned, it's not just the CPU ingredients, we have to optimize in the software level, so we do a lot of co-engineering work to make sure that every ounce of performance and efficiency is seen in their infrastructure. And that's how they, their data center is their cost to sales, they can't afford to have anything inefficient. So we really try to partner to make sure that it is completely tailor-optimized for that environment. >> Right, and the hyperscale, like you said, the infrastructure there is so different than kind of classic enterprise infrastructure, and then you have other things like energy consumption, which, again, at scale, itty bitty little improvements >> It's expensive. >> Make a huge impact. And then application far beyond the cloud service providers, so many of the applications that we interact with now today on a day to day basis are cloud-based applications, whether it is the G Suite for documents or this or that, or whether it's Salesforce, or whether we just put in Asana for task tracking, and Slack, and so many of these things are now cloud-based applications, which is really the way we work more and more and more on our desktops. >> Absolutely. And one of the things we look at is, applications really have kind of a gravity. Some applications are going to have a high affinity to public cloud. You see Tustin Dove, you see email and office collaboration already moving into the public cloud. There are some legacy applications, complex, some of the heavier modeling and simulation type apps, or big huge super computers that might stay on premise, and then you have this middle ground of applications, that, for various reasons, performance, security, data governance, data gravity, business need or IP, could go between the public cloud or stay on premise. And that's why we think it's so important that the world recognizes that this really is about a hybrid cloud. And it's really nice to partner with Google because they see that hybrid cloud as the end state, or they call it the Multi Cloud. And their Kubernetes Orchestration Platform is really designed to help that, to seamlessly move those apps from on a customer's premise into the Google environment and have that flow. So it's a very dynamic environment, we expect to see a lot of workloads kind of continue to be invested and move into the public cloud, and people really optimizing end-to-end. >> So you've been in the data center space, we talked a little bit before we went live, you've been in the data center space for a long, long time. >> Long time. >> We won't tell you how long. (laughing) >> Both: Long time. >> So it must be really exciting for you to see this shift in computing. There's still a lot of computing power at the edge, and there's still a lot of computing power now in our mobile devices and our PCs, but so much more of the heavy lift in the application infrastructure itself is now contained in the data center, so much more than just your typical old-school corporate data centers that we used to see. Really fun evolution of the industry, for you. >> Absolutely, and the public cloud is now one of the fastest growing segments in the enterprise space, in the data center space, I should say. 
We still have a very strong enterprise business. But what I love is it's not just about the fact that the public cloud is growing, this hybrid really connects our two segments, so I'm really learning a lot. It's also, I've been at Intel 23 years, most of it in the data center, and last year, we reorganized our company, we completely restructured Intel to be a cloud and IoT company. And from a company that for multiple decades was a PC or consumer-based client device company, it is just amazing to have data center be so front and center and so core to the type of infrastructure and capability expansion that we're going to see across the industry. We were talking about, there isn't going to be an industry left untouched by technology. Whether it's agriculture, or industrial, or healthcare, or retail, or logistics. Technology is going to transform them, and it all comes back to a data center and a cloud-based infrastructure that can handle the data and the scale and the processing. >> So one of the new themes that's really coming on board, next week will be Big Data SV, which has grown out of Hadoop and the old big data conversation. But it's really now morphing into the next stage of that, which is machine learning, deep learning, artificial intelligence, augmented reality, virtual reality, so this whole 'nother round that's going to eat up a whole bunch of CPU capacity. But those are really good cloud-based applications that are now delivering a completely new level of value and application sophistication that's driven by power back at the data center. >> Right. We see, artificial intelligence has been a topic since the 50s. But the reality is, the technology is there today to both capture and create the data, and compute on the data. And that's really unlocking these capabilities. And from us as a company, we see it as really something that is going to not just transform us as a business but transform the many use cases and industries we talked about. Today, you or I generate about a gig and a half of data, through our devices and our PC and tablet. A smart factory or smart plane or smart car, autonomous car, is going to generate terabytes of data. Right, and that is going to need to be stored. Today it's estimated only about 5% of the data captured is used for business insight. The rest just sits. We need to capture the data, store the data efficiently, use the data for insights, and then drive that back into the continuous learning. And that's why these technologies are so amazing, what they're going to be able to do, because we have the technology and the opportunity in the business space, whether it's AI for play or for good or for business, AI is going to transform the industry. >> It's interesting, Moore's Law comes up all the time. People, is Moore's Law done, is Moore's Law done? And you know, Moore's Law is so much more than the physics of what he was describing when he first said that in the first place, about the number of transistors on a chip. It's really about an attitude, about this unbelievable drive to continue to innovate and iterate and get these order-of-magnitude increases. We talked to David Floyer at OCP yesterday, and he was talking about how it's not only the microprocessors and the compute power, but it's the IO, it's the networking, it's storage, it's flash storage, it's the interconnect, it's the cabling, it's all these things.
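Raejeanne's data figures lend themselves to a quick back-of-the-envelope check. The sketch below takes the "gig and a half" per person and "about 5%" figures from the conversation; treating both as daily rates and using four terabytes per day for an autonomous car are assumptions for illustration, since she only says "terabytes."

```python
# Back-of-the-envelope arithmetic for the data figures in the conversation.
GB_PER_PERSON_PER_DAY = 1.5          # "about a gig and a half" (treated as per day)
TB_PER_CAR_PER_DAY = 4.0             # assumed; the transcript just says "terabytes"
FRACTION_USED_FOR_INSIGHT = 0.05     # "only about 5% of the data captured"

car_vs_person = (TB_PER_CAR_PER_DAY * 1_000) / GB_PER_PERSON_PER_DAY
unused_fraction = 1 - FRACTION_USED_FOR_INSIGHT

print(f"one autonomous car ~= {car_vs_person:,.0f}x the daily data of one person")
print(f"share of captured data never used for insight: {unused_fraction:.0%}")
```

Even with these rough assumptions, a single autonomous car generates data on the order of a few thousand people's worth per day, and by her estimate roughly 95% of what gets captured just sits.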
And he was really excited that we're getting to this massive tipping point, of course in five years we'll look back and think it's archaic, of these things really coming together to deliver low-latency, almost magical capabilities because of this combination of factors across all those different, kind of the three horsemen of computing, if you will, to deliver these really magical new applications, like autonomous vehicles. >> Absolutely. And we, you'll hear Intel talk about Jevons Paradox, which is really about, if you take something and make it cheaper and easier to consume, people will consume more of it. We saw that with virtualization. People predicted, oh, everything's going to slow down 'cause you're going to get higher utilization rates. Actually it just unlocked new capabilities and the market grew because of it. We see the same thing with data. Our CEO will talk about, data is the new oil. It is going to transform, it's going to unlock business opportunity, revenue growth, cost savings in their environment, and that will cause people to create more services, build new businesses, reach more people in the industry, transform traditional brick-and-mortar businesses to the digital economy. So we think we're just on the cusp of this transformation, and the next five to 10 years are going to be amazing. >> So before we let you go, again, you've been doing this for 20 plus years, I wasn't going to say anything, she said it, I didn't say it, and I worked at Intel the same time, so that's good. As you look forward, what are some of your priorities for 2017, what are some of the things that you're working on, that if we get together, hopefully not in a couple years at OCP, but next year, that you'll be able to report back that this is what we worked on and these are some of the new accomplishments that are important to me? >> So I'm really, there's a number of things we're doing. You heard me mention artificial intelligence many, many times. In 2016, Intel made a number of significant acquisitions and investments to really ensure we have the right technology roadmap for artificial intelligence. Machine learning, deep learning, training and inference. And we've really shored up that product portfolio, and you're going to see these products come to market and you're going to see user adoption, not just in my segment, but transforming multiple segments. So I'm really excited about those capabilities. And a lot of what we'll do, too, will be very vertical-based. So you're going to see the power of the technology, solving the healthcare problem, solving the retail problem, solving manufacturing, logistics, industrial problems. So I like that, I like to see tangible results from our technology. The other thing is the cloud is just growing. Everybody predicted, can it continue to grow? It does. Companies like Google and our other partners, they keep growing and we grow with them, and I love to help figure out where they're going to be two or three years from now, and get our products ready for that challenge. >> Alright, well I look forward to our next visit. Raejeanne, thanks for taking a few minutes out of your time and speaking to us. >> It was nice to see you again. >> You too. Alright, she's Raejeanne Skillern and I'm Jeff Frick, you're watching theCUBE, we're at the Google Cloud Next Show 2017, thanks for watching. (electronic sounds)

Published Date : Mar 9 2017
