
Rachel Botsman, University of Oxford | Coupa Insp!re EMEA 2019


 

>> Announcer: From London, England, it's theCUBE! Covering Coupa Insp!re'19 EMEA. Brought to you by Coupa. >> Hey, welcome to theCUBE. Lisa Martin on the ground in London at Coupa Insp!re'19. Can you hear all the buzz around me? You probably can hear it, it's electric. The keynote just ended, and I'm very pleased to welcome, fresh from the keynote stage, we have Rachel Botsman, author and trust expert from Oxford University. Rachel, welcome to theCUBE! >> Thank you for having me. >> Your talk this morning about the intersection of trust and technology, to say it's interesting is an understatement. You had some great examples where you showed some technology brands, that we all know, and have different relationships with: Uber, Facebook, and Amazon. And the way that you measured the audience is great, you know, clap the brand that you trust the most. And it was so interesting, because we expect these technology brands to, they should be preserving our information, but we've also seen recent history, some big examples, of that trust being broken. >> Rachel: Yeah, yeah. >> Talk to us about your perspectives. >> So what I thought was interesting, well kind of unexpected for me, was no one clapped for Facebook, not one person in the room. And this is really interesting to me, because the point that I was making is that trust is really, really contextual, right? So if I had said to people, do you trust on Facebook that you can find your friends from college, they probably would've clapped. But do I trust them with my data, no. And this distinction is so important, because if you lose trust in one area as a company or a brand, and it can take time, you lose that ability to interact with people. So our relationship and our trust relationship with brands is incredibly complicated. But I think, particular tech brands, what they're realizing is that, how badly things go wrong when they're in a trust crisis. >> Talk to me about trust as a currency. 
You gave some great examples this morning. Money is the currency for transactions, where trust is the currency of interactions. >> Yeah, well I was trying to frame things, not because they sound nice, but how do you create a lens where people can really understand, like what is the value of this thing, and what is the role that it plays? And I'm never going to say money's not important; money is very important. But people can understand money; people value money. And I think that's because it has a physical form, you can touch it, and it has an agreed value, right? Trust I actually don't believe can be measured. Trust is, what is it? It's something there, there's a connection between people. So you know when you have trust because you can interact with people. You know when you have trust because you can place your faith in them, you can share things about yourself and also share things back. So it's kind of this idea that, think of it as a currency, think of it as something that you should really value that is incredibly fragile in any situation in any organization. >> How does a company like Coupa, or an Amazon or a Facebook, how do they leverage trust and turn it into a valuable asset? >> Yeah, I don't like the idea that you sort of unlock trust. I think companies that really get it right are companies that think day in and day out around behaviors and culture. If you get behaviors and culture right, like the way people behave, whether they have empathy, whether they have integrity, whether you feel like you can depend on them, trust naturally flows from that. But the other thing that often you find with brands is they think of trust as like this reservoir, right? So it's different from awareness and loyalty; it's not like this thing that, you can have this really full up battery which means then you can launch some crazy products and everyone will trust it. We've seen this with like, Mattel, the toy brand. 
They launched a smart system for children called Aristotle, and within six months they had to pull it because people didn't trust what it was recording and watching in people's bedrooms. We were talking about Facebook and the cryptocurrency Libra, their new smart assistants; I wouldn't trust that. Amazon have introduced smart locks; I don't know if you've seen these? >> Lisa: Yes. >> Where if you're not home, it's inconvenient; you get a very annoying package slip. So you put in an Amazon lock and the delivery person will walk into your home. I trust Amazon to deliver my parcels; I don't trust them to give access to my home. So what we do with that trust and how we tap into it, it really depends on the risk that we're asking people to take. 
The reason why is because their trust is largely, I talked about capability and character. Amazon's trust is really built around capability. The capability of their fulfillment centers, like how efficient they are. Character wobbles, right? Like, does Bezos have integrity? Do we really feel like they care about the bookshops they're eating up? Or they want us to spend money on the right things? And when you have a brand and the trust is purely built around capability and the character piece is missing, it's quite a precarious place to be. >> Lisa: I saw a tweet that you tweeted recently. >> Uh oh! (laughs) >> Lisa: On the difference between capability and character. >> Yes, yeah. >> Lisa: And it was fascinating because you mentioned some big examples, Boeing. >> Yes. >> The two big air disasters in the last year. Facebook, obviously, the security breach. WeWork, this overly aggressive business model. And you said these companies are placing the blame, I'm not sure if that's the right word-- >> No no, the blame, yeah. >> On product or service capabilities, and you say it really is character. Can you talk to our audience about the difference, and why character is so important. >> Yeah, it's so interesting. So you know, sometimes you post things. I actually post more on LinkedIn, and suddenly like, you hit a nerve, right? Because I don't know, it's something you're summarizing that many people are feeling. And so the point of that was like, if you look at Boeing, Theranos was another example, WeWork, hundreds of banks, when something goes wrong they say it was a flaw in the product, it was a flaw in the system, it's a capability problem. And I don't think that's the case. Because the root cause of capability problems come from character and culture. And so, capability is really about the competence and reliability of someone or a product or service. Character is how someone behaves. Character gets to their intentions and motives. 
Character gets to, did they know about it and not tell us. Even VW is another example. >> Lisa: Yes. >> So it's not the product that is the issue. And I think we as consumers and citizens and customers, where many companies get it wrong in a trust crisis is they talk about the product fix. We won't forgive them, or we won't start giving them our trust again until we really believe something's changed about their character. I'm not sure anything has changed with Facebook's culture and character, which is why they're struggling with every move that they take, even though their intentions might be good. That's not how people in the world are viewing them. >> Do you think, taking Boeing as an example, I fly a lot, I'm sure you do as well. >> Rachel: Yeah. >> When those accidents happened, I'm sure everybody, including myself, was checking, what plane is this? >> Rachel: Yeah. >> Because when you know, especially once data starts being revealed, that demonstrated pilots, test pilots, were clearly saying something isn't right here, why do you think a company like Boeing isn't coming out and addressing that head on from an integrity perspective? Do you think that could go a long way in helping their brand reputation? >> I never, I mean I do get it, I'm married to a lawyer so I understand, legal gets involved, governance gets involved, so it's like, let's not disclose that. They're so worried about the implications. But it's this belief they can keep things hidden. It's a continual pattern, right? And that they try to show empathy, but really it comes across as some weird kind of sympathy. They don't really show humility. And so, when the CEO sits there, I have to believe he feels the pain of the human consequence of what happened. But more importantly, I have to believe it will never happen again. And again, it's not necessarily, do I trust the products Boeing creates, it's do I trust the people? Do I trust the decisions that they're making? 
And so it's really interesting to watch companies, Samsung, right? You can recover from a product crisis, with the phones, and they kind of go away. But it's much harder to recover from what, Boeing is a perfect example, has become a cultural crisis. >> Right, right. Talk to us about the evolution of trust. You talked about these three waves. Tell our audience about that, and what the third wave is and why we're in it, benefits? And also things to be aware of. >> Yes! (laughs) I didn't really talk about this today, because it's all about inspiration. So just to give you a sense, the way I think about trust is three chapters of human history. So the first one is called local trust; all running around villages and communities. I knew you, I knew your sister, I knew whoever was in that village. And it was largely based on reputation. So, I borrowed money from someone I knew, I went to the baker. Now this type of trust, it was actually phenomenally effective, but we couldn't scale it. So when we wanted to trade globally, the Industrial Revolution, moving to cities, we invented what I call institutional trust. And that's everything from financial systems to insurance products, all these mechanisms that allow trust to flow on a different level. Now what's happening today, it's not those two things are going away and they're not important; they are. It's that what technology inherently does, particularly networks, marketplaces, and platforms, is it takes this trust that used to be very hierarchical and linear, we used to look up to the CEO, we used to look up to the expert, and it distributes it around networks and platforms. So you can see that at Coupa, right? And this is amazing because it can unlock value, it can create marketplaces. It can change the way we share, connect, collaborate. But I think what's happened is that, sort of the idealism around this and the empowerment is slightly tinged, in a healthy way, realizing a lot can go wrong. 
So distributed trust doesn't necessarily mean distributed responsibility. My biggest insight from observing many of these communities is that, we like the idea of empowerment, we like the idea of collaboration, and we like the idea of control, but when things go wrong, they need a center. Does that make sense? >> Lisa: Absolutely, yes. >> So, a lot of the mess that we're seeing in the world today is actually caused by distributed trust. So when I like, read a piece of information that isn't from a trusted source and I make a decision to vote for someone, just an example. And so we're trying to figure out, what is the role of the institution in this distributed world? And that's why I think things have got incredibly messy. >> It certainly has the potential for that, right? Looking at, one of the things that I also saw that you were talking about, I think it was one of your TED Talks, is reputation capital. And you said you believe that will be more powerful than credit history in the 21st century. How can people, like you and I, get, I want to say control, over our reputation, when we're doing so many transactions digitally-- >> Rachel: I know. >> And like I think you were saying in one of your talks, moving from one country to another and your credit history doesn't follow you. How can somebody really control their trust capital and creative positive power from it? >> They can't. >> They can't? Oh no! >> I don't want to disappoint you, but there's always something in a TED speech that you wish you could take out, like 10 years later, and be like, not that you got it wrong, but that there's a naivety, right? So it is working in some senses. So what is really hard is like, if I have a reputation on Airbnb, I have a reputation on Amazon, on either side of the marketplace, I feel like I own that, right? That's my value, and I should be able to aggregate that and use that to get a loan, or get a better insurance, because it's a predictor of how I behave in the future. 
So I don't believe credit scores are a good predictor of behavior. That is very hard to do, because the marketplaces, they believe they own the data, and they have no incentive to share the reputation. So believe me, like so many companies after, actually it was wonderful after that TED Talk, many tried to figure out how to aggregate reputation. Where I have seen it play out as an idea, and this is really very rewarding, is many entrepreneurs have taken the idea and gone to emerging markets, or situations where people have no credit history. So Tala is a really good example, which is a lending company. Insurance companies are starting to look at this. There's a company called Traity. Where they can't get a loan, they can't get a product, they can't even open a bank account because they have no traditional credit history. Everyone has a reputation somewhere, so they can tap into these networks and use that to have access to things that were previously inaccessible. So that's the application I'm more excited about versus having a trust score. >> A trust score that we would be able to then use for our own advantages, whether it's getting a job, getting a loan. >> Yeah, and then unfortunately what also happened was China, and God forbid that I in any way inspired this decision, decided they would have a national trust score. So they would take what you're buying online and what you were saying online, all these thousands of interactions, and that the government would create a trust score that would really impact your life: the schools that your children could go to, and there's a blacklist, and you know, if you jaywalk your face is projected and your score goes down. Like, this is like an episode of Black Mirror. >> It's terrifying. >> Yeah. >> There's a fine line there. Rachel, I wish we had more time, because we could keep going on and on and on. But I want to thank you-- >> A pleasure. 
>> For coming right from the keynote stage to our set; it was a pleasure to meet you. >> On that dark note. >> Yes! (laughing) For Rachel Botsman, I'm Lisa Martin. You're watching theCUBE from Coupa Insp!re London '19. Thanks for watching. (digital music)

Published Date : Nov 6 2019


Ian Buck, NVIDIA | AWS re:Invent 2021


 

>> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're joined by Ian Buck, general manager and vice president of accelerated computing at NVIDIA. I'm John Furrier, your host of theCUBE. Ian, thanks for coming on. NVIDIA, obviously a great brand; congratulations on all your continued success. Everyone who does anything in graphics knows the GPUs are hot, and you've had great success as a company. But AI and machine learning is a trend significantly powered by GPUs and other systems, so it's a key part of everything. What are the trends you're seeing in ML and AI that are accelerating computing to the cloud? >> Yeah, AI is driving breakthrough innovations across so many segments, so many different use cases. We see it showing up in things like credit card fraud prevention and product and content recommendations. Really, the new engine behind search engines is AI. People are applying AI to things like meeting transcriptions and virtual calls like this, using AI to actually capture what was said, and that gets applied to person-to-person interactions. We also see it in intelligent assistants for contact center automation, in chatbots, in medical imaging, and in intelligent stores and warehouses. It's really amazing what AI has demonstrated it can do, and new use cases are showing up all the time. >> I'd love to get your thoughts on how the world's evolved just in the past few years along with cloud, and certainly the pandemic's proven it. You had this whole full-stack mindset initially, and now you're seeing more horizontal scale, yet enabling vertical specialization in applications. You mentioned some of those apps; this horizontal play with enablement for specialization, with data, is a huge shift that's going on.
It's been happening. What's your reaction to that? >> Yeah, the innovations are on two fronts. There's a horizontal front, which is basically the different kinds of neural networks and machine learning techniques being invented by researchers and the community at large, including Amazon. It started with convolutional neural networks, which are great for image processing, and expanded more recently into recurrent neural networks and transformer models, which are great for language and understanding, and then the new hot topic, graph neural networks, where the graph itself is trained as a neural network. You have this underpinning of great AI technologies being invented around the world, and NVIDIA's role is to try to productize that, provide a platform for people to do that innovation, and then take the next step and innovate vertically: take it and apply it to a particular field, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them that highlights different parts of the scan.
It's building combining both our graphics and simulation technologies, along with the, you know, the AI training capabilities and different capabilities in order to run in real time. Those are, >>Yeah. I mean, it's just so cutting edge. It's so relevant. I mean, I think one of the things you mentioned about the neural networks, specifically, the graph neural networks, I mean, we saw, I mean, just to go back to the late two thousands, you know, how unstructured data or object store created, a lot of people realize that the value out of that now you've got graph graph value, you got graph network effect, you've got all kinds of new patterns. You guys have this notion of graph neural networks. Um, that's, that's, that's out there. What is, what is a graph neural network and what does it actually mean for deep learning and an AI perspective? >>Yeah, we have a graph is exactly what it sounds like. You have points that are connected to each other, that established relationships and the example of amazon.com. You might have buyers, distributors, sellers, um, and all of them are buying or recommending or selling different products. And they're represented in a graph if I buy something from you and from you, I'm connected to those end points and likewise more deeply across a supply chain or warehouse or other buyers and sellers across the network. What's new right now is that those connections now can be treated and trained like a neural network, understanding the relationship. How strong is that connection between that buyer and seller or that distributor and supplier, and then build up a network that figure out and understand patterns across them. For example, what products I may like. 
Because I have this connection in my graph, what other products might meet those requirements? Or identifying things like fraud, when buying patterns don't match what a graph neural network would say is the typical graph connectivity, the different weights and connections between the two, captured by how frequently I buy things or how I rate them or give them stars. This application of graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is a very exciting new application: applying AI to optimizing business, to reducing fraud, and to making the products we want and the recommendations we see things that excite us. >> Great setup for the real conversation going on here at re:Invent, which is that new kinds of workloads are changing the game. People are refactoring their business, not just replatforming, and actually using this to identify value, and cloud scale gives you the compute power to look at a node and an arc and actually compute on it. It's all computer science, at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS? >> Yeah, AWS has been a great partner and one of the first cloud providers to ever offer GPUs in the cloud. Most recently we've announced two new instances: the G5 instance, which is based on the A10G GPU and supports the NVIDIA RTX technology, our rendering technology for real-time ray tracing in graphics and game streaming. It's their highest-performance graphics instance, and it allows those high-performance graphics applications to be hosted directly in the cloud.
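The message-passing idea at the heart of the graph neural networks Buck describes can be sketched in a few lines. Everything below (the graph, the features, the numbers) is invented for illustration; a real GNN layer would follow this neighbour aggregation with a learned transform and non-linearity, which is omitted here.

```python
# Toy graph for one message-passing step, the core operation of a graph
# neural network (GNN). Nodes could be buyers and sellers, edges their
# relationships. All values are invented for illustration.

graph = {          # adjacency list: node -> neighbours
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2],
}

features = {       # per-node feature vectors (e.g. purchase-history stats)
    0: [1.0, 0.2],
    1: [0.4, 0.9],
    2: [0.7, 0.5],
    3: [0.1, 0.8],
}

def message_passing_step(graph, features):
    """Each node's new embedding is the mean of its neighbours' features.
    A trained GNN layer would follow this aggregation with learned
    weights and a non-linearity; that part is omitted for brevity."""
    updated = {}
    for node, neighbours in graph.items():
        dim = len(features[node])
        total = [0.0] * dim
        for nb in neighbours:
            for i in range(dim):
                total[i] += features[nb][i]
        updated[node] = [t / len(neighbours) for t in total]
    return updated

embeddings = message_passing_step(graph, features)
print(embeddings[0])  # node 0 averages nodes 1 and 2: approximately [0.55, 0.7]
```

Stacking several such steps lets information flow across multi-hop relationships, which is how a trained GNN can surface patterns like "products similar buyers liked" or flag anomalous, possibly fraudulent, connectivity.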
And of course it runs everything else as well, including all of our AI stacks. We also announced with AWS the G5g instance. This is exciting because it's the first Graviton, ARM-based processor connected to a GPU in the cloud. The focus here is Android gaming and machine learning inference, and we're excited to see the advancements Amazon and AWS are making with Arm in the cloud. We're glad to be part of that journey. >> Well, congratulations. I remember watching my interviews with James Hamilton from AWS in 2013 and 2014; he was teasing this out, that they were going to get in there, build their own silicon and their own connections, take that latency down, and do other things. This is kind of the harvest of all that. As you look at these new interfaces and new servers, the new technology you're building enables applications. What do you see this enabling? New speed and more performance, but also new capabilities so that new workloads can be realized. What would you say to folks who ask that question? >> Well, first off, I think Arm is here to stay. You can see the growth and explosion of Arm, led of course by Graviton2 and many others, and by bringing all of NVIDIA's rendering, graphics, machine learning, and AI technologies to Arm, we can help bring that innovation forward. Arm allows open innovation because it's an open architecture for the entire ecosystem, and we can help bring it to the state of the art in AI, machine learning, and graphics. All the software we release is supported both on x86 and on Arm equally, including all of our AI stacks. Most notably for inference, the deployment of AI models, we have the NVIDIA Triton Inference Server.
This is our inference serving software: after you've trained a model, you want to deploy it at scale on any CPU or GPU instance, for that matter. So we support both CPUs and GPUs with Triton. It's natively integrated with SageMaker and provides the benefit of all those performance optimizations all the time, features like dynamic batching. It supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. So we're activating the Arm ecosystem as well as bringing all those new AI use cases, at all those different performance levels, through our partnership with AWS and all the different clouds. >> And you're making it really easy for people to use the technology, which brings up the next question I want to ask you. A lot of people are jumping into this big time. They're adopting AI, moving from prototype to production. There are always some gaps, whether knowledge gaps or skills gaps, but people are accelerating into AI and leaning in hard. What advancements has NVIDIA made to make it more accessible, so people can move faster through the process? >> Yeah, it's one of the biggest challenges. For all the promise of AI, all the publications and research coming out, how can you make it more accessible and easier to use by more people, rather than just AI researchers? That's obviously a very challenging and interesting field, but not one that's directly the business. NVIDIA takes a full-stack approach to AI. As we see these AI technologies become available, we produce SDKs to help activate them and connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming to design, to life sciences, to earth sciences.
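To make the Triton features mentioned above concrete (dynamic batching, multiple model instances per GPU), a minimal model configuration might look like the fragment below. The model name, framework backend, and tensor shapes are hypothetical, not drawn from the interview; this is a sketch of the kind of `config.pbtxt` a deployment would supply per model.

```protobuf
# Hypothetical Triton model configuration (config.pbtxt).
name: "recommender_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 32

input [
  { name: "INPUT__0", data_type: TYPE_FP32, dims: [ 128 ] }
]
output [
  { name: "OUTPUT__0", data_type: TYPE_FP32, dims: [ 10 ] }
]

# Dynamic batching: group individual requests arriving within a short
# window into one larger batch to raise GPU utilization.
dynamic_batching {
  max_queue_delay_microseconds: 100
}

# Run two copies of the model on one GPU to serve concurrent streams.
instance_group [
  { kind: KIND_GPU, count: 2 }
]
```

The `instance_group` and `dynamic_batching` settings are the knobs behind the "multiple models on a single GPU" and batching points made in the interview.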
We even have software to help simulate quantum computing, and of course all the work we're doing with AI, 5G, and robotics. We actually just introduced about 65 new updates this past month across all those SDKs. Some of the newer work that's really exciting is the large language models. People are building some amazing AI capable of understanding the corpus of human knowledge, language models trained on literally the content of the internet, to provide general-purpose, open-domain chatbots, so the customer is going to have a new kind of experience with a computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, every time I do an interview with NVIDIA or talk about NVIDIA, the first thing my kids and their friends say is, get me a good graphics card; they want the best thing in their rig. Obviously the gaming market's hot and you're known for that, but there's a huge software team behind NVIDIA. This is well known; your CEO is always talking about it in his keynotes. You're in the software business, and you do have hardware, you're integrating with Graviton and other things, but it's software practices; this is all about software. Could you share more about NVIDIA's software culture, and specifically around the scale? I mean, you hit every use case. >> It is actually bigger; we have more software people than hardware people. People don't often realize this.
And in fact, it just starts with the chip. Building great silicon is necessary to provide that level of innovation, but it has expanded dramatically from there: not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves to help build out this infrastructure. We consume it and use it ourselves, building our own supercomputers to use AI to improve our products. And then all the software we build on top we make available, as I mentioned before, as containers in our NGC container registry, which is accessible to anyone. To connect with those vertical markets, instead of just opening up the hardware and letting the ecosystem develop on it, which they can with the low-level programmatic stacks we provide with CUDA, we believe those vertical stacks are the way we can help accelerate and advance AI, and that's why we make them as well. >> Right, and iterating in software is so much easier. I want to get that plug in; I think it's worth noting that you guys are heavy and hardcore, especially on the AI side, and it's worth calling out. Getting back to the customers who are bridging that gap and getting out there: what are the metrics they should consider as they're deploying AI? What do success metrics look like? Can you share any insight into what they should be thinking about, and how to gauge how they're doing? >> Yeah. For training, it's all about time to solution. It's not the hardware that's the real cost; it's the opportunity that AI can provide your business, and the productivity of those data scientists doing the developing, who are not easy to come by.
So what we hear from customers is they need a fast time to solution, to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course move on to the next one or continue to refine it. So in training, it's time to solution. For inference, it's about your ability to deploy at scale. Often people have real-time requirements; they want to run within a certain amount of latency, a certain amount of time. And typically most companies don't have a single AI model; they have a collection of them they want to run for a single service or across multiple services. That's where you can aggregate some of your infrastructure. Leveraging the Triton inference server I mentioned before, you can actually run multiple models on a single GPU, saving costs and optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that your customers have a good interaction with the AI. >> Awesome. Great. Let's get into the customer examples. You guys obviously have great customers. Can you share some of the use cases, examples with notable customers? >> Yeah. One great part about working at NVIDIA as a technology company is you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now: Netflix is using the G4 instances to do video effects and animation content from anywhere in the world, in the cloud, as a cloud content-creation platform. We work in the energy field; Siemens Energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, preventing or optimizing onsite inspection activities and eliminating downtime, which is saving a lot of money for the energy industry.
We have worked with Oxford University, which actually has over 20 million artifacts, specimens, and collections across its gardens, museums, and libraries. They're actually using NVIDIA GPUs and Amazon to do enhanced image recognition to classify all these things, which would take literally years going through each of these artifacts manually. Using AI, we can quickly catalog all of them and connect them with their users. Great stories across graphics, across industries, across research; it's just so exciting to see what people are doing with our technology, together with Amazon. >> Thank you so much for coming on theCUBE. I really appreciate it, a lot of great content there. We could probably go another hour with all the great stuff going on at NVIDIA. Any closing remarks you want to share as we wrap this last minute up? >> Really, what NVIDIA is about is accelerating cloud computing, whether it be AI, machine learning, graphics, or high-performance computing and simulation. And AWS was one of the first with this, in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities: integrations with SageMaker, EKS, and ECS, and the new instances with G5 and G5g. Very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of Accelerated Computing. I mean, how can you not love that title? We want more power, more faster, come on, more computing. No one's going to complain about more computing. Thanks for coming on. >> Thank you. Appreciate it. >> I'm John Furrier, host of theCUBE. You're watching Amazon coverage of re:Invent 2021. Thanks for watching.

Published Date : Nov 30 2021



PA3 Ian Buck


 

(bright music) >> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. I'm John Furrier, host of theCUBE. Ian, thanks for coming on. >> Oh, thanks for having me. >> So NVIDIA, obviously, great brand. Congratulations on all your continued success. Everyone who does anything in graphics knows that GPUs are hot, and you guys have a great brand, great success in the company. But AI and machine learning, we're seeing the trend significantly being powered by the GPUs and other systems. So it's a key part of everything. So what are the trends you're seeing in ML and AI that are accelerating computing to the cloud? >> Yeah. I mean, AI is kind of driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up with things like credit card fraud prevention, and product and content recommendations. Really, it's the new engine behind search engines, is AI. People are applying AI to things like meeting transcriptions, virtual calls like this, using AI to actually capture what was said. And that gets applied in person-to-person interactions. We also see it in intelligent assistants for contact center automation, or chatbots, medical imaging, and intelligent stores, and warehouses, and everywhere. It's really amazing what AI has been demonstrating, what it can do, and its new use cases are showing up all the time. >> You know, Ian, I'd love to get your thoughts on how the world's evolved, just in the past few years alone, with cloud. And certainly, the pandemic's proven it. You had this whole kind of fullstack mindset, initially, and now you're seeing more of a horizontal scale, but yet, enabling this vertical specialization in applications. I mean, you mentioned some of those apps.
The new enablers, this kind of, the horizontal play with enablement for, you know, specialization with data, this is a huge shift that's going on. It's been happening. What's your reaction to that? >> Yeah. The innovation's on two fronts. There's a horizontal front, which is basically the different kinds of neural networks or AIs, as well as machine learning techniques, that are just being invented by researchers and the community at large, including Amazon. You know, it started with these convolutional neural networks, which are great for image processing, but it has expanded more recently into recurrent neural networks, transformer models, which are great for language and language understanding, and then the new hot topic, graph neural networks, where the actual graph now is trained as a neural network. You have this underpinning of great AI technologies that are being invented around the world. NVIDIA's role is to try to productize that and provide a platform for people to do that innovation. And then, take the next step and innovate vertically. Take it and apply it to a particular field, like medical, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them and highlight different parts of the scan that may be troublesome or worrying, or require some more investigation. Using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI being constantly trained and reinforced, and learn how to do certain activities and techniques. So that the first time it's ever downloaded into a real robot, it works right out of the box. To activate that, we are creating different vertical solutions, vertical stacks, vertical products, that talk the languages of those businesses, of those users. In medical imaging, it's processing medical data, which is obviously very complicated, large-format data, often three-dimensional voxels.
In robotics, it's building, combining both our graphics and simulation technologies, along with the AI training capabilities and inference capabilities, in order to run in real time. Those are just two simple- >> Yeah, no. I mean, it's just so cutting-edge, it's so relevant. I mean, I think one of the things you mentioned about the neural networks, specifically, the graph neural networks, I mean, we saw, I mean, just go back to the late 2000s, how unstructured data and object storage were created, and a lot of people realized a lot of value out of that. Now you got graph value, you got network effect, you got all kinds of new patterns. You guys have this notion of graph neural networks that's out there. What is a graph neural network, and what does it actually mean from a deep learning and an AI perspective? >> Yeah. I mean, a graph is exactly what it sounds like. You have points that are connected to each other, that establish relationships. In the example of Amazon.com, you might have buyers, distributors, sellers, and all of them are buying, or recommending, or selling different products. And they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise, more deeply across a supply chain, or warehouse, or other buyers and sellers across the network. What's new right now is that those connections now can be treated and trained like a neural network, understanding the relationship, how strong is that connection between that buyer and seller, or the distributor and supplier, and then build up a network to figure out and understand patterns across them. For example, what products I may like, 'cause I have this connection in my graph, what other products may meet those requirements?
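Buck's graph description can be made concrete with a few lines of code. What follows is a deliberately simplified, hypothetical sketch (made-up nodes and edge weights, not NVIDIA's or Amazon's implementation): one round of "message passing," in which each node updates its feature vector from its weighted neighbors — the core operation that real graph neural network frameworks learn parameters for.

```python
# Toy e-commerce graph: buyers/sellers as nodes, interactions as weighted
# edges (all values hypothetical). One message-passing round replaces each
# node's features with a weighted average over itself and its neighbors.

def message_pass(features, edges):
    """features: {node: [f1, f2]}, edges: {(a, b): weight} (undirected)."""
    neighbors = {n: [] for n in features}
    for (a, b), w in edges.items():
        neighbors[a].append((b, w))
        neighbors[b].append((a, w))
    updated = {}
    for node, feat in features.items():
        agg = list(feat)   # start from the node's own features
        total = 1.0        # weight of the self-connection
        for other, w in neighbors[node]:
            for i, v in enumerate(features[other]):
                agg[i] += w * v
            total += w
        updated[node] = [v / total for v in agg]
    return updated

features = {"buyer": [1.0, 0.0], "seller": [0.0, 1.0], "distributor": [0.5, 0.5]}
edges = {("buyer", "seller"): 2.0, ("seller", "distributor"): 1.0}
print(message_pass(features, edges))
```

After a round like this, each node's vector reflects its neighborhood, so unusual buying patterns (fraud) or similar tastes (recommendations) show up as distances between vectors; a trained GNN replaces the fixed averaging with learned weights.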
Or, also, identifying things like fraud, when transaction and buying patterns don't match what a graph neural network says would be the typical kind of graph connectivity, the different kinds of weights and connections between the two, captured by the frequency of how often I buy things, or how I rate them or give them stars, or other such use cases. This application of graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is a very exciting new application of AI for optimizing business, reducing fraud, and letting us get access to the products that we want, having our recommendations be things that excite us and make us want to buy more. >> That's a great setup for the real conversation that's going on here at re:Invent, which is new kinds of workloads are changing the game. People are refactoring their business, not just re-platforming, but actually using this to identify value. And also, your cloud scale allows you to have the compute power to look at a node and an arc and actually code that. It's all science, it's all computer science, all at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS, specifically? >> Yeah, AWS has been a great partner, and one of the first cloud providers to ever provide GPUs in the cloud. More recently, we've announced two new instances: the G5 instance, which is based on our A10G GPU, which supports the NVIDIA RTX technology, our rendering technology, for real-time ray tracing in graphics and game streaming. This is our highest-performance graphics-enhanced offering, and it allows those high-performance graphics applications to be directly hosted in the cloud. And, of course, it runs everything else as well. It has access to our AI technology and runs all of our AI stacks. We also announced, with AWS, the G5g instance.
This is exciting because it's the first Graviton or Arm-based processor connected to a GPU and successful in the cloud. The focus here is Android gaming and machine learning inference. And we're excited to see the advancements that Amazon is making and AWS is making, with Arm in the cloud. And we're glad to be part of that journey. >> Well, congratulations. I remember, I was just watching my interview with James Hamilton from AWS 2013 and 2014. He was teasing this out, that they're going to build their own, get in there, and build their own connections to take that latency down and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces, and the new servers, new technology that you guys are doing, you're enabling applications. What do you see this enabling? As this new capability comes out, new speed, more performance, but also, now it's enabling more capabilities so that new workloads can be realized. What would you say to folks who want to ask that question? >> Well, so first off, I think Arm is here to stay. We can see the growth and explosion of Arm, led of course, by Graviton and AWS, but many others. And by bringing all of NVIDIA's rendering graphics, machine learning and AI technologies to Arm, we can help bring that innovation that Arm allows, that open innovation, because there's an open architecture, to the entire ecosystem. We can help bring it forward to the state of the art in AI machine learning and graphics. All of our software that we release is both supportive, both on x86 and on Arm equally, and including all of our AI stacks. So most notably, for inference, the deployment of AI models, we have the NVIDIA Triton inference server. This is our inference serving software, where after you've trained a model, you want to deploy it at scale on any CPU, or GPU instance, for that matter. So we support both CPUs and GPUs with Triton. 
It's natively integrated with SageMaker and provides the benefit of all those performance optimizations. It has features like dynamic batching, and it supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. We're helping activate the Arm ecosystem, as well as bringing all those new AI use cases, and all those different performance levels, with our partnership with AWS and all the different cloud instances. >> And you guys are making it really easy for people to use the technology. That brings up the next, kind of, question I wanted to ask you. I mean, a lot of people are really going in, jumping in big-time into this. They're adopting AI, they're moving it from prototype to production. There's always some gaps, whether it's, you know, knowledge, skills gaps, or whatever. But people are accelerating into the AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible for people to move faster through the system, through the process? >> Yeah. It's one of the biggest challenges. You know, the promise of AI, all the publications that are coming out, all the great research, you know, how can you make it more accessible or easier to use by more people? Rather than just being an AI researcher, which is obviously a very challenging and interesting field, but not one that's directly connected to the business. NVIDIA is trying to provide a fullstack approach to AI. So as we discover or see these AI technologies become available, we produce SDKs to help activate them or connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming, to design, to life sciences, to earth sciences.
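The dynamic batching Buck mentions a moment earlier is worth unpacking. Here is a deliberately simplified sketch of the idea (ours, not Triton's actual scheduler, which also handles timeouts, priorities, and multiple model instances): individually queued inference requests are grouped so that one batched forward pass serves many callers at once.

```python
from collections import deque

def batch_requests(queue, max_batch_size):
    """Drain up to max_batch_size queued requests into one batch."""
    batch = []
    while queue and len(batch) < max_batch_size:
        batch.append(queue.popleft())
    return batch

def run_inference(batch):
    # Stand-in for a single batched model forward pass on the GPU.
    return [x * 2 for x in batch]

queue = deque([1, 2, 3, 4, 5])   # five independent inference requests
results = []
while queue:
    results.append(run_inference(batch_requests(queue, max_batch_size=4)))
print(results)  # [[2, 4, 6, 8], [10]]
```

The batch cap is what lets a real server trade throughput against the latency budget Buck describes: a larger batch amortizes the GPU pass over more requests, while the cap (plus a queuing timeout, omitted here) keeps each caller within its deadline.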
Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of, like, human understanding. These language models are trained on literally the content of the internet to provide general-purpose or open-domain chatbots, so the customer is going to have a new kind of experience with the computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, Ian, every time I do an interview with NVIDIA or talk about NVIDIA, my kids and friends, the first thing they say is, "Can you get me a good graphics card?" They all want the best thing in their rig. Obviously the gaming market's hot and known for that. But there's a huge software team behind NVIDIA. This is well-known. Your CEO is always talking about it in his keynotes. You're in the software business. And you do have hardware, you are integrating with Graviton and other things. But it's a software practice. This is software. This is all about software. >> Right. >> Can you share, kind of, more about NVIDIA's culture and its cloud culture, and specifically around the scale, I mean, you hit every use case. So what's the software culture there at NVIDIA? >> Yeah, NVIDIA's actually bigger on software; we have more software people than hardware people. But people don't often realize this. In fact, it just starts with the chip, and obviously, building great silicon is necessary to provide that level of innovation. But it's expanded dramatically from there. Not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves, to help build out this infrastructure. We consume it and use it ourselves, and build our own supercomputers to use AI to improve our products.
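As an aside, the "language models trained on the content of the internet" that Buck describes all rest on one primitive: predict the next token from context. The toy bigram counter below (our illustration on a ten-word corpus; real LLMs learn billions of parameters over internet-scale text) shows only that interface, nothing of the scale.

```python
from collections import Counter, defaultdict
import random

# Tiny stand-in corpus; a real model trains on internet-scale text.
corpus = "the cloud runs ai the cloud trains ai the gpu trains models".split()

# Count which word follows which: the simplest possible "language model".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length, seed=0):
    """Sample a continuation, one next-token prediction at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = bigrams[out[-1]]
        if not options:
            break  # no observed continuation for this word
        out.append(rng.choices(list(options), weights=options.values())[0])
    return " ".join(out)

print(generate("the", 4))
```

An open-domain chatbot is the same loop at vastly greater scale: a learned distribution over next tokens, sampled repeatedly, conditioned on the conversation so far.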
And then, all that software that we build on top, we make it available, as I mentioned before, as containers on NGC, our container registry, which is accessible from AWS, to connect to those vertical markets. Instead of just opening up the hardware and letting the ecosystem develop on it, they can, with the low-level and programmatic stacks that we provide with CUDA. We believe that those vertical stacks are the ways we can help accelerate and advance AI. And that's why we make them so available. >> And programmable software is so much easier. I want to get that plug in, for I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out. Getting back to the customers who are bridging that gap and getting out there, what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and looking at how they're doing? >> Yeah. For training, it's all about time-to-solution. It's not the hardware that's the cost, it's the opportunity that AI can provide to your business, and the productivity of those data scientists who are developing the models, who are not easy to come by. So what we hear from customers is they need a fast time-to-solution to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course, move on to the next one or continue to refine it. >> John Furrier: Often. >> So in training, it's time-to-solution. For inference, it's about your ability to deploy at scale. Often people need to have real-time requirements. They want to run in a certain amount of latency, in a certain amount of time. And typically, most companies don't have a single AI model. They have a collection of them they want to run for a single service or across multiple services. That's where you can aggregate some of your infrastructure.
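Buck's "time-to-solution" point can be made with back-of-the-envelope arithmetic. The numbers below are entirely hypothetical; the shape of the argument is what matters: when a blocked data-science team costs more per hour than the hardware, the faster (even pricier) setup usually wins.

```python
def cost_of_solution(train_hours, gpu_rate, team_rate):
    """Total cost of one training run: compute plus the team waiting on it."""
    return train_hours * (gpu_rate + team_rate)

# Hypothetical rates in $/hour: cheap-but-slow GPUs vs. pricey-but-fast ones,
# with the same data-science team blocked either way.
slow = cost_of_solution(train_hours=100, gpu_rate=30, team_rate=200)
fast = cost_of_solution(train_hours=20, gpu_rate=90, team_rate=200)
print(slow, fast)  # 23000 5800 -- 3x the GPU rate, yet roughly 4x cheaper overall
```

Real accounting would add iteration count and opportunity cost, but the inversion — elapsed time mattering more than the hardware's price tag — is exactly the point being made.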
Leveraging the Triton inference server I mentioned before, you can actually run multiple models on a single GPU, saving costs and optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that our customers have a good interaction with the AI. >> Awesome. Great. Let's get into the customer examples. You guys have, obviously, great customers. Can you share some of the use case examples with notable customers? >> Yeah. One great part about working at NVIDIA is, as a technology company, you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now: Netflix is using the G4 instances to do video effects and animation content from anywhere in the world, in the cloud, as a cloud content-creation platform. We work in the energy field. Siemens Energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, preventing or optimizing onsite inspection activities and eliminating downtime, which is saving a lot of money for the energy industry. We have worked with Oxford University. Oxford University actually has over 20 million artifacts, specimens, and collections across its gardens and museums and libraries. They're actually using NVIDIA GPUs and Amazon to do enhanced image recognition to classify all these things, which would take literally years going through each of these artifacts manually. Using AI, we can quickly catalog all of them and connect them with their users. Great stories across graphics, across industries, across research; it's just so exciting to see what people are doing with our technology, together with Amazon. >> Ian, thank you so much for coming on theCUBE. I really appreciate it. A lot of great content there. We probably could go another hour. All the great stuff going on at NVIDIA. Any closing remarks you want to share, as we wrap this last minute up?
>> You know, really what NVIDIA's about is accelerating cloud computing. Whether it be AI, machine learning, graphics, or high-performance computing and simulation. And AWS was one of the first with this, in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities. The integrations with EC2, with SageMaker, with EKS, and ECS. The new instances with G5 and G5g. Very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of Accelerated Computing. I mean, how can you not love that title? We want more power, more faster, come on. More computing. No one's going to complain with more computing. Ian, thanks for coming on. >> Thank you. >> Appreciate it. I'm John Furrier, host of theCUBE. You're watching Amazon coverage of re:Invent 2021. Thanks for watching. (bright music)

Published Date : Nov 18 2021



Breaking Analysis with Dave Vellante: Intel, Too Strategic to Fail


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Intel's big announcement this week underscores the threat that the United States faces from China. The US needs to lead in semiconductor design and manufacturing, and that lead is slipping because Intel has been fumbling the ball over the past several years. A mere two months into the job, new CEO Pat Gelsinger wasted no time in setting a new course for perhaps the most strategically important American technology company. We believe that Gelsinger has only shown us part of his plan. This is the beginning of a long and highly complex journey. Despite Gelsinger's clear vision, his deep understanding of technology, and his execution ethos, in order to regain its number one position, Intel, we believe, will need help from partners, competitors and, very importantly, the US government. Hello everyone and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll peel the onion on Intel's announcement this week and explain why we're perhaps not as sanguine as Wall Street was on Intel's prospects. And we'll lay out what we think needs to take place for Intel to once again become top gun, and for us to gain more confidence. By the way, this is the first time we're broadcasting Breaking Analysis live. We're broadcasting on theCUBE handles on Twitch, Periscope and YouTube, and going forward we'll do this regularly as a live program, and we'll bring the community perspective into the conversation through chat. Now you may recall that in January, we kind of dismissed analysis that said Intel didn't have to make any major strategic changes to its business when it brought on Pat Gelsinger. Rather, we said the exact opposite. Our view at the time was that the root of Intel's problems could be traced to the fact that it was no longer the volume leader.
Because mobile volumes dwarf those of x86. As such, we said that Intel couldn't go up the learning curve for next-gen technologies as fast as its competitors, and that it needed to shed its dogma of being highly vertically integrated. We said Intel needed to more heavily leverage outsourced foundries. But more specifically, we suggested that in order for Intel to regain its volume lead, it needed to, we said at the time, spin out its manufacturing and create a joint venture with a volume leader, leveraging Intel's US manufacturing presence. This we still believe, with some slight refreshes to our thinking based on what Gelsinger has announced, and we'll talk about that today. Now specifically, there were three main pieces and a lot of details to Intel's announcement. Gelsinger made it clear that Intel is not giving up its IDM, or integrated device manufacturing, ethos. He called this IDM 2.0, which comprises Intel's internal manufacturing, leveraging external foundries, and creating a new business unit called Intel Foundry Services. "It's okay," Gelsinger said. "We are not giving up on integrated manufacturing." However, we think this is somewhat nuanced. Clearly Intel can't, won't, and shouldn't give up on IDM. However, we believe Intel is entering a new era where it's giving designers more choice. This was not explicitly stated; however, we feel like Intel's internal manufacturing arm will have increased pressure to serve its designers in a more competitive manner. We've already seen this with Intel finally embracing EUV, or extreme ultraviolet, lithography. Gelsinger basically said that Intel didn't lean into EUV early on, and that created more complexity in its 10-nanometer process, which dominoed into seven nanometer and, as you know, the rest of the story: Intel's delays. But since mid last year, it's embraced the technology. Now as a point of reference, Samsung started applying EUV to its seven-nanometer technology in 2018, and it began shipping in early 2020.
So as you can see, it takes years to get this technology into volume production. The point is that Intel realizes it needs to be more competitive, and we suspect it will give more freedom to designers to leverage outsourced manufacturing. But Gelsinger clearly signaled that IDM is not going away. The really big news, though, is that Intel is setting up a new division with a separate P&L that's going to report directly to Pat. Essentially it's hanging out a shingle and saying, we're open for business to make your chips. Intel is building two new fabs in Arizona and investing $20 billion as part of this initiative. Now, Intel has tried this before, earlier last decade; Gelsinger says that this time we're serious and we're going to do it right. We'll come back to that. This organizational move, while not a spin-out or a joint venture, is part of the recipe that we saw as necessary for Intel to be more competitive. Let's talk about why Intel is doing this. Look, lots has changed in the world of semiconductors. When you think about it, back when Pat was at Intel in the '90s, Intel was the volume leader. It crushed the competition with x86, and the competition at the time was coming from RISC chips. And when Apple changed the game with iPod and iPhone and iPad, the volume equation flipped to mobile, and that led to big changes in the industry. Specifically, the world started to separate design from manufacturing. We now see firms going from design to tape-out in 12 months versus taking three years. A good example is Tesla and its deal with ARM and Samsung. And what's happened is Intel has gone from number one in foundry, in terms of clock speed, wafer density, volume, lowest cost, and highest margin, to falling behind TSMC, Samsung, and alternative processor competitors like NVIDIA. Volume is still the maker of kings in this business. That hasn't changed, and it confers advantage in terms of cost, speed, and efficiency. But ARM wafer volumes, we estimate, are 10x those of x86.
That's a big change since Pat left Intel more than a decade ago. There's also a major chip shortage today. But this time it feels a little different than the typical semiconductor boom and bust cycles. Semiconductor consumption is entering a new era, with new use cases emerging from automobiles to factories, to every imaginable device and piece of equipment and infrastructure; silicon is everywhere. But the biggest threat of all is China. China wants to be self-sufficient in semiconductors by 2025. It's putting approximately $60 billion into new chip fabs, and there's more to come. China wants to be the new economic leader of the world, and semiconductors are critical to that goal. Now there are those who pooh-pooh the China threat. This recent article from Scott Foster lays out some really good information. But the one thing that caught our attention is a statement that China's semiconductor industry is nowhere near being a major competitor in the global market, let alone an existential threat to the international order and the American way of life. I think Scotty is stuck in the engine room and can't see the forest for the trees; wake up. Sure, you can say China is way behind. Let's take an example: NAND. Today China is at about 64 3D layers, whereas Micron is at 172. By 2022 China is going to be at 128, and Micron is going to be well over 200. So what's the big deal? We say talk to us in 2025, because we think China will be at parity. That's just one example. Now the type of thinking that says don't worry about China and semis reminds me of the epic lecture series that Clay Christensen gave as a visiting professor at Oxford University on the history and economics of the steel industry. Now if you haven't watched this series, you should. Basically Christensen took the audience through the dynamics of steel production. And he asked the question, "Who told the steel manufacturers that gross margin was the number one measure of profitability? Was it God?"
he joked. His point was, when new entrants came into the market in the '70s, they were bottom feeders going after the low margin, low quality, easiest to make rebar sector. And the incumbents happily pulled back; their mix shifted to higher margin products, their gross margins went up and life was good. Until they lost the next layer. And then the next, and then the next, until it was game over. Now, one of the things that got lost in Pat's big announcement on the 23rd of March was that Intel guided the street below consensus on revenue and earnings. But the stock went up the next day. Now, gross margin is a, if not the, key metric in semis in terms of measuring profitability. When asked about gross margin in the Q&A segment of the announcement, Intel CFO George Davis explained that with the uptick in PCs last year there was a product shift to the lower margin PC sector, and that put pressure on gross margins. It was a product mix thing. And revenue, because PC chips are less expensive than server chips, was affected, as were margins. Now we shared this chart in our last Intel update, showing spending momentum over time for Dell's laptop business from ETR. And you can see in the inset the unit growth and the market data from IDC. Yes, Dell's laptop business is growing; everybody's laptop business is growing. Thank you COVID. But you see the numbers from IDC, Gartner, et cetera. Now, as we pointed out last time, PC volumes peaked in 2011, and that's when the long arm of Wright's Law began to eat into Intel's dominance. Today ARM wafer production, as we said, is far greater than Intel's, and well, you know the story. Here's the irony: the very bucket that conferred volume advantages to Intel, PCs, yes, it had a slight uptick last year, which was great news for Dell. But according to Intel it pulled down its margins. The point is Intel is loving the high end of the market because it's higher margin and more profitable.
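Before moving on, that China NAND parity call is just compound growth arithmetic. Here's a minimal sketch: the 2021 starting layer counts come from the discussion above, but the annual growth rates are illustrative assumptions, not industry data.

```python
# Simple compounding model for 3D NAND layer counts.
# Starting points (China ~64 layers, Micron 172) are from the text above;
# the growth rates below are illustrative assumptions only.

def layers(start: int, annual_growth: float, years: int) -> int:
    """Project a layer count forward with constant compound growth."""
    return round(start * (1 + annual_growth) ** years)

# If the laggard doubles roughly every two years (~41% a year) while the
# leader grows ~10% a year, the gap closes four years out (2021 -> 2025).
print(layers(64, 0.41, 4), layers(172, 0.10, 4))
```

On those assumed rates the laggard and the incumbent land within a few layers of each other by 2025, which is the parity argument in miniature.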
I wonder what Clay Christensen would say to that. Now there's more to this story. Intel's CFO blamed supply constraints for Intel's revenue and profit pressures, yet AMD's revenue and profits are booming. So are TSMC's. Only Intel can't seem to thrive when there's this massive chip shortage. Now let's get back to Pat's announcement. Intel is, for sure, going forward with investing $20 billion in two new US-based fabrication facilities. This chart shows Intel's investments in US R&D, US CapEx and the job growth that's created as a result, as well as R&D and CapEx investments in Ireland and Israel. Now we added the bar on the right hand side from a Wall Street Journal article that compares TSMC CapEx, in the dark green, to that of Intel, in the light green. You can see TSMC surpassed the CapEx investment of Intel in 2015, then Intel took the lead back again in 2017, and it was neck and neck in 2018. But last year TSMC took the lead again, and it appears to be widening that lead quite substantially. Leading us to our conclusion that this will not be enough. These moves by Intel will not be enough. They need to do more. And a big part of this announcement was partnerships and packaging. Okay, so here's where it gets interesting. Intel, as you may know, was late to the party with SoC, system on a chip. And it's going to use its packaging prowess to try and leapfrog the competition. SoC bundles things like GPUs, NPUs, DSPs, accelerators and caches on a single chip, so better use of the real estate, if you will. Now Intel wants to build a system on package, which will dis-aggregate memory from compute. Now remember, today memory is very poorly utilized. What Intel is going to do is create a package with literally thousands of nodes comprising small processors, big processors, alternative processors, ARM processors and custom silicon, all sharing a pool of memory. This is a huge innovation and we'll come back to this in a moment.
Now as part of the announcement, Intel trotted out some big name customers, prospects and even competitors that it wants to turn into prospects and customers. Amazon, Google, Microsoft, where Satya Nadella gave a quick talk, Cisco. All those guys are designing their own chips, as does Ericsson, and look, even Qualcomm is on the list, a competitor. Intel wants to earn the right to make chips for these firms. Now many on the list, like Microsoft and Google, would be happy to do so because they want more competition. And Qualcomm, well look, if Intel can do a good job and be a strong second source, why not? Well, one reason not to is that they compete aggressively with Intel and maybe don't like Intel so much, but it's very possible. But the two most important partners on this slide are one, IBM, and two, the US government. Now many people are going to gloss over IBM in this announcement, but we think it's one of the most important pieces of the puzzle. Yes, IBM and semiconductors. IBM actually has some of the best semiconductor technology in the world. It's got great architecture and is two to three years ahead of Intel with POWER10. Yes, POWER. IBM is the world's leader in terms of dis-aggregating compute from memory, with the ability to scale to thousands of nodes, sound familiar? IBM leads in power density and efficiency, and it can put more stuff closer together. And it's looking now at a 20x increase in AI inference performance. We think Pat has been thinking about this for a while, and he said, how can I leapfrog system on chip? And we think he thought and said, I'll use our outstanding process manufacturing and I'll tap IBM as a partner for R&D and chip architecture to build the next generation of systems that are more flexible and performant than anything that's out there. Now look, this is super high end stuff. And guess who needs really high end, massive supercomputing capabilities? Well, the US military.
Pat said straight up, "We've talked to the government and we're honored to be competing for the government/military chips foundry." I mean, look, Intel in my view would have had to fall down on its face to not win this business. And by making the commitment to Foundry Services, we think they will get a huge contract from the government, as large perhaps as $10 billion or more, to build a secure government foundry and serve the military for decades to come. Now Pat was specifically asked in the Q&A section, is this Foundry strategy that you're embarking on viable without the help of the US government? Kind of implying that it was a handout or a bailout. And Pat of course said all the right things. He said, "This is the right thing for Intel. Independent of the government, we haven't received any commitment or subsidies or anything like that from the US government." Okay, cool. But they have had conversations, and I have no doubt, and Pat confirmed this, that those conversations were very, very positive and that Intel should head in this direction. Well, we know what's happening here. The US government wants Intel to win. It needs Intel to win, and its participation greatly increases the probability of success. But unfortunately, we still don't think it's enough for Intel to regain its number one position. Let's look at that in a little bit more detail. The headwinds for Intel are many. Look, it can't just flick a switch and catch up on manufacturing leadership. It's going to take four years. And lots can change in that time. Intel's market momentum, as we pointed out earlier, is headed in the wrong direction from a financial perspective. Moreover, where is the volume going to come from? It's going to take years for Intel to catch up to ARM's volumes, if it ever can. And it's going to have to fight to win that business from its current competitors. Now I have no doubt it will fight hard under Pat's excellent leadership. But the Foundry business is different.
Consider this: Intel's annual CapEx expenditures, if you divide them by its yearly revenue, come out to about 20% of revenue. TSMC spends 50% of its revenue each year on CapEx. This is a different animal, very service oriented. So look, we're not pounding the table saying Intel's worst days are over. We don't think they are. Now, there are some positives, and I'm showing those on the right-hand side. Pat Gelsinger was born for this job. He proved that the other day, even though we already knew it. I have never seen him more excited and more clearheaded. And we agree that the chip demand dynamic is going to have legs in this decade and beyond, with digital, edge, AI and new use cases that are going to power that demand. And Intel is too strategic to fail. And the US government has huge incentives to make sure that it succeeds. But it's still not enough in our opinion, because like the steel manufacturers, Intel's real advantage today is increasingly in the high end, high margin business. And without volume, China is going to win this battle. So we continue to believe that a new joint venture is going to emerge. Here's our prediction. We see a triumvirate emerging in a new joint venture that is led by Intel. It brings x86 and the volume associated with that. It brings cash, manufacturing prowess, R&D. It brings global resources, so much more than we show in this chart. IBM, as we laid out, brings architecture, its R&D, its longstanding relationships, its deal flow. It can funnel its business to the joint venture, as can, of course, parts of Intel. We see IBM getting a nice license deal from Intel and/or the JV. It has to get paid for its contribution, and we think it'll also get a sweet deal on the manufacturing fees from this Intel foundry. But it's still not enough to beat China. Intel needs volume. And that's where Samsung comes in. It has the volume with ARM, the experience and a complete offering across products.
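That 20% versus 50% CapEx-intensity gap is simple division. A quick sketch with round, illustrative figures chosen only for scale, not actual reported financials:

```python
# CapEx intensity = annual CapEx / annual revenue.
# Figures below are illustrative round numbers in billions of dollars,
# not either company's reported financials.

def capex_intensity(capex_b: float, revenue_b: float) -> float:
    """Fraction of revenue spent on capital expenditures."""
    return capex_b / revenue_b

intel = capex_intensity(15.0, 78.0)   # assumed ~$15B CapEx on ~$78B revenue
tsmc = capex_intensity(22.5, 45.0)    # assumed ~$22.5B CapEx on ~$45B revenue
print(f"Intel ~{intel:.0%} of revenue, TSMC ~{tsmc:.0%}")
```

The point of the exercise: a pure-play foundry plows roughly half of every revenue dollar back into plant and equipment, which is why the foundry business is "a different animal."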
We also think that South Korea is a more geographically appealing spot on the globe than Taiwan, with its proximity to China. Not to mention that TSMC doesn't need Intel. It's already number one. Intel can get a better deal from number two, Samsung. And together these three, we think, in this unique structure, could give it a chance to become number one by the end of the decade or early in the 2030s. We think what's happening, and this is our take, is that Intel is going to fight hard to win that government business, put itself in a stronger negotiating position and then cut a deal with some supplier. We think Samsung makes more sense than anybody else. Now finally, we want to leave you with some comments and some thoughts from the community. First, I want to thank David Floyer. His decade-plus of work and knowledge of this industry, along with his collaboration, made this work possible. His fingerprints are all over this research, in case you didn't notice. And next I want to share comments from two of my colleagues. The first is Sarbjeet Johal. He sent this to me last night. He said, "We are not in our grandfather's compute era anymore. Compute is getting spread into every aspect of our economy and lives. The use of processors is getting more and more specialized and will intensify with the rise in edge computing, AI inference and new workloads." Yes, I totally agree with Sarbjeet. And that's the dynamic on which Pat is betting, and betting big. But the bottom line is summed up by my friend and former IDC mentor, Dave Moschella. He says, "This is all about China. History suggests that there are very few second acts, you know, other than Microsoft and Apple. History also will say that the antitrust pressures that enabled AMD to thrive are the very ones that starved Intel's cash. Microsoft made the shift; its PC software cash cows proved impervious to competition.
The irony is the same government that attacked Intel's monopoly now wants to be Intel's protector because of China. Perhaps it's a cautionary tale to those who want to break up big tech." Wow, what more can I add to that? Okay, that's it for now. Remember I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts; all you've got to do is search for Breaking Analysis podcasts. And you can always connect with me on Twitter @dvellante, or email me at david.vellante@siliconangle.com. As always, I appreciate the comments on LinkedIn, and on Clubhouse please follow me so that you're notified when we start a room and start riffing on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time. (upbeat music)

Published Date : Mar 26 2021

SamsungORGANIZATION

0.99+

Dave MoschellaPERSON

0.99+

Pat GelsingerPERSON

0.99+

AppleORGANIZATION

0.99+

2015DATE

0.99+

CiscoORGANIZATION

0.99+

NVIDIAORGANIZATION

0.99+

Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

PatPERSON

0.99+

MicrosoftORGANIZATION

0.99+

GelsingerPERSON

0.99+

AmazonORGANIZATION

0.99+

TSMCORGANIZATION

0.99+

2011DATE

0.99+

JanuaryDATE

0.99+

2018DATE

0.99+

2025DATE

0.99+

IrelandLOCATION

0.99+

$10 billionQUANTITY

0.99+

$20 billionQUANTITY

0.99+

2017DATE

0.99+

twoQUANTITY

0.99+

QualcommORGANIZATION

0.99+

ArizonaLOCATION

0.99+

EricssonORGANIZATION

0.99+

Clay ChristensenPERSON

0.99+

IDCORGANIZATION

0.99+

three yearsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

GartnerORGANIZATION

0.99+

Clay ChristiansenPERSON

0.99+

DellORGANIZATION

0.99+

IsraelLOCATION

0.99+

David FoyerPERSON

0.99+

12 monthsQUANTITY

0.99+

IntelORGANIZATION

0.99+

ARMORGANIZATION

0.99+

last yearDATE

0.99+

ChristiansenPERSON

0.99+

10 nanometerQUANTITY

0.99+

AMDORGANIZATION

0.99+

FirstQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

20xQUANTITY

0.99+

Serbjeet JohalPERSON

0.99+

50%QUANTITY

0.99+

four yearsQUANTITY

0.99+

mid last yearDATE

0.99+

Anjanesh Babu, Oxford GLAM | On the Ground at AWS UK


 

(upbeat music) >> Welcome back to London everybody, this is Dave Vellante with theCUBE, the leader in tech coverage, and we're here at AWS. We wanted to go deeper on the public sector activity. We've been covering this segment for quite some time, with the Public Sector Summit in DC, we went to Bahrain last year, and we wanted to extend that to London. We're doing special coverage here with a number of public sector folks. Anjanesh Babu is here, he's a network manager at Oxford GLAM. Thanks very much for coming on theCUBE, it's good to see you. >> Thank you, thanks. >> GLAM, I love it. Gardens, libraries and museums, you even get the A in there, which everybody always leaves out. So tell us about Oxford GLAM. >> So we are part of the heritage collections side of the University. And I'm here representing the gardens and museums. In the divisions we've got world-renowned collections, which have been held for 400 years or more. It comprises four different museums and the Oxford University Botanic Garden and Arboretum. So in total, we're looking at five different divisions, spread across probably sixteen different physical sites. And the main focus of the division is to bring our collections to the world, through digital outreach and engagement, and being fun, bringing fun into the whole system. Sustainment is big, because we are basically custodians of our collections, and they have to be here almost forever, in a sense. And we can only display about 1% of our collections at any one point, and we've got about 8.5 million objects. So as you can imagine, the majority of that is in storage. So one way to bring this out to the wider world is to digitize them, curate them and present them, either online or in another form. So that is what we do. >> And your role as the network manager is to make sure everything connects and works and stays up? Or maybe describe that a little more.
>> So, I'm a systems architect and network manager for gardens and museums, so in my role my primary focus is to bridge the gap between the technical and the non-technical functions within the department. And I also look after network and infrastructure sites, so there are two parts to the role. One is a BAU, business as usual, function, where we keep the networks all going and keep the lights on, basically. The second part is bringing together designs. It's not just solving technical problems, so if I'm looking at a technical problem I step out and almost zoom out to see, what else are we looking at which could be connected, and solve the problem. For example, we could be looking at a web design solution in one part of the project, but it's not relevant just to that project. If you step out and say, we could do this in another part of the program, and we may be operating in silos and we want to break those down, that's part of my role as well. >> Okay, so you're technical, but you also speak the language of the organization and business. We put it in quotes because you're not a business per se. Okay, so you're digitizing all these artifacts and then making them available 24/7, is that the idea? What are some of the challenges there? >> So the first challenge is only 3% of objects are actually digitized. So we have 1% on display, 3% is actually digitized. It's a huge effort; it's not just scanning or taking photographs, you've got cataloging, accessions and a whole raft of databases that go behind it. And museums historically have got their own separate database collections, individually held in different collection systems, but as the public, you don't care, we don't care, we just need to look at the object. You don't want to see that it belongs to the Ashmolean Museum, or that the picture does. You just want to see it, and see what the characteristics are.
For that we are bringing together a layer which integrates the different museums; it sort of reflects what we're doing in our IT. The museums are culturally diverse institutions, and we want to keep them that way, because each has got its history, a kind of personality to it. Under the hood, the foundational architecture and systems remain the same, so we can make them modular, expandable and able to address the same problems. So that's how we are supporting this and making it more sustainable at the same time. >> So you have huge volume; quality is an issue because people want to see beautiful images. You've got all this metadata that you're collecting, you have a classification challenge. So how are you architecting this system, and what role does the Cloud play in there? >> So, in the first instance, a lot of collections were on premises in the past. We are moving to a SaaS solution as the first step. A lot of it requires cleansing of data, almost: this is the state of the images we are migrating, so we sort of stop here, cleanse it, create new data streams and then bring it to the Cloud. That's one option we are looking at, and that is the most important one. But during all this process, in the last three years with the GLAM digital program, there's been a huge amount of change. To have a static sort of golden image has been really crucial. And to do that, if we had gone down the route of on premise and trying to build out scale-out infrastructures, it would have had a huge cost. The first thing that I looked at was, explore the Cloud options, and I was interested in solutions like Snowball and the Storage Gateway. Straightforward: load up the data and it's on the Cloud, and then I can build out the infrastructure as much as I want, because we can all rest easy, the main, day one data is in the Cloud, and it's safe, and we can start working on the rest of it.
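The integration layer described a moment ago, many collection systems, one public view, can be pictured as thin adapters that map each museum's own record format into one shared schema. A hypothetical sketch; the field names and records here are invented for illustration, not taken from any actual GLAM system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectRecord:
    """A shared public-facing schema every museum record maps into."""
    object_id: str
    title: str
    museum: str
    image_url: Optional[str] = None

def from_ashmolean(raw: dict) -> ObjectRecord:
    # Hypothetical field names for one collection system.
    return ObjectRecord(raw["accession_no"], raw["object_name"],
                        "Ashmolean Museum", raw.get("img"))

def from_garden(raw: dict) -> ObjectRecord:
    # A different system, different field names, same target schema.
    return ObjectRecord(raw["id"], raw["label"],
                        "Oxford Botanic Garden", raw.get("photo"))

catalogue = [
    from_ashmolean({"accession_no": "AN-0001", "object_name": "Jewel"}),
    from_garden({"id": "BG-042", "label": "Wollemi pine"}),
]
print([r.museum for r in catalogue])
```

Each museum keeps its own "personality" and database; only the adapter layer is common, which matches the modular, expandable foundation described above.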
So it's almost like a transition mechanism, where we start working on the data before it goes to the Cloud anyway. And I'm also looking at a Cloud clearing house, because there's a lot of data exchange that's going to come up in the future, vendor to vendor, vendor to us and us to the public. So it sort of presents itself as a kind of junction, and who is going to fill the junction? I think the obvious answer is here. >> So Snowball or Gateway, basically you either Snowball or Gateway the assets into the Cloud, and you decide which one to use based on the size and the cost associated with doing that, is that right? >> Yes, and convenience. I was saying this the other day at another presentation: it's addictive, because it's so simple and straightforward to use. You just go back and say, it's taken me three days to transfer 30 terabytes into a Snowball appliance, and on the fourth day it appears in my buckets, so what are we missing? Nothing. Let's do it again next week. So you've got the Snowball for 10 days, bring it in, transfer, so it's much more straightforward than transferring it over the network, where you've got to keep an eye on things. Not that it's hard, but for example, the first workloads we transferred over to the file gateway, and there was a particular server which had problems getting things across the network because of an outdated OS on it. So we got the Snowball in, and in a matter of three days the data was on the Cloud. So in effect, every two weeks we load up the Snowball, bring it in, and in three days it goes back up on the Cloud. And it doesn't cost us any more to keep it there, so deletions are no longer an issue. We just keep it on the Cloud, shifting tiers using lifecycle policies, and it's straightforward and simple. That's pretty much it. >> Well, you understand physics, and the fastest way to get from here to there is a truck sometimes, right?
>> Well, literally it is one of the most efficient ways I've seen, and it continues to be so. >> Yeah, simple in concept, and it works. How much are you able to automate the end-to-end, the process that you're describing? >> At this point we have a few proofs of concept of different things that we can automate, but largely, because a lot of data is held across bespoke systems, it's quite disparate. We've got 30 terabytes spread across sixteen hard disks, that's one use case, in offices. We've got 22 terabytes, which I've just described, on a single server. We have 20 terabytes on another Windows server, so it's quite difficult to find common ground to automate it. As we move forward, automation is going to come in, because we are looking at common interfaces like API gateways and how to define those, and for that we have been inspired a lot by the GDS API designs; we are just calling those off, and it works. That is a road we are looking at, but at the moment we don't have much in the way of automation. >> Can you talk a bit more about sustainability? You've mentioned that a couple of times; double-click on that. What's the relevance, how are you achieving sustainability? Maybe you could give some examples. >> So in the past, sustainability meant that you buy a system and you over-provision it, so if you're looking for 20 terabytes over three years, let's go 50 terabytes. And something that's supposed to be here for three years gets kept going for five, and when it breaks the money comes in. So that was the kind of very brief way of sustaining things. That clearly wasn't enough, so in a way we are looking at sustainability as a new function: we don't need to look at long-term service contracts, we need to look at robust contracts, and having in place mechanisms to make sure that whatever data goes in comes out as well. So that was the main driver, and plus, with the Cloud we are looking at a leased model.
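Backing up a moment, the truck-versus-network trade-off from the Snowball discussion is easy to sanity-check with arithmetic. A sketch, assuming roughly 500 Mbps of sustained real-world throughput on a shared link; that rate is an assumption for illustration, not a figure from the interview:

```python
# Days to move a dataset over a network link (decimal units throughout).

def transfer_days(terabytes: float, gbps: float) -> float:
    bits = terabytes * 1e12 * 8        # TB -> bits
    seconds = bits / (gbps * 1e9)      # divide by link rate in bits/second
    return seconds / 86400             # seconds -> days

# 30 TB (the figure quoted above) at an assumed 0.5 Gbps sustained:
print(f"~{transfer_days(30, 0.5):.1f} days over the wire, "
      "versus about three days to fill and ship a Snowball")
```

And unlike a long-running network copy, the appliance transfer needs no babysitting of a flaky server or an outdated OS along the path.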
We've got an annual expenditure set aside and that keeps it going; sustainability is a lot about internal financial planning, and it's based on skill sets. With the Cloud, skill sets are really straightforward to find, and we have engaged with quite a few vendors who are partnering with us, and they work with us to deliver work packages. So in a way, even though we are getting there with the skills in terms of training our team, we don't need to worry about complex deployments, because we can outsource those in sprints. >> So you have shifted from a CapEx to an OpEx model, is that right? >> Yes. >> So what was that like? I mean, was that life changing, was it exhilarating? >> It was exhilarating, it was phenomenally life changing, because it set up a new direction within the university. We were the first division to go with the public Cloud and set up a contract, thanks to the G-Cloud 9 framework and a brilliant account management team from AWS. So we shifted from the CapEx model to the OpEx model with an understanding that all this would be considered a leased service. In the past you would buy an asset and it depreciates; that's no longer the case, this is a leased model. The data belongs to us, and it's straightforward. >> Amazon continues to innovate and you take advantage of those innovations, prices come down. How about performance in the Cloud, what are you seeing there relative to your past experiences? >> I wouldn't say it's any different, perhaps slightly better, because the new setup has got the benefit of super fast bandwidth to the internet. We've got 20 gigs as a whole, and we use about 2 gigs at the moment; we had 10 gig. We had to downgrade it because we didn't use that much. So from a bandwidth perspective that was the main thing. And from a performance perspective, what goes in the Cloud you frankly find no different, and if anything it's probably better.
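The lifecycle policies mentioned earlier in the interview are just declarative rules attached to a bucket: once Snowball lands the data in S3, objects transition to colder storage on a schedule instead of being deleted. A sketch of the shape of such a rule; the bucket and prefix names are hypothetical, and applying it would go through boto3's `put_bucket_lifecycle_configuration`:

```python
# Shape of an S3 lifecycle rule that moves objects to colder storage
# after ingest, rather than deleting them. Prefix name is hypothetical.

def archive_rule(prefix: str, days_to_glacier: int) -> dict:
    return {
        "ID": f"archive-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": days_to_glacier, "StorageClass": "GLACIER"},
        ],
    }

lifecycle = {"Rules": [archive_rule("digitised-masters/", 30)]}
print(lifecycle["Rules"][0]["ID"])

# Applying it (requires AWS credentials and a real bucket):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="glam-collections", LifecycleConfiguration=lifecycle)
```

Because the rules live in the bucket configuration, the "keep it forever, cheaply" behavior needs no scheduled jobs on the university's side, which fits the sustainability point above.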
>> Talk about security for a moment. Early on, people were concerned about security in the Cloud; that seems to have attenuated, but security in the Cloud is different, is it not? So talk about your security journey, and what's your impression, and share with our audience what you've learned. >> So we've had similar challenges with security. From a security standpoint I would say there are two pots: one is the contractual security and one is the technical security. On the contractual security, if we had spun up our own separate legal agreement with AWS or any other Cloud vendor, it would have taken us ages, but again we went to the Digital Marketplace, used the G-Cloud 9 framework, and it was a no-brainer. Within a week we had things turned around, and we were actually the first institution to go live with an account with AWS. So that is taken care of. SDS is a third party security assessment template, which we require all our vendors to sign. As soon as we went through that, it far exceeds what the SDS requires, and it's just a tick-box exercise. And things like data encryption at rest and in transit actually make it more secure than what we are running on premise. So in a way, technically it's far more secure than what we could ever have achieved on premise, and it's all taken care of, straightforward. >> So you have a small fraction of your artifacts today that are digitized. What's the vision, where do you want to take this? >> We're looking at, and I'm speaking on behalf of gardens and museums, this is not me per se, I'm speaking on behalf of my team, basically we are looking at a huge amount of digitization. The collection should be democratized, that's the whole aspect, bringing it out to the people and perhaps making them curators in some form. We may not be the experts for a massive collection from, say, North America or the Middle East; there are people who are better than us.
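Encryption at rest of the kind described in that answer is, in S3, a one-time bucket setting rather than anything the team has to build. A sketch of the configuration payload; the bucket name is hypothetical, and applying it would go through boto3's `put_bucket_encryption`:

```python
from typing import Optional

def default_encryption(kms_key_id: Optional[str] = None) -> dict:
    """Build an S3 ServerSideEncryptionConfiguration payload.

    With a KMS key it requests SSE-KMS; otherwise plain SSE-S3 (AES256).
    """
    default = ({"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": kms_key_id}
               if kms_key_id else {"SSEAlgorithm": "AES256"})
    return {"Rules": [{"ApplyServerSideEncryptionByDefault": default}]}

print(default_encryption()["Rules"][0])

# Applying it (requires AWS credentials and a real bucket):
# import boto3
# boto3.client("s3").put_bucket_encryption(
#     Bucket="glam-collections",
#     ServerSideEncryptionConfiguration=default_encryption())
```

Once set, every object written to the bucket is encrypted at rest by default, which is the "taken care of, straightforward" property being described.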
So we give them the freedom to make sure they can curate it in a secure, scalable manner, and that's where the Cloud comes in. And we backend it using authentication that works with us, logs that work with us and roll-back mechanisms that work with us. So that's where we are looking to go in the next few years. >> How would you do this without the Cloud? >> Oh. If you're doing it without the Cloud-- >> Could you do it? >> Yes, but we would be wholly and solely dependent on the University network, the University infrastructure and a single point. So when you're looking at the bandwidth, it's shared by students using the network out of the university and our collection visitors coming into the university. And the whole thing, the DNS infrastructure, everything's inside the university. It's not bad in its present state, but we need to look at a global audience: how do you scale it out, how do you balance it? And that's what we're looking at, and it would've been almost impossible to meet the goals that we have, and the aspirations, not to mention the cost. >> Okay, so you're going to be at the summit at the ExCeL Centre tomorrow, right? What are you looking forward to there, from a customer standpoint? >> I'm looking at service management, because a lot of our work, we've got a fantastic service desk and a fantastic team. So a lot of that is looking at service management, how to deliver effectively. As you rightly say, Amazon is huge on innovation and things keep changing constantly, so we need to keep track of how we deliver services, how we make ourselves more nimble and more agile to deliver those services and add value. If you look at the OSI stack, that's my favorite example: you've got seven layers going up from physical all the way to the application. You can almost read an organization in a similar way, so you've got a physical level where you've got cabling, and all the way up to the people and presentation layer.
So right now what we are doing is making sure we are focusing on the top level, focusing on strategy, creating strategies and delivering those, rather than looking out for things that break, looking out for things that operationally perhaps add value in another place. So that's where we would like to go. >> Anjanesh, thanks so much for coming on theCUBE. >> Thank you. >> It was a pleasure to have you. All right, and thank you for watching. Keep right there, we'll be back with our next guest right after this short break. You're watching theCUBE, from London at Amazon HQ, I call it HQ. We're here. Right back. (upbeat music)

Published Date : May 9 2019
