James Hamilton - AWS Re:Invent 2014 - theCUBE - #awsreinvent
(gentle, upbeat music) >> Live from the Sands Convention Center in Las Vegas, Nevada, it's theCUBE, at AWS re:Invent 2014. Brought to you by headline sponsors Amazon and Trend Micro. >> Okay, welcome back everyone, we are here live at Amazon Web Services re:Invent 2014, this is theCUBE, our flagship program, where we go out to the events and extract the signal from the noise. I'm John Furrier, the Founder of SiliconANGLE, I'm joined by my co-host Stu Miniman from wikibon.org, our next guest is James Hamilton, who is Vice President and Distinguished Engineer at Amazon Web Services, back again, second year in a row, he's a celebrity! Everyone wants his autograph, selfies, I just tweeted a picture with Stu, welcome back! >> Thank you very much! I can't believe this is a technology conference. (laughs) >> So Stu's falling over himself right now, because he's so happy you're here, and we are too, 'cause we really appreciate you taking the time to come on, I know you're super busy, you got sessions, but, always good to do a CUBE session on kind of what you're workin' on, certainly amazing progress you've made, we're really impressed with what you guys've done over this last year or two, but this year, the house was packed. Your talk was very well received. >> Cool. >> Every VC that I know in enterprise is here, and they're not tellin' everyone, there's a lot of stuff goin' on, the competitors are here, and you're up there in a whole new court, talk about the future. So, quickly summarize what you talked about in your session on the first day. What was the premise, what was the talk's objective, and what was some of the key content? >> Gotcha, gotcha. My big objective was the cloud really is fundamentally different, this is not another little bit of nomenclature, this is something that's fundamentally different, it's going to change the way our industry operates. 
And what I wanted to do was to step through a bunch of examples of innovations, and show how this really is different from how IT has been done for years gone by. >> So the data center obviously, we're getting quote after quote, obviously we're here at the Amazon show so the quotes tend to be skewed towards this statement, but, I'm not in the data center business seems to be the theme, and, people generally aren't in the data center business, they're doing a lot of other things, and they need the data centers to run their business. With that in mind, what are the new innovations that you see coming up, that you're working on, that you have in place, that're going to be that enabler for this new data center in the cloud? So that customers can say hey, you know, I just want to get all this baggage off my back, I just want to run my business in an agile and effective way. Is it the equipment, is it the software, is it the chips? What're you doing there from an innovation standpoint? >> Yeah, what I focused on this year, and I think it's a couple important areas, are networking, because there's big cost problems in networking, and we've done a lot of work in that area that we think is going to help customers a lot; the second one's databases, because databases, they're complicated, they're the core of all applications, when applications run into trouble, typically it's the database at the core of it, so those are the two areas I covered, and I think they're two of the most important areas we're working on right now. >> So James, we've looked back at people that've tried to do this services angle before, networking has been one of the bottlenecks, I think one of the reasons the xSPs failed in the '90s, it was networking and security, grid computing, even to today. So what is Amazon fundamentally doing different today, and why now is it acceptable that you can deliver services around the world from your environment? What's different about networking today? >> It's a good question. 
I think it's a combination of private links between all of the regions, every major region is privately linked today. That's better cost structure, better availability, lower latency. Scaling down to the data center level, we run all custom Amazon-designed gear, all custom Amazon-designed protocol stacks. And why is that important? It's because the cost of networking is actually climbing, relative to the rest of compute, and so, we need to do that in order to get costs under control and actually continue to be able to drive down costs. Second thing is customers need more networking-- more networking bandwidth per compute right now; East/West is the big focus of the industry, because more bandwidth is required, we need to invest more, fast, that's why we're doing private gear. >> Yeah, I mean, it's some fascinating statistics, it's not just bandwidth, you said you have up to 25 terabits per second between nodes, it's latency and jitter that are hugely important, especially when you go into databases. Can you talk about just architecturally, what you do with availability zones versus if I'm going to a Google or a Microsoft, what does differentiate you? >> It is a little bit different. The parts that are the same are: every big enterprise that needs highly available applications is going to run those applications across multiple data centers. The way our system works is you choose the region to get close to your users, or to get close to your customers, or to be within a jurisdictional boundary. Down below the region, normally what's in a region is a data center, and customers usually are replicating between two regions. What's different in the Amazon solution is we have availability zones within a region; each availability zone is actually at least one data center. Because we have multiple data centers inside the same region, it enables customers to do realtime, synchronous replication between those data centers. 
And so if they choose to, they can run multi-region replication just like most high-end applications do today, or they can run, within an AZ-- synchronous replication to multiple data centers. The advantage of that is there's less administrative complexity, and if there's a failure, you never lose a transaction, where in multi-region replication, it has to be asynchronous because of the speed of light. >> Yeah, you-- >> Also, there's some jurisdictional benefits too, right? Say Germany, for instance, with a new data center. >> Yep. Yeah, many customers want to keep their data in region, and so that's another reason why you don't necessarily want to replicate it out in order to get that level of redundancy, you want to have multiple data centers in region. 100% correct. >> So, how much is it that you drive your entire stack yourself that allows you to do this? I think about replication solutions, you used SRDF as an example. I worked on that, I worked for EMC for 10 years, and just doing a two-site replication is challenging, >> It's hard. >> A multi-site is different, you guys, with six data centers and availability zones in a region, fundamentally have a different way of handling replication. >> We do, the strategy inside Amazon is to say multi-region replication is great, but because of the latency between regions, they're a long way apart, and the reality of the speed of light, you can't run synchronous. If data centers are relatively close together in the same region, the replication can be done synchronously, and what that means is if there's a failure anywhere, you lose no transactions. >> Yeah. So, there was a great line you had in your session yesterday, that networking has been anti-Moore's law when it comes to pricing. Amazon is such a big player, everybody watches what you do, you buy from the ODMs, you're changing the supply chain. What's your vision as to where networking needs to go from a supply chain and equipment standpoint? 
>> Networking needs to go the same place servers went 20 years ago, and that is: it needs to be on a Moore's law curve where, as we get more and more transistors on a chip, we should get lower and lower costs in a server, and we should get lower and lower costs in a network. Today, the ASIC, which is the core of the router, is always around the same price. Each generation we add more ports to that, and so effectively we've got a Moore's law price improvement happening where that ASIC stays the same price, you just keep adding ports. >> So, I got to jump in and ask ya about Open Compute, last year you said it's good I guess, I'm a fan, but we do our own thing, still the case? >> Yeah, absolutely. >> Still the case, okay, doing your own thing, and just watching Open Compute, which is like a fair for geeks. >> Open Compute's very cool, the thing is, what's happening in our industry right now is hyper-specialization, instead of buying general purpose hardware that's good for a large number of customers, we're buying hardware that's targeted to a specific workload, a specific service, and so, we're not--I love what happens with Open Compute, 'cause you can learn from it, it's really good stuff, but it's not what we use; we want to target our workloads precisely. >> Yeah, that was actually the title of the article I wrote from everything I learned from you last year: hyper-specialization is your secret sauce, so. You also said earlier this week that we should watch the mobile suppliers, and that's where servers should be in the future, but I heard a, somebody sent me a quote from you that said: unfortunately ARM is not moving quite fast enough to keep up with where Intel's going, where do you see, I know you're a fan of some of the chip manufacturers, where's that moving? 
>> What I meant with watch ARM and understanding where servers are going, sorry, not ARM, watch mobile and understand where servers are going is: power became important in mobile, power becomes important in servers. Most functionality is being pulled up on chip in mobile, and the same thing's happening in server land, and so-- >> What you're sayin' is mobile's a predictor >> Predicting. >> of the trends in the data center, >> Exactly, exactly right. >> Because of the challenges with the form factor. >> It's not so much the form factor, but the importance of power, and the importance of, of, well, density is important as well, so, it turns out mobile tends to be a few years ahead, but all the same kinds of innovations that show up there, we end up finding them in servers a few years later. >> Alright, so James, we at Wikibon have a strong background in the storage world, and David Floyer, our CTO, said: one of the biggest challenges we had with databases is they were designed to respond to disk, and therefore there were certain kinds of logging mechanisms in place. >> It's a good point. >> Can you talk a little bit about what you've done at Amazon with Aurora, and why you're fundamentally changing the underlying storage for that? >> Yeah, Aurora is applying modern database technology to the new world, and the new world is: SSDs at the base, and multiple availability zones available, and so if you look closely at Aurora you'll see that the storage engine is actually spread over multiple availability zones, and, as was mentioned in the keynote, it's a log-structured store. Log-structured stores work very, very nicely on SSDs; they're not wonderful choices on spinning magnetic media. So, what we're optimized for is SSDs, and we're not running it on spinning disk at all. 
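[Editor's illustration] The design Hamilton describes, an append-only, log-structured store whose writes are replicated synchronously across multiple availability zones, can be sketched with a toy model. This is a hypothetical sketch for illustration only, not Aurora's actual implementation; the class and variable names (`Replica`, `LogStructuredStore`) are invented for the example.

```python
# Toy model: an append-only, log-structured key-value store that
# synchronously replicates every write to several "availability zones".
# Sequential appends are the SSD-friendly access pattern Hamilton cites;
# updates never overwrite in place, they append a new record.

class Replica:
    """One copy of the log, standing in for storage in a single AZ."""
    def __init__(self, az_name):
        self.az_name = az_name
        self.log = []            # append-only list of (key, value) records

    def append(self, record):
        self.log.append(record)  # sequential append, no in-place update
        return True              # acknowledge the write


class LogStructuredStore:
    """Commits are acknowledged only after every AZ replica has appended,
    so no acknowledged transaction is lost if a single AZ fails."""
    def __init__(self, az_names):
        self.replicas = [Replica(name) for name in az_names]
        self.index = {}          # key -> log position of the latest record

    def put(self, key, value):
        record = (key, value)
        # Synchronous replication: wait for an ack from every AZ.
        acks = [r.append(record) for r in self.replicas]
        if all(acks):
            self.index[key] = len(self.replicas[0].log) - 1
            return True
        return False

    def get(self, key):
        pos = self.index.get(key)
        return None if pos is None else self.replicas[0].log[pos][1]


store = LogStructuredStore(["az-a", "az-b", "az-c"])
store.put("user:1", "alice")
store.put("user:1", "bob")   # the old record stays in the log; the index moves
print(store.get("user:1"))   # -> bob
```

Cross-region replication, by contrast, would have to acknowledge the commit before the remote copies land (asynchronously), which is exactly the window in which a failure can lose a transaction.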
>> So I got to ask you about the questions we're seeing in the crowd, so you guys are obviously doing great on the scale side, you've got the availability zones, which makes a lot of sense, certainly the Germany announcement, with the whole Ireland/EU data governance thing, and also expansion is great. But the government is moving fast into some enterprises, >> It's amazing. >> And so, we were talking about that last night, but people out there are sayin' that's great, it's a private cloud, the government's implementing a private cloud, so you agree, that's a private cloud or is that a public-- >> (laughing) It's not a private cloud; if you see Amazon involved, it's not a private cloud. Our view of what we're good at, and the advantages cloud brings to market are: we run a very large fleet of servers in every region, we provide a standard set of services in all those regions, it's completely different than packaged software. What the CIA has is another AWS region, it happens to be on their site, but it is just another AWS region, and that's the way they want it. >> Well people are going to start using that against you guys, so start parsing, well if it's private, it's only them, then it's private, but there's some technicalities, you're clarifying that. >> It's definitely not a private cloud, and the reason why we're not going to get involved with doing private clouds is: product software is different, it's inefficient, when you deliver to thousands of customers, you can't make some of the optimizations that we make. Because we run the same thing everywhere, we actually have a much more reliable product, we're innovating more quickly, we just think it's a different world. 
>> So James, you've talked a lot about how scale fundamentally changes the way you architect and build things; Amazon's now got over a million customers, and it's got so many services, just adding more and more. Wikibon, actually Dave Vellante, wrote a post yesterday that said: we're trying to fundamentally change the economic model for enterprise IT, so that services are now like software; when Microsoft would print an extra disk it didn't cost anything. When you're building your environment, is there more strain on your environment for adding that next thousand customers or that next big service or, did it just, do you have the substrate built that's going to help it grow for the future? >> It's a good question, it varies by service. Usually what happens is we get better year over year over year, and what we find is, once you get a service to scale, like S3 is definitely at scale, then growth, I won't say it's easy, but it's easier to predict because you're already on a large base, and we already know how to do it fairly well. Other services require a lot more thought on how to grow them, and end up being a lot more difficult. >> So I got some more questions for ya, going on to some of the personal questions I want to ask you. Looking at this booth right here, it's the Netflix guys right there, I love that service, awesome founder, just what they do, just a great company, and I know they're a big customer. But you mentioned networks, so at the Google conference we went to, Google's got some chops, they have a developer community rockin' and rollin', and then it's pretty obvious what they're doin', they're not tryin' to compete with Amazon because it's too much work, but they're goin' after the front end developer, Rails, whatnot, PHP, and really nailing the back end transport, you see it appearing, really going after enabling a Netflix, these next generation companies, to have the backbone, and not be reliant on third party networks. 
So I got to ask you, as someone who's a tinkerer, a mechanic if you will of the large scale stuff, you got to get rid of that middleman on the network. What's your plans, you going to do peering? Google's obviously telegraphing they're comin' down that road. Do you guys meet their objective? Same product, better, what's your strategy? >> Yeah, it's a great question. The reason why we're running private links between our regions is the same reason that Google is: it's lower cost, that's good, it's much, much lower latency, that's really good, and it's a lot less jitter, and that's extremely important, and so it's private links, peering, customers direct connecting, that's all the reality of a modern cloud. >> And you see that, and do you have to build that in? Almost like you want to build your own chips, I'd imagine on the mobile side with the phone, you can see that, everyone's building their own chips. You got to have your own network stuff. Is that where you guys see the most improvement on the network side? Getting down to that precise, hyper-specialized level? >> We're not doing our own chips today in the networking world, and we don't see that as being a requirement. What we do see as a requirement is: we're buying our own ASICs, we're doing our own designs, we're building our own protocol stack; that's delivering great value, and that is what's deployed; private networking's deployed in all of our data centers now. >> Yeah, I mean, James, I wonder, you must look at Google, they do have an impressive network, they've got the undersea cables, is there anything you look at them and say: we need to move forward and catch up to them in certain pieces of the network? >> I don't think so, I think when you look at any of the big providers, they're all mature enough that they're doing, at that level, I think what we do has to be kind of similar. If private links are a better solution, then we're all going to do it, I mean. 
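[Editor's illustration] Hamilton's point that private links buy "a lot less jitter", not just lower latency, is worth making concrete: two paths can have similar averages while one is far less predictable, and databases care about the tail. A small sketch; the latency samples below are made up for the example, not measurements of any real network.

```python
# Compare two hypothetical network paths on mean latency, jitter
# (reported here as the standard deviation of latency), and worst case.
import statistics

# Invented sample round-trip times in milliseconds.
shared_path_ms  = [31, 30, 95, 29, 33, 120, 30, 28, 88, 31]  # occasional spikes
private_link_ms = [22, 23, 22, 24, 23, 22, 23, 22, 24, 23]   # steady

def summarize(samples):
    """Summarize a list of latency samples (milliseconds)."""
    return {
        "mean_ms":   statistics.mean(samples),
        "jitter_ms": statistics.stdev(samples),
        "worst_ms":  max(samples),
    }

print("shared path: ", summarize(shared_path_ms))
print("private link:", summarize(private_link_ms))
```

The means differ by roughly 2x, but the jitter differs by more than an order of magnitude, which is the property synchronous, cross-AZ replication depends on.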
>> It makes a lot of sense, 'cause the impact of inspection, throttling traffic, that just creates uncertainty, so. I'm a big fan, obviously, of that direction. Alright, now a personal question. So, in talking to your wife last night, getting to know you over the years here, and Stu is obviously a big fan. There's a huge new generation of engineers coming into the market, Open Compute, I bring that up because it's such a great initiative, you guys obviously have your own business reasons to do your own stuff, I get that. But there's a whole new culture of engineering coming out, a new homebrew computer club is out there forming right now, my young son makes his own machines, assembling stuff. So, you're an inspiration to that whole group, so I would like you to share just some commentary with this new generation: what to do, how to approach things, what you've learned, how do you come out on top of failure, how do you resolve that, how do you always grow? So, share some personal perspective. >> Yeah, it's an interesting question. >> I know you're humble, but, yeah. >> Interesting question. I think being curious is the most important thing possible; if anybody ever gets an opportunity to meet somebody that's at the top of any business, a heart surgeon, a jet engine designer, an auto mechanic, anyone that's at the top of their business is always worth meeting 'cause you can always learn from them. One of the cool things that I find with my job is: because it spans so many different areas, it's amazing how often I'll pick up a tidbit one day talking to an expert sailor, and the next day be able to apply that tidbit, or that idea, solving problems in the cloud. >> So don't just look at your narrow focus, your advice is: talk to people who are pros, in whatever their field is, there's always a nugget. >> James, a friend of mine >> Stay curious! 
>> Steve Todd, he actually called that Venn diagram innovation, where you need to find all of those different pieces, 'cause you're never going to know where you'll find the next idea. So, for the networking guys, there's a huge army of CCIEs out there, some have predicted that if you have the title administrator in your name, that you might be out of a job in five years. What do you recommend, what should they be training on, what should they be working toward to move forward into this new world? >> The history of computing is one of the level of abstraction going up; never has it been the case that those jobs go away, the only time jobs have ever gone away is when someone stayed at a level of abstraction that just wasn't really where the focus is. We need people taking care of systems; as the abstraction level goes up, there's still complexity, and so, my recommendation is: keep learning, just keep learning. >> Alright, so I got to ask you, the big picture now, ecosystems out here, Oracle, IBM, these big incumbents, are looking at Amazon, scratching their heads sayin': it's hard for us to change our business to compete. Obviously you guys are pretty clear in your positioning, what's next, outside of the current situation, what do you look at that needs to be built out, besides the network, that you see coming around the corner? And you don't have to reveal any secrets, just, philosophically, what's your vision there? >> I think our strategy is maybe a little bit, definitely a little bit different from some of the existing, old-school providers. One is: everyone's kind of used to, Amazon passes on value to customers. We tend to be always hunting and innovating and trying to lower costs, and passing on the value to customers, that's one thing. Second one is choice. 
I personally choose to run MySQL because I like the product, I think it's very good value; some of our customers want to run Oracle, some of our customers want to run MySQL, and we're absolutely fine doing that; some people want to run SQL Server. And so, the things that kind of differentiate us are: enterprise software hasn't dropped prices, ever, and that's just the way it's been. Enterprise software is not about choice; we're all about choice. And so I think those are the two big differences, and I think those ones might last. >> Yeah, that's a good way to look at that. Now, back to the IT guy, let's talk about the CIO. Scratchin' his head sayin': okay, I got this facilities budget, and it's kind of the-- I talked to one CIO, he says: I spend more time planning meetings around facilities, power, and cooling, than anything else on innovation, so. They have challenges here, so what's your advice, as someone who's been through a lot of engineering, a lot of large scale, to that team of people on power and cooling to really kind of go to the next level, and besides just saying okay, throw some pods out there, or whatnot, what should they be doing, what's their roadmap? >> You mean the roadmap for doing a better job of running their facilities? >> Yeah, well there's always pressure for density, there's-- power's a sacred (laughs) sacred resource right now, I mean power is everything, power's the new oil, so, power's driving everything, so, they have to optimize for that, but you can't generate more power, and space, so, they want smaller spaces, and more efficiency. >> The biggest gains that are happening right now, and the biggest innovations that have been happening over the last five years in data centers, are mostly around mechanical systems, and driving down the cost of cooling, so that's one area. Second one is: if you look closely at servers you'll see that as density goes up, the complexity and density of cooling them goes up. 
And so, getting designs that are optimized for running at higher temperatures, and certified for higher temperatures, is another good step, and we do both. >> So, James, there's such a diverse ecosystem here, I wonder if you've had a chance to look around? Anything cool outside of what Amazon is doing? Whether it's a partner, some startup, or some interesting idea that's caught your attention at the show. >> In fact I was meeting with Western--pardon me, Hitachi Data Systems about three days ago, and they were describing some work that was done by Cycle Computing, and several hundred thousand cores-- >> We've had Cycle-- >> Jason came on. >> Oh, wow! >> Last year, we, he was a great guest. >> No, he was here too, just today! >> Oh, we got him on? Okay. >> So Hitachi's just, is showing me some of what they gained from this work, and then he showed me his bill, and it was five thousand six hundred and some dollars, for running this phenomenally big, multi-hundred-thousand-core project, blew me away, I think that's phenomenal, just phenomenal work. >> James, I really appreciate you coming on, Stu and I are really glad you took the time to spend with our audience and come on theCUBE, again a great, pleasurable conversation, very knowledgeable. Stay curious, and get those nuggets of information, and keep us informed. Thanks for coming on theCUBE. James Hamilton, Distinguished Engineer at Amazon, doing some great work, and again, the future's all about making it smaller, faster, cheaper, and passing those savings on; you guys have a great strategy, a lot of your fans are here, customers, and other engineers. So thanks for spending time, this is theCUBE, I'm John Furrier with Stu Miniman, we'll be right back after this short break. (soft harmonic bells)