Riadh Dridi, Automation Anywhere | CUBE Conversation February 2020
(upbeat music) >> Hi, and welcome to theCUBE, the leading source for insights into the world of technology and innovation. I'm your host, Donald Klein, and today's topic is the exploding software segment of Robotic Process Automation, where Automation Anywhere is one of the leading providers. To have that conversation today, I'm joined by Riadh Dridi, CMO of Automation Anywhere. Welcome to the show, Riadh. >> Thank you for having me. >> Great, okay so, look, you're relatively new to Automation Anywhere, is that correct? >> Yes, I've been there for about six months now. >> Excellent, so why don't you talk a little bit about your background and how you came to the world of RPA. >> Yes, so I've been in the IT industry for about 20 years, in the hardware space, the software space, and the cloud space more recently. So when I heard about Automation Anywhere in the RPA space, I did my due diligence and found out how fast this technology was catching on in enterprises. I got really, really excited, then met the management team and got even more excited, and ended up, you know, taking the job. >> Well, congratulations. >> Thank you. >> It's an exploding segment, for sure. Why don't you talk to us a little bit about what you see happening in this market and how fast it's growing. >> Yeah, so there are many studies out there, and of course we have our own internal data, but the market right now, according to Gartner, is growing about 63% year over year. It's the fastest-growing enterprise software market in the industry right now and is projected to continue to grow at that pace for the foreseeable future. >> Okay, and let's talk about, sort of for people who are not that familiar with RPA. It's obviously an acronym that's being, you know, tossed around a lot, but talk to us about Robotic Process Automation and how you define that category.
>> Right, so that was one of the challenges early on: trying to put a label on this segment, which is really about automating processes end-to-end as much as possible. The RPA category is where, you know, some of the analysts decided to focus, and what it does is really allow businesses to deploy software robots against business processes so that the process can be handled by bots instead of humans. The mundane, repetitive tasks that humans do as part of the end-to-end process, whether it's an order-to-cash process or a procure-to-pay process, frankly, any business process tasks that humans should not be doing, because humans are better suited to more creative work. That's where, you know, bots came into play, and the whole category was named Robotic Process Automation because the robots are taking the place of the humans in terms of process automation. >> Got it, okay, so (mumbles) of the bots, so creating bots, right, and what's kind of fascinating about this world is that, you know, for customers that deploy this type of solution, right, they're growing a whole library of bots, right (mumbles). Maybe just walk us through an example bot and what a bot does and why this technology is so unique. >> Right, so think about, first of all, the problem that those bots are solving, right? So today you have ERP applications, CRM applications, any sort of applications in businesses to really automate a process, like I said, an order-to-cash process or a procure-to-pay process. That's why people have bought the technology, but what the industry has realized, after twenty years or more of using the same technology, is that humans were still doing parts of the process that should have been automated by the software.
So when you look at the average enterprise, only 20% of the steps that should be automated are automated; 80% of it is done by humans, whether it's opening files, reading documents, cutting and pasting, filling out forms, you know, playing with Excel and loading data into systems, data entry, a lot of it is still done by humans. So what the bots do is go in and take that work away from the humans so they can really focus on better tasks. That's really what it is. >> And so, just so everybody's kind of clear, what's really so intelligent about these capabilities, right, take something like invoices, right? Any company, you know, receiving lots and lots of invoices, all these invoices are going to be formatted in different ways. >> Right. >> Correct? >> Right. >> And historically it's been up to a human to kind of look through that invoice, pull out the relevant pieces of information, right, and enter that into the system so that the system can then issue the PO or pay the PO, et cetera, right? >> Exactly. >> But what your bots can do, or what the space as a whole can do, right, is intelligently scan these documents, look for the key pieces of information, and actually load those into the system, correct? >> That's exactly right. So what the bots are doing now with computer vision, they're able to look into applications, they're able to assess the data, they're able to extract the information from that data and then process it like humans would do. So they're able to, again, get in, look at invoices or, frankly, any type of unstructured or semi-structured data, take that data, analyze it, and then manipulate it like a human would do. >> Excellent. >> The exception is that they are, obviously, doing it 24/7, much faster, with fewer errors. >> Got it, right. So you're turning people who previously may have been focused on kind of a data entry task, right, into kind of managing a process, right? >> Exactly.
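To make the invoice example concrete, here is a minimal sketch of the kind of field-extraction step such a bot performs. This is not Automation Anywhere's code; the invoice layout, field names, and patterns are all illustrative assumptions, and a real product would use computer vision and trained models rather than hand-written regular expressions.

```python
import re

# A hypothetical semi-structured invoice, as a bot might see it after OCR.
# Real invoices vary in layout and labels from vendor to vendor.
INVOICE = """
ACME Supplies Ltd.
Invoice No: INV-20391
Date: 2020-02-11
PO Number: PO-7741
Total Due: $4,250.00
"""

# Illustrative patterns for the fields we want to capture.
PATTERNS = {
    "invoice_number": r"Invoice No:\s*(\S+)",
    "po_number": r"PO Number:\s*(\S+)",
    "total": r"Total Due:\s*\$([\d,]+\.\d{2})",
}

def extract_fields(text):
    """Return a dict of every field we could recognize in the text."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            fields[name] = match.group(1)
    return fields

print(extract_fields(INVOICE))
# {'invoice_number': 'INV-20391', 'po_number': 'PO-7741', 'total': '4,250.00'}
```

The extracted dictionary is what would then be posted into the ERP or payment system, which is the hand-off step the bots automate end-to-end.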
So basically, what we like to say is we are taking the robot out of the humans and giving that work to the robots, who are supposed to be doing it. >> Excellent. >> And that's kind of phase one, and then phase two is obviously making those robots more intelligent, so that they're not only able to do the simplest of tasks, but start to be a little bit more intelligent and use AI to do things that are a little bit more advanced and more complicated. >> Okay, excellent. So look, you guys have got some news, right? >> Yup. >> You've kind of just come out with a big new release of your platform. Why don't you just kind of talk us through what the news is and what you guys have released? >> Yeah, so if you think about what the space has done so far, it's taking a process, usually a known process, like I said, an order-to-cash, or even a simpler process, right? Then looking at the different steps and tasks that people have to do, and saying, let's now automate those tasks in that particular process. A lot of the time is spent on trying to figure out the process. I don't know about your company, but I know in a lot of companies that I've been at, a lot of processes are not documented. So what we announced yesterday is a bot, we call it Discovery Bot, that allows us to discover the processes that people work with. So if you're, again, an agent or a knowledge worker in an organization, you're going through a certain number of steps. The bot is going to basically analyze all those different steps, map the process, allow you to understand the flow that you're going through, and let you know that if you automate those repetitive tasks within your process, you're going to be able to save a certain amount of time and energy and have a better process in place. And then the cool thing about what we announced yesterday, and this is unique in the industry today, is the ability to create bots automatically from analyzing that process.
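The discovery idea described above can be sketched in a few lines: record a worker's steps, count step-to-step transitions, and surface the most frequently repeated run as an automation candidate. This is only an illustration of the concept, not how Discovery Bot is actually implemented; the event log and step names are invented.

```python
from collections import Counter

# Hypothetical recorded event log for one knowledge worker.
steps = [
    "open_email", "download_attachment", "open_erp", "paste_data", "save",
    "open_email", "download_attachment", "open_erp", "paste_data", "save",
    "open_email", "reply",
    "open_email", "download_attachment", "open_erp", "paste_data", "save",
]

def transition_counts(events):
    """Count how often each step follows another: a crude process map."""
    return Counter(zip(events, events[1:]))

def most_common_run(events, length):
    """Most frequently repeated sub-sequence of a given length --
    a candidate for turning into a bot."""
    runs = Counter(
        tuple(events[i:i + length]) for i in range(len(events) - length + 1)
    )
    return runs.most_common(1)[0]

run, count = most_common_run(steps, 5)
print(count, "x", " -> ".join(run))
# 3 x open_email -> download_attachment -> open_erp -> paste_data -> save
```

Once a run like this is identified and mapped, generating a bot for it automatically is the step the announcement claims is unique.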
So again, the industry has matured into analyzing processes manually, or using certain tools, but then the work had to be done on a different platform to actually create the bots from those processes. We're the only provider today that can analyze processes with the tool and then create the bots automatically, shrinking the time for process automation end-to-end. >> Fantastic. >> Okay, and now, but also part of this release, too, right, is your kind of cloud capabilities. You've really ramped up your ability to scale for the largest customers. Talk to us a little about how the application functions in the cloud and how it functions on-prem. How does that all work end-to-end? >> Right, so back in November we announced a new platform called Enterprise A2019. This was the first cloud-native, web-based platform in the industry. And the reason cloud-native is important is because it's what gives you the benefits in terms of scaling, in terms of TCO, in terms of ease of use, and that platform is now the core platform for the company. So the product announcement we had yesterday allows our customers to use the same platform, except now we add this Discovery Bot at the front-end to discover the processes, prioritize them, and then use the platform we've announced to automate these processes. What's very interesting about the platform is that customers can use it on-prem or in the cloud. The customers, obviously, that decide to use it in the cloud will have the ability to learn more from the platform because, you know, it's going to tackle a lot more data in the cloud. Then we're going to be able to use lots of data analysis tools to get customers to extract knowledge from it and then innovate in a much faster way. The people who are going to be using it on-prem, typically, are regulated industries or customers who have systems of record that are, typically, on-prem, and they would like the bots to run where the systems are.
So the platform is available in the cloud. It's available on-prem. It's the customer's choice how to use it, but the innovation that's baked into it is what's really exciting. >> So this is kind of, I think, a fundamental point maybe people should understand, right? This is kind of a brave new world, right? You're saying it's a cloud-native app, right, which is now ready to be used on-prem, right? >> Right. >> As opposed to maybe the older world where people developed applications that were primarily based on kind of a server architecture within the firewall, right? >> Exactly. >> And then they tried to migrate it to the cloud? >> Exactly. >> So in some sense, you've done the reverse. >> Exactly. So if you were to build an application today, knowing, you know, microservices architecture, knowing Java, knowing web-based delivery, that's how you would build it. And so the fact that we've built the architecture for a modern application and then offer customers the option to use it either on-prem or in the cloud is what we've done. >> Got it, great. Okay, so then what's the advantage of being able to use, so you've got this application that can scale with microservices, right? It can handle the volume that a Fortune 500 company might need. What's the advantage for them being able to do it on-prem? How does that help? >> So for some customers, it's really about regulated industries. For example, if you're a bank, or if you're a healthcare institution, the data cannot travel through the cloud. So systems of record, whether it's a CRM, whether it's an HRM or some other system of record, an ERP, usually will be on-prem, and the data cannot travel through the cloud. So for these customers, we're saying, use the product on-prem, you have the same benefit. It's still the cloud architecture, microservices-based. It's still web-based as far as the client interface is concerned.
It's the lowest TCO you can get, but you don't have to move to the cloud if that's what you decide. >> So, in terms of enabling digital transformation, really the requirement here is to be able to enable that both in the cloud and on-prem, and to do it simultaneously. >> Correct, and again, some customers will do a hybrid of both and say, for these workflows we'll run in the cloud, and these we'll keep on-prem. Some customers in regulated industries will say, we don't want to do anything in the cloud, we want everything on-prem. They'll have the choice to do that. >> Understood, okay, well look, final question here. Let's talk about some of the upcoming events that Automation Anywhere has going on, right? You do events all across the globe, you're now a global company. Tell us what's happening on that front. >> Yeah, so we do lots of events, you know, 'cause our customers are global. We have customers in 90 countries, we have offices in 45 countries, and so we have to go where our customers are. So we have four large conferences throughout the year, one upcoming in London, and we have them in Vegas, in Tokyo, and in Bangalore as well. It's the largest gathering of RPA minds and experts in the industry today. So what's exciting about the one that's coming up is, obviously, Discovery Bot is going to be featured at that conference. People will be able to play with the product, they'll be able to understand, you know, the latest innovations from Automation Anywhere. We have sessions called Build-a-Bot where people will be able to build their bots on-site, and that's always a popular thing for people to do. And then we're going to have some amazing speakers and top leaders who will help customers understand, you know, what's happening in digital transformation and how intelligent automation can accelerate that transformation.
>> Okay, great, and so just to understand the timing of it, you've got a show coming up in London in the very near future here, is that right? >> Yes, I believe it's in April, and then we have another one in May in Las Vegas. >> Okay, so then the big one in North America is going to be Vegas this year? >> Correct, correct, it's in May. >> Okay, great. And then, what about the, so then you also talked about Bangalore, talk about -- >> Yeah, Bangalore, I don't have all the dates in my head, so I apologize, but I think Bangalore is, I believe, in August or September, and then Tokyo, I believe, is in June, so I'll have to confirm all those dates -- >> But one of the unique things, right, is that the Bangalore show has actually been one of your largest shows of the year. >> It's been amazing. So I literally missed that show by one week when I joined the company. I was super excited about having the ability to go visit the customers and the partners at the show. I think last year they had 6,000 people, so it's an amazing opportunity this year to go see it first-hand. I don't know what the audience is going to be like, I'm assuming it's going to be more than 6,000, but feeling the energy and the excitement from attendees is what I'm really looking forward to. >> Well, that just shows, right, that the software industry, particularly the cloud-enabled software industry, is now a global industry, right? >> It is, it is, absolutely, because again, cloud allows those barriers to entry for companies, wherever they are, to be lowered, and customers in different regions can have the latest and greatest directly from the cloud and use the product, you know, when it comes out, and so that's, obviously, a super big advantage. The other thing I should be (mumbles) if I didn't say, you know, because it's also available in the cloud, and it's web-based, it's easy to use, easy to access, a lot of our first-time customers are business users.
They're not even IT people, so they just go in, start playing with the product, you know, automating a few processes, and then start to scale end-to-end, and then of course they build the COE and IT gets involved. So being able to start your automation journey small, and then grow as you scale, from any part of the world, is really what this opportunity gives us. >> Okay, well, thank you for your time today, Riadh. I'm fascinated by everything you guys are doing. Super hot category. For those folks out there that want to touch base with Automation Anywhere, there are shows in London, Vegas, Bangalore, and then where was the fourth one? >> I think Tokyo -- >> Tokyo. >> And then Bangalore after that, yes. >> Okay, fantastic. >> Yes. >> Thanks for joining us today. This is Donald Klein, I'm the host of theCUBE. I'll see you next time. (upbeat music)
Madhu Matta, Lenovo & Dr. Daniel Gruner, SciNet | Lenovo Transform 2018
>> Live from New York City, it's theCUBE. Covering Lenovo Transform 2.0. Brought to you by Lenovo. >> Welcome back to theCUBE's live coverage of Lenovo Transform. I'm your host, Rebecca Knight, along with my co-host, Stu Miniman. We're joined by Madhu Matta, the VP and GM of High Performance Computing and Artificial Intelligence at Lenovo, and Dr. Daniel Gruner, the CTO of SciNet at the University of Toronto. Thanks so much for coming on the show, gentlemen. >> Thank you for having us. >> Our pleasure. >> So, before the cameras were rolling, you were talking about the Lenovo mission in this area: to use the power of supercomputing to help solve some of society's most pressing challenges, and that is climate change and curing cancer. Can you talk a little bit, tell our viewers a little bit, about what you do and how you see your mission? >> Yeah, so our tagline is basically solving humanity's greatest challenges. We're also now the number one supercomputer provider in the world, as measured by the rankings of the Top500, and that comes with a lot of responsibility. One, we take that responsibility very seriously, but more importantly, we work with some of the largest research institutions and universities all over the world as they do research, and it's amazing research. Whether it's particle physics, like you saw this morning, whether it's cancer research, whether it's climate modeling. I mean, we are sitting here in New York City, and our headquarters is in Raleigh, right in the path of Hurricane Florence, so the ability to predict the next anomaly, the ability to predict the next hurricane, is absolutely critical to get early warning signs, and a lot of survival depends on that. So we work with these institutions jointly to develop custom solutions to ensure that all this research is, one, powered and, second, works seamlessly, and that all their researchers have access to this infrastructure twenty-four seven. >> So Danny, tell us a little bit about SciNet, too.
Tell us what you do, and then I want to hear how you work together. >> And, no relation with Skynet, I've been assured? Right? >> No. Not at all. There's also no relationship with another network that's called the same, but it doesn't matter. SciNet is an organization that's basically the University of Toronto and the associated research hospitals, and we happen to run Canada's largest supercomputer. We're one of a number of compute sites around Canada that are tasked with providing resources and support, support being the most important, to academia in Canada. So, all academics from all the different universities in the country come and use our systems. From the University of Toronto, they can also go and use the other systems; it doesn't matter. Our mission is, as I said, we provide a system, or a number of systems, and we run them, but we're really about helping the researchers do their research. We're all scientists. All the guys that work with me, we were all scientists initially. We turned to computers because that was the way to do our research. You cannot do astrophysics other than observationally and computationally, nothing else. Climate science is the same story: you have so much data and so much modeling to do that you need a very large computer and, of course, very good algorithms and very careful physics modeling for an extremely complex system, but ultimately it needs a lot of horsepower to be able to do even a single simulation. So, what I was showing with Madhu at that booth earlier were results of a simulation that was done just prior to us going into production with our Lenovo system, where people were doing ocean circulation calculations. The ocean is obviously part of the big Earth system, which is part of the climate system as well.
But, they took a small patch of the ocean, a few kilometers in size in each direction, and did it at very, very high resolution, even vertically, going down to the bottom of the ocean so that the topography of the ocean floor can be taken into account. That allows you to see, at a much smaller scale, the onset of tides, the onset of micro-tides that allow water to mix, the cold water from the bottom and the hot water from the top; the mixing of nutrients, how life goes on, the whole cycle. It's super important. Now that, of course, gets coupled with the atmosphere and with the ice and with the radiation from the sun and all that stuff. That calculation was run by a group from, the main guy was from JPL in California, and he was running on 48,000 cores. Single runs at 48,000 cores for about two to three weeks produced a petabyte of data, which is still being analyzed. That's the kind of resolution that's been enabled... >> Scale. >> It gives you a sense of just exactly... >> That's the scale. >> By a system the size of the one we have. It was not possible to do that in Canada before this system. >> I tell you both, when I lived on the vendor side and as an analyst, talking to labs and universities, you love geeking out. Because first of all, you always have a need for newer, faster things, because the example you just gave is like, "Oh wait." "If I can get the next-generation chipset." "If the networking can be improved." You know you can take that petabyte of data and process it so much faster. >> If I could only get more money to buy a bigger one. >> We've talked to the people at CERN and JPL and things like that. - Yeah. >> And it's like, this is where most companies are: it's like, yeah, it's a little bit better, and it might make things a little better and make things nice, but no, this is critical to move the research along.
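For a rough sense of the data rates behind those numbers, a back-of-the-envelope sketch (round numbers assumed, not exact figures from the run):

```python
# ~1 PB written over roughly three weeks on 48,000 cores.
petabyte = 1e15              # bytes (decimal petabyte)
seconds = 3 * 7 * 24 * 3600  # three weeks
cores = 48_000

aggregate_mb_s = petabyte / seconds / 1e6        # sustained average, MB/s
per_core_kb_s = petabyte / seconds / cores / 1e3

print(f"aggregate ~{aggregate_mb_s:.0f} MB/s, per core ~{per_core_kb_s:.1f} kB/s")
# aggregate ~551 MB/s, per core ~11.5 kB/s
```

Sustaining over half a gigabyte per second of writes for weeks is a storage problem as much as a compute one, which is part of why the I/O criteria come up in the procurement discussion that follows.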
So talk a little bit more about the infrastructure and what you look for and how that connects to the research and how you help close that gap over time. >> Before you go, I just want to highlight a point that Danny made on solving humanity's greatest challenges, which is our motto. He talked about the data analysis that he just did, where they are looking at the surface of the ocean as well as going down, what is it, 264 nautical layers underneath the ocean? To analyze that much data, to start looking at marine life and protecting marine life. As you start to understand that level of nautical depth, they can start to figure out the nutrient values and other contents that are in that water, to be able to start protecting the marine life. There again, another of humanity's greatest challenges right there that he's giving you... >> Nothing happens in isolation; it's all interconnected. >> Yeah. >> When you finally get a grant and you're able to buy a computer, how do you buy the computer that's going to give you the most bang for your buck? The best computer to do the science that we're all tasked with doing? It's tough, right? We don't fancy ourselves as computer architects; we engage the computer companies, who really know about architecture, to help us do it. The way we did our procurement was, "Okay, vendors, we have a set pot of money, we're willing to spend every last penny of this money, you give us the biggest and the baddest for our money." Now, it has to meet a certain set of criteria. You have to be able to run a number of benchmarks, some sample calculations that we provided. The ones that give you the best performance, that's a bonus. It also has to be able to do it with the least amount of power, so we don't have to heat up the world and pay through the nose for power. Those are objective criteria that anybody can understand. But then, there's also the other criteria: how well will it run? How is it architected? How balanced is it?
Did we get the I/O subsystem for all the storage, the one that actually meets the criteria? What other extras do we have that will help us make the system run in a much smoother way, and for a wide variety of disciplines, because we run the biologists together with the physicists and the engineers and the humanitarians, the humanities people. Everybody uses the system. To make a long story short, the proposal that we got from Lenovo won the bid, both in terms of the hardware we got and the way it was put together, which was quite innovative. >> Yeah. >> I want to hear about, you said give us the biggest, the baddest, we're willing to empty our coffers for this, so then where do you go from there? How closely do you work with SciNet, how does the relationship evolve, and do you work together to innovate and kind of keep going? >> Yeah. I see it not as a segment or a division. I see High Performance Computing as a practice, and with any practice, it's many pieces that come together; you have a conductor, you have the orchestra, but at the end of the day the delivery of all those systems is the concert. That's the way to look at it. To deliver this, our practice starts with multiple teams; one's a benchmarking team that understands the application that Dr. Gruner and SciNet will be running, because they need to tune the performance of the cluster to the application. The second team is a set of solution architects that are deep engineers and understand our portfolio. Those two work together to say, against this application, "Let's build," like he said, "the biggest, baddest, best-performing solution for that particular application." So, those two teams work together. Then we have the third team that kicks in once we win the business, which is coming on site to deploy, manage, and install. When Dr.
Gruner talks about the infrastructure, it's a combination of hardware and software that all comes together, and the software is open-source based; we built it ourselves because we just felt there weren't the right tools in the industry to manage this level of infrastructure at that scale. All this comes together to essentially rack and roll onto their site. >> Let me just add to that. It's not like we went for it in a vacuum. We had already talked to the vendors, we always do. You always go, and they come to you, "when's your next money coming," and it's a dog and pony show. They tell you what they have. With Lenovo, at least the team, as we know it now, used to be the IBM System x team who built our previous system. A lot of these guys were already known to us, and we've always interacted very well with them. They were already aware of our thinking, where we were going, and that we're also open to suggestions for things that are non-conventional. Now, this can backfire; some data centers are very square, they will only prescribe what they want. We're not prescriptive at all, we said, "Give us ideas about what can make this work better." These are the intangibles in a procurement process. You also have to believe in the team. If you don't know the team or if you don't know their track record, then that's a no-no, right? Or, it takes points away. >> We brought innovations like DragonFly, which Dr. Dan will talk about, as well as, for the first time, Excelero, which is a software-defined storage vendor and was a smart part of the bid. We were able to flex muscles and be more creative versus just the standard. >> My understanding, you've been using water cooling for about a decade now, maybe? - Yes. >> Maybe you could give us a little bit about your experiences, how it's matured over time, and then Madhu will talk and bring us up to speed on Project Neptune. >>
Our first procurement about 10 years ago, again, that was the model we came up with. After years of wracking our brains, we could not decide how to build a data center and what computers to buy, it was like a chicken and egg process. We ended up saying, 'Okay, this is what we're going to do. Here's the money, here's is our total cost of operation that we can support." That included the power bill, the water, the maintenance, the whole works. So much can be used for infrastructure, and the rest is for the operational part. We said to the vendors, "You guys do the work. We want, again, the biggest and the baddest that we can operate within this budget." So, obviously, it has to be energy efficient, among other things. We couldn't design a data center and then put in the systems that we didn't know existed or vice-versa. That's how it started. The initial design was built by IBM, and they designed the data center for us to use water cooling for everything. They put rear door heat exchanges on the racks as a means of avoiding the use of blowing air and trying to contain the air which is less efficient, the air, and is also much more difficult. You can flow water very efficiently. You open the door of one of these racks. >> It's amazing. >> And it's hot air coming out, but you take the heat, right there in-situ, you remove it through a radiator. It's just like your car radiator. >> Car radiator. >> It works very well. Now, it would be nice if we could do even better by doing the hot water cooling and all that, but we're not in a university environment, we're in a strip mall out in the boonies, so we couldn't reuse the heat. Places like LRZ they're reusing the heat produced by the computers to heat their buildings. >> Wow. >> Or, if we're by a hospital, that always needs hot water, then we could have done it. 
But it's really interesting how, with that design, we ended up with the most efficient data center, certainly in Canada, and one of the most efficient in North America, 10 years ago. Our PUE was 1.16, that was the design point, and this is not with direct water cooling through the chip. >> Right. Right. >> All right, bring us up to speed. Project Neptune, in general? >> Yes, so Neptune, as the name suggests, is the name of the god of the sea, and we chose that to brand our entire suite of liquid cooling products. The liquid cooling suite is end-to-end in the sense that it's not just hardware, but also software. The other key part of Neptune is that a lot of these, in fact most of these, products were built not in a vacuum, but designed and built in conjunction with key partners like the Barcelona Supercomputing Center and LRZ in Munich, Germany. These were real-life customers working with us jointly to design these products. Very simplistically put, Neptune is an entire suite of hardware and software that allows you to run very high-performance processors at a level of power and cooling utilization as if you were using a much lower-power processor, in terms of the heat it dissipates. The other key part is, you know, the normal way of cooling anything is to run chilled water; we don't use chilled water, so you save the money of chillers. We use ambient-temperature water, up to 50 degrees, at 90% efficiency: 50-degree water goes in, 60-degree water comes out. It's really amazing, the entire suite. >> That's 50 Celsius, not Fahrenheit. >> It's Celsius, correct. >> Oh. >> Dr. Gruner talked about SciNet with the rear-door heat exchanger. You actually have to stand in front of it to feel the magic of this, right? As geeky as that is. You open the door and it's this hot 60-, 65-degree C air. You close the door and it's this cool 20-degree air that's coming out.
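The PUE figure quoted above is simply total facility power divided by IT equipment power, so 1.16 means only 16% overhead for cooling, power delivery, and everything else. A minimal sketch of the arithmetic, with illustrative load numbers that are not from the interview:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.16 means a 16% overhead beyond the IT load itself.
# The kW figures below are made up for illustration, not SciNet measurements.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of all power entering the facility to power reaching the IT gear."""
    return total_facility_kw / it_equipment_kw

# Example: a 1,000 kW IT load in a facility drawing 1,160 kW in total
print(f"PUE = {pue(1160.0, 1000.0):.2f}")  # PUE = 1.16
```

For comparison, a traditional chilled-air data center often lands well above this ratio, which is the efficiency case for rear-door and direct water cooling made in the interview.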
So, the costs of running a data center drop dramatically with either the rear-door heat exchanger, our direct-to-node product, the SE650, which we just released, or what we call the thermal transfer module, which replaces a normal heat sink and brings water-cooling goodness to an air-cooled product. >> Danny, I wonder if you can give us the final word, just on climate science in general. How's the community doing? Any technological things that are holding us back right now, or anything that excites you about the research right now? >> Technology holds you back through the sheer size of the calculations that you need to do, but it's also physics that holds you back. >> Yes. Because doing the actual modeling is very difficult, and you have to be able to believe that the physics models actually work. This is one of the interesting things that Dick Peltier, who happens to be our scientific director and is also one of the top climate scientists in the world, has proven through some of his calculations: the models are actually pretty good. The models were designed for current conditions, with current data, so that they would reproduce the evolution of the climate that we can measure today. Now, what about climate that started happening 10,000 years ago, right? The climate has been going on forever and ever. There have been glaciations; there have been all these events. It turns out that it has been recorded in history that there are some oscillations in temperature and other quantities that happen about every 1,000 years, and nobody had been able to prove why they would happen. It turns out that if you take the same models that we use for climate calculations today and do what's called paleoclimate, where you start by approximating the conditions that happened 10,000 years ago and then move forward, they reproduce those oscillations exactly. It's very encouraging that the climate models actually make sense.
We're not talking in a vacuum. We're not predicting the end of the world just because. These calculations are right. They're correct. They're predicting that the temperature of the earth is climbing, and it's true, we're seeing it, but it will continue unless we do something. Right? It's extremely interesting. Now he's beginning to apply those results of the paleoclimate to studies with anthropologists and archeologists. We're trying to understand the events that happened in the Levant, in the Middle East, thousands of years ago and correlate them with climate events. Now, is that cool or what? >> That's very cool. >> So, I think humanity's greatest challenge is again to... >> I know! >> He just added global warming to it. >> You have a fun job. You have a fun job. >> It's all the interdisciplinarity that has now been made possible. Before, we couldn't do this. Ten years ago we couldn't run those calculations; now we can. So it's really cool. - Amazing. Great. Well, Madhu, Danny, thank you so much for coming on the show. >> Thank you for having us. >> It was really fun talking to you. >> Thanks. >> I'm Rebecca Knight for Stu Miniman. We will have more from Lenovo Transform just after this. (tech music)