

Armughan Ahmad, Dell EMC | Super Computing 2017


 

>> Announcer: From Denver, Colorado, it's theCUBE, covering Super Computing 17. Brought to you by Intel. (soft electronic music) Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're gettin' towards the end of the day here at Super Computing 2017 in Denver, Colorado. 12,000 people talkin' really about the outer limits of what you can do with compute power and lookin' out into the universe and black holes and all kinds of exciting stuff. We're kind of bringin' it back, right? We're all about democratization of technology for people to solve real problems. We're really excited to have our last guest of the day, bringin' the energy, Armughan Ahmad. He's SVP and GM, Hybrid Cloud and Ready Solutions for Dell EMC, and a many-time CUBE alum. Armughan, great to see you. >> Yeah, good to see you, Jeff. >> So, first off, just impressions of the show. 12,000 people, we had no idea. We've never been to this show before. This is great. >> This is a show that has been around. If you know the history of the show, this was an IEEE engineering show that actually turned into high-performance computing around research-based analytics and other things that came out of it. But, it's just grown. Yesterday, the Top500 supercomputing list was released here. So, it's fascinating. You have some of the brightest minds in the world that actually come to this event. 12,000 of them. >> Yeah, and Dell EMC is here in force, so a lot of announcements, a lot of excitement. What are you guys excited about participating in this type of show? >> Yeah, Jeff, so when we come to an event like this, HPC-- We know that HPC has also evolved from your traditional HPC, which was around modeling and simulation, and how it started from engineering to then clusters. It's now evolving more towards machine learning, deep learning, and artificial intelligence. So, what we announced here-- Yesterday, our press release went out. 
It was really related to how our strategy of advancing HPC, but also democratizing HPC, is working. So, on the advancing side, the Top500 supercomputing list came out. We're powering some of the top 500 of those. One big one is TACC, the Texas Advanced Computing Center at the University of Texas. They now have, I believe, the number 12 spot in the top 500 supercomputers in the world, running at 8.2 petaflops of compute. >> So, a lot of zeros. I have no idea what a petaflop is. >> It's very, very big. It's very big. It's available for machine learning, but also eventually going to be available for deep learning. But, more importantly, we're also moving towards democratizing HPC, because we feel that democratizing is also very important, where HPC should not only be for research and academia, but it should also be focused towards the manufacturing customers, the financial customers, our commercial customers, so that they can actually take the complexity of HPC out, and that's where our-- We call it our HPC 2.0 strategy, of learning from the advancements that we continue to drive, to then also democratizing it for our customers. >> It's interesting, I think, back to the old days of Intel microprocessors getting better and better and better, and you had SPARC and you had Silicon Graphics, and these things that were way better. This huge differentiation. But, the Intel IA-32 just kept pluggin' along, and it really begs the question, where is the distinction now? You have huge clusters of computers you can put together with virtualization. Where is the difference between just a really big cluster and HPC and super computing? >> So, I think, if you look at HPC, HPC is also evolving, so let's look at the customer view, right? So, the other part of our announcement here was artificial intelligence, which is really, what is artificial intelligence? If you look at a customer, a retailer, a retailer has-- They start with data, for example. 
You buy beer and chips at J's Retailer, for example. You come in and do that, you usually used to run a SQL database or an RDBMS database, and then that would basically tell you, these are the people who can purchase from me. You know their purchase history. But, then you evolved into BI, and then if that data got really, very large, you then had an HPC cluster, which would basically analyze a lot of that data for you, and show you trends and things. That would then tell you, you know what, these are my customers, this is how frequently they come back. But, now it's moving more towards machine learning and deep learning as well. So, as the data gets larger and larger, we're seeing data sets becoming larger, not just from social media, but from your traditional computational frameworks, your traditional applications and others. We're finding that data is also growing at the edge, so by 2020, about 20 billion devices are going to wake up at the edge and start generating data. So, now, Internet data is going to look very small over the next three, four years, as the edge data comes up. So, you actually need to now start thinking of machine learning and deep learning a lot more. So, you asked the question, how do you see that evolving? So, you see a traditional RDBMS with SQL evolving to BI. BI then evolves into either HPC or Hadoop. Then, from HPC and Hadoop, what do you do next? What you do next is you start to now feed predictive analytics into machine learning kinds of solutions, and then once those predictive analytics are there, then you really, truly start thinking about the full deep learning frameworks. >> Right, well, and clearly, like, the data in motion. I think it's funny, we used to make decisions on a sample of data in the past. Now, we have the opportunity to take all the data in real time and make those decisions with Kafka and Spark and Flink and all these crazy systems that are comin' to play. 
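The progression Armughan describes, from descriptive SQL-era queries to predictive models, can be sketched in a few lines of Python. The data and the scoring weights below are invented for illustration; a real pipeline would train the model rather than hard-code it.

```python
# Sketch of the RDBMS -> BI -> ML progression described above.
# All data and weights are invented; a real pipeline would learn them.
purchases = [
    {"customer": "alice", "item": "chips", "visits": 12},
    {"customer": "alice", "item": "beer", "visits": 12},
    {"customer": "bob", "item": "chips", "visits": 2},
    {"customer": "carol", "item": "4k tv", "visits": 7},
]

# Descriptive (RDBMS/BI era): who already buys from me, and how often?
frequent = sorted({p["customer"] for p in purchases if p["visits"] >= 5})

# Predictive (ML era): score the likelihood of a future purchase from
# simple features; here the weights are made up rather than learned.
def score(visits, bought_before):
    return 0.05 * visits + (0.4 if bought_before else 0.0)

likely_buyers = sorted(
    {p["customer"] for p in purchases if score(p["visits"], True) > 0.9}
)

print(frequent)       # customers we can already see: ['alice', 'carol']
print(likely_buyers)  # customers the model expects back: ['alice']
```

The descriptive step only reports the past; the predictive step ranks what is likely to happen next, which is the shift from BI to machine learning that the interview is describing.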
Makes Hadoop look ancient, tired, and yesterday, right? But, it's still valid, right? >> A lot of customers are still paying. Customers are using it, and that's where we feel we need to simplify the complex for our customers. That's why we announced our Machine Learning Ready Bundle and our Deep Learning Ready Bundle. We announced it with Intel and Nvidia together, because we feel like our customers either go the GPU route, which is your accelerator route. We announced-- You were talking to Ravi, from our server team, earlier, where he talked about the C4140, which has the quad-GPU power, and it's perfect for deep learning. But, with Intel, we've also worked on the same, where we worked on the AI software with Intel. Why are we doing all of this? We're saying that if you thought that RDBMS was difficult, and if you thought that building a Hadoop cluster or HPC was a little challenging and time consuming, as the customers move to machine learning and deep learning, you now have to think about the whole stack. So, let me explain the stack to you. You think of a compute, storage, and network stack, then you think of-- >> The whole entirety. >> Yeah, that's right, the whole entirety of our data center. Then you talk about our-- These frameworks, like Theano, Caffe, TensorFlow, right? These are new frameworks. They are machine learning and deep learning frameworks. They're open source and others. Then you go to libraries. Then you go to accelerators, which accelerator you choose, then you go to your operating systems. Now, you haven't even talked about your use case. Retail use case or genomic sequencing use case. All you're trying to do is now figure out if TensorFlow works with this accelerator or does not work with this accelerator. Or, do Caffe and Theano work with this operating system or not? And, that is a complexity that is way more complex. 
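The stack-compatibility headache described here is essentially a lookup problem: before any retail or genomics use case is even discussed, someone has to verify that each framework runs on the chosen accelerator and operating system. A toy sketch, where every entry is hypothetical rather than a real support matrix:

```python
# Toy compatibility matrix for framework/accelerator pairings.
# All entries are hypothetical, not vendor support statements.
SUPPORT = {
    ("tensorflow", "gpu"): True,
    ("tensorflow", "cpu"): True,
    ("caffe", "gpu"): True,
    ("caffe", "fpga"): False,
    ("theano", "cpu"): True,
}

def stack_ok(framework, accelerator):
    """Return True only if the pairing is known to work."""
    return SUPPORT.get((framework, accelerator), False)

print(stack_ok("tensorflow", "gpu"))  # True
print(stack_ok("caffe", "fpga"))      # False: back to the drawing board
```

Multiply this check across frameworks, libraries, accelerators, and operating systems and the combinatorics explain why pre-validated bundles are attractive: the vendor does the matrix once so each customer doesn't have to.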
So, that's where we felt that we really needed to launch these new solutions, and we prelaunched them here at Super Computing, because we feel the evolution of HPC towards AI is happening. We're going to start shipping these Ready Bundles for machine learning and deep learning in the first half of 2018. >> So, that's what the Ready Solutions are? You're basically putting the solution together for the client, then they can start-- You work together to build the application to fix whatever it is they're trying to do. >> That's exactly it. But, not just fix it. It's an outcome. So, I'm going to go back to the retailer. So, if you are the CEO of the biggest retailer and you are saying, hey, I just don't want to know who buys from me, I want to now do predictive analytics, which is not just who buys chips and beer, but who can I sell more things to, right? So, you now start thinking about demographic data. You start thinking about payroll data and other data that surround-- You start feeding that data into it, so your machine now starts to learn a lot more of those frameworks, and then can actually give you predictive analytics. But, imagine a day where you actually-- The machine or the deep learning AI actually tells you that it's not just who you want to sell chips and beer to, it's who's going to buy the 4K TV. >> You're makin' a lot of presumptions. >> Well, there you go, and the 4K-- But, I'm glad you're doin' the 4K TV. So, that's important, right? That is where our customers need to understand how predictive analytics are going to move towards cognitive analytics. So, this is complex, but we're trying to make that complex simple with these Ready Solutions for machine learning and deep learning. >> So, I want to just get your take on-- You've kind of talked about these three things a couple times, how you delineate between AI, machine learning, and deep learning. >> So, as I said, there is an evolution. 
I don't think a customer can achieve artificial intelligence unless they go through the whole crawl, walk, run phases. There are no shortcuts there, right? What do you do? So, if you think about it, Mastercard is a great customer of ours. They do an incredible number of transactions per day, (laughs) as you can think, right? In millions. They want to do facial recognition at kiosks, or they're looking at different policies based on your buying behavior-- That, hey, Jeff doesn't buy $20,000 Rolexes every year. >> Maybe once every week, you know, (laughs) it just depends how your mood is. I was in the Emirates. >> Exactly, you were in Dubai (laughs). Then, you think about, his credit card is being used where? And, based on your behaviors, that's important. Now, think about, even for Mastercard, they have traditional RDBMS databases. They went to BI. They have high-performance computing clusters. Then, they developed the Hadoop cluster. So, what we did with them, we said, okay. All that is good. That data that has been generated for you through customers and through internal IT organizations, those things are all very important. But, at the same time, now you need to start going through this data and start analyzing this data for predictive analytics. So, they had 1.2 million policies, for example, that they had to crunch. Now, think about 1.2 million policies that they had to take decisions on. One of the policies could be, hey, does Jeff go to Dubai to buy a Rolex or not? Or, does Jeff do these other patterns, or is Armughan taking his card and having a field day with it? So, those are policies that they feed into machine learning frameworks, and then machine learning actually gives you patterns so they can now see what your behavior is. Then, based on that, eventually deep learning is what they move to next. 
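The shift in the Mastercard example, from hand-written policies to learned behavior patterns, can be sketched like this. All amounts, rules, and thresholds below are invented for illustration, not anything Mastercard actually runs:

```python
# Hand-written policy vs. a learned behavioral baseline (toy numbers).
history = [38.0, 42.5, 35.0, 40.0, 44.0]  # a cardholder's usual charges, in dollars

def policy_flag(amount, location, home="US"):
    # Policy era: fixed rules, e.g. a big ticket abroad is always reviewed.
    return amount > 5000 or location != home

def learned_flag(amount, history, k=3.0):
    # ML era: flag only when a charge sits far outside the learned pattern.
    mean = sum(history) / len(history)
    dev = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
    return abs(amount - mean) > k * dev

print(policy_flag(20000, "Dubai"))     # True: Rolex in Dubai trips the rule
print(learned_flag(41.0, history))     # False: fits this cardholder's pattern
print(learned_flag(20000.0, history))  # True: far outside it
```

The point of the learned version is fewer false positives: a charge is judged against that cardholder's own history rather than a one-size-fits-all rule.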
With deep learning, not only are you talking about your behavior patterns on the credit card, but your entire other life data starts to also come into that. Then you're actually talking about something where, before, that was just catching fraud, you can actually be a lot more predictive about it and cognitive about it. So, that's where we feel our Ready Solutions around machine learning and deep learning are really geared: taking HPC, democratizing it, advancing it, and then helping our customers move towards machine learning and deep learning, 'cause these buzzwords of AI are out there. If you're a financial institution and you're trying to figure out, who is that customer who's going to buy the next mortgage from you? Or, who are you going to lend to next? You want the machine and others to tell you this, not to take over your life, but to actually help you make these decisions so that your bottom line can go up along with your top line. Revenue and margins are important to every customer. >> It's amazing on the credit card example, because people get so pissed if there's a false positive. With the amount of effort that they've put into keeping you from making fraudulent transactions, if your credit card ever gets denied, people go bananas, right? The behavior just is amazing. But, I want to ask you-- We're comin' to the end of 2017, which is hard to believe. Things are rolling at Dell EMC. Michael Dell, ever since he took that thing private, you could see the sparkle in his eye. We got him on a CUBE interview a few years back. A year from now, 2018. What are we going to talk about? What are your top priorities for 2018? >> So, number one, Michael continues to talk about how our vision is advancing human progress through technology, right? That's our vision. We want to get there. 
But, at the same time we know that we have to drive IT transformation, we have to drive workforce transformation, we have to drive digital transformation, and we have to drive security transformation. All those things are important because lots of customers-- I mean, Jeff, do you know like 75% of the S&P 500 companies will not exist by 2027 because they're either not going to be able to make that shift from Blockbuster to Netflix, or Uber taxi-- It's happened to our friends at GE over the last little while. >> You can think about any customer-- That's what Michael did. Michael actually disrupted Dell with Dell technologies and the acquisition of EMC and Pivotal and VMWare. In a year from now, our strategy is really about edge to core to the cloud. We think the world is going to be all three, because the rise of 20 billion devices at the edge is going to require new computational frameworks. But, at the same time, people are going to bring them into the core, and then cloud will still exist. But, a lot of times-- Let me ask you, if you were driving an autonomous vehicle, do you want that data-- I'm an Edge guy. I know where you're going with this. It's not going to go, right? You want it at the edge, because data gravity is important. That's where we're going, so it's going to be huge. We feel data gravity is going to be big. We think core is going to be big. We think cloud's going to be big. And we really want to play in all three of those areas. >> That's when the speed of light is just too damn slow, in the car example. You don't want to send it to the data center and back. You don't want to send it to the data center, you want those decisions to be made at the edge. Your manufacturing floor needs to make the decision at the edge as well. You don't want a lot of that data going back to the cloud. All right, Armughan, thanks for bringing the energy to wrap up our day, and it's great to see you as always. Always good to see you guys, thank you. 
>> All right, this is Armughan, I'm Jeff Frick. You're watching theCUBE from Super Computing Summit 2017. Thanks for watching. We'll see you next time. (soft electronic music)

Published Date : Nov 16 2017



Bernie Spang, IBM & Wayne Glanfield, Red Bull Racing | Super Computing 2017


 

>> Announcer: From Denver, Colorado it's theCUBE. Covering Super Computing 17, brought to you by Intel. >> Welcome back everybody, Jeff Frick here with theCUBE. We're at Super Computing 2017 in Denver, Colorado talking about big, big iron, we're talking about space and new frontiers, black holes, mapping the brain. That's all fine and dandy, but we're going to have a little bit more fun this next segment. We're excited to have our next guest, Bernie Spang. He's a VP, Software Defined Infrastructure for IBM. And his buddy and guest, Wayne Glanfield, HPC Manager for Red Bull Racing. And for those of you that don't know, that's not the pickup trucks, it's not the guy jumping out of space, this is the Formula One racing team. The fastest, most advanced race cars in the world. So gentlemen, first off welcome. >> Thank you. >> Thank you, Jeff. >> So what is a race car company doing here at a supercomputing conference? >> Obviously we're very interested in high performance computing, so traditionally we've used a wind tunnel to do our external aerodynamics. HPC allows us to do many, many more design iterations of the car. So we can actually kind of get more iterations of the designs out there and make the car go faster, quicker. >> So that's great, you're not limited to how many times you can get it in the wind tunnel. The time you have in the wind tunnel. I'm sure there's all types of restrictions, cost and otherwise. >> There are lots of restrictions in both the wind tunnel and in HPC usage. So with HPC we're limited to 25 teraflops, which isn't many teraflops. >> 25 teraflops. >> Wayne: That's all. >> And Bernie, how did IBM get involved in Formula One racing? >> Well, I mean, our Spectrum computing offerings are about virtualizing clusters to optimize efficiency and the performance of the workloads. So our Spectrum LSF offering is used by manufacturers, designers to get ultimate efficiency out of the infrastructure. 
So with the Formula One restrictions on the teraflops, you want to get as much work through that system as efficiently as you can. And that's where Spectrum computing comes in. >> That's great. And so again, back to the simulations. So not only can you just do simulations 'cause you got the capacity, but then you can customize it, as you said I think before we turned on the cameras, for specific tracks, specific race conditions. All types of variables that you couldn't do very easily in a traditional wind tunnel. >> Yes, obviously it takes a lot longer to actually kind of develop, create, and rapid prototype the models and get them in the wind tunnel, and actually test them. And it's obviously much more expensive. So by having an HPC facility we can actually kind of do the design simulations in a virtual environment. >> So what's been kind of the aha from that? Is it just simply more, better, faster data? Is there some other kind of transformational thing that you observed as a team when you started doing this type of simulation versus just physical simulation in a wind tunnel? >> We started using HPC and computational fluid dynamics in anger about 12 years ago. Traditionally it started out as a complementary tool to the wind tunnel. But now, with the advances in HPC technology and software from IBM, it's actually beginning to overtake the wind tunnel. So it's actually kind of driving the way we design the car these days. >> That's great. So Bernie, working with super high-end performance, right, where everything is really optimized to get that car to go a little bit faster, just a little bit faster. >> Right. >> Pretty exciting space to work in, you know, there's a lot of other great applications, aerospace, genomics, this and that. But this is kind of a fun thing you can actually put your hands on. >> Oh, it's definitely fun, it's definitely fun being with the Red Bull Racing team, and with our clients when we brief them there. 
But we have commercial clients in automotive design, aeronautics, semiconductor manufacturing, where getting every bit of efficiency and performance out of their infrastructure is also important. Maybe they're not limited by rules, but they're limited by money, you know, and the ability to invest. And their ability to get more out of the environment gives them a competitive advantage as well. >> And really what's interesting about racing, and a lot of sports, is you get to witness the competition. We don't get to witness the competition between big companies day to day. You're not kind of watching it in those little micro instances. So the good thing is you get to learn a lot from such a focused, relatively small team as Red Bull Racing that you can apply to other things. So what are some of the learnings, as you've gotten to work with them, that you've taken back? >> Well, certainly they push the performance of the environment, and they push us, which is a great thing for us, and for our other clients who benefit. But one of the things I think that really stands out is the culture there of the entire team, no matter what their role and function. From the driver on down to everybody else, they are focused on winning races and winning championships. And that team view of getting every bit of performance out of everything everybody does all the time really opened our thinking to being broader than just the scheduling of the IT infrastructure: it's also about making the design team more productive and taking steps out of the process, and anything we can do there. Inclusive of the storage management and the data management over time. So it's not just the compute environment, it's also the virtualized storage environment. >> Right, and just massive amounts of storage. You said not only are you running and generating, I'm just going to use boatloads 'cause I'm not sure which version of the flops you're going to use. 
But also you got historical data, and you have result data, and you have models that need to be tweaked and continually upgraded so that you do better the following race. >> Exactly, I mean, we're generating petabytes of data a year, and I think one of the issues which is probably different from most industries is our workflows are incredibly complex. So we have up to 200 discrete job steps for each workflow to actually kind of produce a simulation. This is where the kind of IBM Spectrum product range actually helps us do that efficiently. If you imagine an aerospace engineer, or aerodynamics engineer, trying to manually manage 200 individual job steps, it just wouldn't happen very efficiently. So this is where Spectrum Scale actually kind of helps us do that. >> So you mentioned it briefly, Bernie, but just a little bit more specifically. What are some of the other industries that you guys are showcasing that are leveraging the power of Spectrum to basically win their races? >> Yeah, so we talked about the infrastructure and manufacturing, but those are industrial clients. But also in financial services. So think in terms of risk analytics and financial models being an important area. Also healthcare and life sciences. So molecular biology, finding new drugs. When you talk about the competition and who wins, right? Genomics research and advances there. Again, you need a system and an infrastructure that can chew through vast amounts of data. Both the performance and the compute, as well as the long-term management, with cost efficiency, of huge volumes of data. And then you need that virtualized cluster so that you can run multiple workloads many times with an infrastructure that's running at 80%, 90% efficiency. You can't afford to have silos of clusters. Right, we're seeing clients that have problems where they don't have this cluster virtualization software, have cluster creep, just like in the early days we had server sprawl, right? 
With a different app on a different server, and we needed to virtualize the servers. Well, now we're seeing cluster creep. Right, the Hadoop clusters and Spark clusters, and machine learning and deep learning clusters. As well as the traditional HPC workload. So what Spectrum computing does is virtualize that shared cluster environment so that you can run all these different kinds of workloads and drive up the efficiency of the environment. >> 'Cause efficiency is really the key, right. You got to have efficiency, that's really where cloud got its start, you know, kind of eating into the traditional space, right. There's a lot of inefficient stuff out there, so you got to use your resources efficiently, it's way too competitive. >> Correct, well, we're also seeing inefficiencies in the use of cloud, right. >> Jeff: Absolutely. >> So one of the features that we've added to Spectrum computing recently is automated dynamic cloud bursting. So we have clients who say that they've got their scientists or their design engineers spinning up clusters in the cloud to run workloads, and then leaving the servers running, and they're paying the bill. So we built in automation where we push the workload and the data over to the cloud, start the servers, run the workload. When the workload's done, spin down the servers and bring the data back to the user. And it's very cost effective that way. >> It's pretty funny, everyone often talks about the spin-up, but they forget to talk about the spin-down. >> Well, that's where the cost savings is, exactly. >> Alright, so final words, Wayne. You know, as you look forward, there's a whole lot of technology in Formula One racing. You know, kind of what's next, where do you guys go next in terms of trying to get another edge in Formula One racing for Red Bull specifically? >> I mean, I'm hoping they reduce the restrictions on HPC so we can actually start using CFD and the software IBM provides in a serious manner. 
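The burst cycle Bernie describes can be sketched schematically. The function below is a hypothetical stub, not an IBM Spectrum API; in the real product this sequence is configured in the scheduler rather than written by hand. The point is that the teardown is just as automatic as the spin-up, which is where the cost saving comes from:

```python
# Schematic of an automated cloud-burst cycle (hypothetical stubs,
# not real IBM Spectrum calls). Each step is recorded for inspection.
events = []

def burst(workload):
    events.append(f"push:{workload}")  # stage data out to the cloud
    events.append("spin_up")           # start cloud servers
    events.append(f"run:{workload}")   # execute the workload
    events.append("spin_down")         # stop billing the moment it finishes
    events.append("pull_results")      # bring the data back to the user

burst("cfd_job")
print(events)
```

Leaving out the last two steps is exactly the failure mode described in the interview: the job finishes, the servers keep running, and the bill keeps growing.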
So it can actually start pushing the technologies way beyond where they are at the moment. >> It's really interesting that that's a restriction, right? You think of things like plates and the size of the engine as the rule restrictions. But they're actually restricting your use of high performance computing based on data size. >> They're trying to save money, basically, but. >> It's crazy. So whether it's a rule or, you know, your shareholders, everybody's trying to save money. Alright, so Bernie, what are you looking at? 2017 is coming to an end, it's hard for me to say that. As you look forward to 2018, what are some of your priorities? >> Well, the really important thing, and we're hearing it at this conference, talking with the analysts and with the clients: the next generation of HPC and analytics is what we're calling machine learning, deep learning, cognitive AI, whatever you want to call it. That's just the new generation of this workload. And our Spectrum Conductor offering and our new Deep Learning Impact capability to automate the training of deep learning models, so that you can more quickly get to an accurate model, like in hours or minutes, not days or weeks. That's going to be a huge breakthrough. And based on our early client experience this year, I think 2018 is going to be a breakout year for putting that to work in commercial enterprise use cases. >> Alright, well, I look forward to the briefing a year from now at Super Computing 2018. >> Absolutely. >> Alright, Bernie, Wayne, thanks for taking a few minutes out of your day, appreciate it. >> You're welcome, thank you. >> Alright, he's Bernie, he's Wayne, I'm Jeff Frick, we're talking Formula One Red Bull Racing here at Super Computing 2017. Thanks for watching.

Published Date : Nov 16 2017



Ravi Pendekanti, Dell EMC | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing '17, brought to you by Intel. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at Super Computing 2017, Denver, Colorado, 12,000 people talking about big iron, big questions, big challenges. It's really an interesting take on computing, really out on the edge. The keynote was, literally, light years out in space, talking about predicting the future with quarks and all kinds of things, a little over my head for sure. But we're excited to kind of get back to the ground, and we have Ravi Pendekanti. He's the Senior Vice President of Product Management and Marketing, Server Platforms, Dell EMC. It's a mouthful. Ravi, great to see you. >> Great to see you too, Jeff, and thanks for having me here. >> Absolutely, so we were talking before we turned the cameras on. One of your big themes, which I love, is kind of democratizing this whole concept of high performance computing, so it's not just the academics answering the really, really, really big questions. >> You're absolutely right. I mean, think about it, Jeff: 20 years ago, even 10 years ago, when people talked about high performance computing, it was, what I call, in the back alleys of research and development. There were a few research scientists working on it, but we're at a time in our journey towards helping humanity in a bigger way. HPC has found its way into almost every single mainstream industry you can think of. Whether it is fraud detection, where you see MasterCard using it to ensure that they can see and detect fraud before the perpetrators come in and actually hack the system. Or if you get into life sciences, if you talk about genomics. I mean, this is what might be good for our next set of generations, where they can probably go out and tweak some of the things in a genome sequence so that we don't have the same issues that we have had in the past. Right. Right? 
So, likewise, you can pick any favorite industry. I mean we are coming up to the holiday season soon. I know a lot of our customers are looking at how do they come up with the right schema to ensure that they can stock the right product and ensure that it is available for everyone at the right time? 'Cause timing is important. I don't think any kid wants to go with no toy and have the product ship later. So bottom line is, yes, we are looking at ensuring that HPC reaches every single industry you can think of. So how do you guys parse HPC versus a really big virtualized cluster? I mean there's so many ways that compute and store have evolved, right? So now, with cloud and virtual cloud and private cloud and virtualization, you know, I can pull quite a bit of horsepower together to attack a problem. So how do you kind of cut the line between Navigate, yeah. big, big compute, versus true HPC? HPC. It's interesting you ask. I'm actually glad you asked because people think that just feeding it more CPU will do the trick; it doesn't. The simple fact is, if you look at the amount of data that is being created. I'll give you a simple example. I mean, we are talking to one of the airlines right now, and they're interested in capturing all the data that comes through their flights. And one of the things they're doing is capturing all the data from their engines. 'Cause end of the day, you want to make sure that your engines are pristine as they're flying. And every hour that an engine flies, I mean as an airplane flies, it creates about 20 terabytes of data. So, if you have a dual engine, which is what most flights are. In one hour they create about 40 terabytes of data. And there are supposedly about 38,000 flights taking off at any given time around the world. I mean, it's one huge data collection problem. Right? I mean, I'm told it's like a real Godzilla number, so I'll let you do the computation. 
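The computation Ravi leaves to the listener works out like this; a quick sketch using only the round numbers quoted in the interview (20 TB per engine-hour, twin engines, roughly 38,000 flights aloft):

```python
# Back-of-the-envelope check of the in-flight data volumes quoted above.
# All inputs are the interview's round numbers, not measured figures.

TB_PER_ENGINE_HOUR = 20     # "about 20 terabytes of data" per engine-hour
ENGINES_PER_PLANE = 2       # "dual engine, which is what most flights are"
FLIGHTS_ALOFT = 38_000      # "about 38,000 flights ... at any given time"

per_plane_tb_per_hour = TB_PER_ENGINE_HOUR * ENGINES_PER_PLANE
fleet_tb_per_hour = per_plane_tb_per_hour * FLIGHTS_ALOFT

print(per_plane_tb_per_hour)           # 40 TB per aircraft per hour
print(fleet_tb_per_hour)               # 1,520,000 TB per hour fleet-wide
print(fleet_tb_per_hour / 1_000_000)   # about 1.52 exabytes per hour
```

That fleet-wide figure, on the order of exabytes per hour, is the "Godzilla number" the interview gestures at.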
My point is if you really look at the data, data has no value, right? What really is important is getting information out of it. The CPU on the other side has gone to a time and a phase where it is hitting, what I call as, the threshold of Moore's law. Moore's law was all about performance doubling every two years. But today, that performance is not sufficient. Which is where auxiliary technologies need to be brought in. This is where the GPUs, the FPGAs. Right, right. Right. So when you think about these, that's where the HPC world takes off, is you're augmenting your CPUs and your processors with additional auxiliary technology such as the GPUs and FPGAs to ensure that you have more juice to go do this kind of analytics on the massive amounts of data that you and I and the rest of humanity are creating. It's funny that you talk about that. We were just at a Western Digital event a little while ago, talking about the next generation of drives and it was the same thing, where now it's this energy-assist method to change really the molecular way that it saves information to get more out of it. So that's kind of how you parse it. If you've got to juice the CPU, and kind of juice the traditional standard architecture, then you're moving into the realm of high performance computing. Absolutely, I mean this is why, Jeff, yesterday we launched the new PowerEdge C4140, right? The first of its kind in terms of the fact that it's got two Intel Xeon processors, but beyond that, it also can support four Nvidia GPUs. So now you're looking at a server that's got both the CPUs, to your earlier comment on processors, but is augmented by four of the GPUs, which gives immense capacity to do this kind of high performance computing. But as you said, it's not just compute, it's store, it's networking, it's services, and then hopefully you package something together in a solution so I don't have to build the whole thing from scratch, you guys are making moves, right? 
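Ravi's Moore's-law point can be made concrete by combining the interview's two rules of thumb: CPU performance doubling every two years, and data (as he notes later) doubling every year. Both are rough heuristics, not measurements, but the divergence is the whole argument for GPUs and FPGAs:

```python
# Rough heuristics from the interview: data doubles yearly, CPU
# performance doubles every two years. Project both over a decade.

years = 10
data_growth = 2 ** years         # data doubling every year
cpu_growth = 2 ** (years / 2)    # Moore's-law doubling every two years

print(data_growth)               # 1024x more data after 10 years
print(cpu_growth)                # 32x more CPU performance
print(data_growth / cpu_growth)  # 32x gap to close with accelerators
```

Under these assumptions, a decade out the data has grown roughly 32 times faster than the CPU alone can keep up with, which is the gap auxiliary accelerators are meant to fill.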
Oh, this is a perfect lead in, perfect lead in. I know, my colleague, Armughan, will be talking to you guys shortly. What his team does is it takes all the building blocks we provide, such as the servers, obviously looks at the networking, the storage elements, and then puts them together to create what are called solutions. So if you've got solutions, which enable our customers to go back in and easily deploy a machine-learning or a deep-learning solution. Where now our customers don't have to do what I call as the heavy lift. In trying to make sure that they understand how the different pieces integrate together. So the goal behind what we are doing at Dell EMC is to remove the guesswork so that our customers and partners can go out and spend their time deploying the solution. Whether it is for machine learning, deep learning or pick your favorite industry, we can also verticalize it. So that's the beauty of what we are doing at Dell EMC. So the other thing we were talking about before we turned the cameras on is, I call them the itys from my old Intel days, reliability, sustainability, serviceability, and you had a different phrase for it. >> Ravi: Oh yes, I know you're talking about the RAS. The RAS, right. Which is the reliability, availability, and serviceability. >> Jeff: But you've got a new twist on it. Oh we do. Adding something very important, and we were just at a security show early this week, CyberConnect, and security now cuts through everything. Because it's no longer a walled garden, 'cause there are no walls. There are no walls. It's really got to be baked in every layer of the solution. Absolutely right. The reason is, if you really look at security, it's not about, you know till a few years ago, people used to think it's all about protecting yourself from external forces, but today we know that 40% of the hacks happen because of the internal, you know, system processes that we don't have in place. 
Or we could have a person with an intent to break in for whatever reason, so the integrated security becomes part and parcel of what we do. This is where, as part of our 14G family, one of the things we said is we need to have integrated security built in. And along with that, we want to have the scalability, because no two workloads are the same and we all know that the amount of data that's being created today is twice what it was the last year for each of us. Forget about everything else we are collecting. So when you think about it, we need integrated security. We need to have the scalability feature set, also we want to make sure there is automation built in. These three main tenets that we talked about feed into what we call internally the mnemonic PARIS. And that's what I think, Jeff, to our earlier conversation, PARIS is all about. P is for best price performance. Anybody can choose to get the right performance or the best performance, but you don't want to shell out a ton of dollars. Likewise, you don't want to pay minimal dollars and try and get the best performance, that's not going to happen. I think there's a healthy balance between price and performance, that's important. Availability is important. Interoperability, as much as everybody thinks that they can act on their own, it's nearly impossible, or it's impossible that you can do it on your own. >> Jeff: These are big customers, they've got a lot of systems. You are. You need to have an ecosystem of partners and technologies that come together and then, end of the day, you have to go out and have availability and serviceability, or security, to your point, security is important. So PARIS is about price performance, availability, reliability, interoperability, and security. I like it. That's the way we design it. It's much sexier than that. We drop in, like an Eiffel Tower picture right now. There you go, you should. 
So Ravi, hard to believe we're at the end of 2017. If we get together a year from now at Super Computing 2018, what are some of your goals, what are some of your objectives for 2018? What are we going to be talking about a year from today? Oh, well looking into a crystal ball, as much as I can look into that, I think that-- >> Jeff: As much as you can disclose. And as much as we can disclose, a few things I think are going to happen. >> Jeff: Okay. Number one, I think you will see people talk about, to where we started this conversation, HPC becoming mainstream. We talked about it, but the adoption of high performance computing, in my personal belief, is still not at the level that it needs to be. So, if you go down the next 12 to 18 months, let's say, I do think the adoption rates will be much higher than where we are. And we talk about security now, because it's a very topical subject, but as much as we are trying to emphasize to our partners and customers that you've got to think about security from ground zero, we still see a number of customers who are not ready. You know, some of the analyses show that nearly 40% of CIOs are not ready, and don't truly understand, I should say, what it takes to have a secure system and a secure infrastructure. It's my humble belief that people will pay attention to it and move the needle on it. And we talked about, you know, four GPUs in our C4140, do anticipate that there will be a lot more auxiliary technology packed into it. Sure, sure. So that's essentially what I can say without spilling the beans too much. Okay, all right, super. Ravi, thanks for taking a couple of minutes out of your day, appreciate it. >> Ravi: Thank you. All right, he's Ravi, I'm Jeff Frick, you're watching theCUBE from Super Computing 2017 in Denver, Colorado. Thanks for watching. (techno music)

Published Date : Nov 16 2017


Susan Bobholz, Intel | Super Computing 2017


 

>> [Announcer] From Denver, Colorado, it's the Cube covering Super Computing 17, brought to you by Intel. (techno music) >> Welcome back, everybody, Jeff Frick with the Cube. We are at Super Computing 2017 here in Denver, Colorado. 12,000 people talking about big iron, heavy lifting, stars, future mapping the brain, all kinds of big applications. We're here, first time ever for the Cube, great to be here. We're excited for our next guest. She's Susan Bobholtz, she's the Fabric Alliance Manager for Omni-Path at Intel, Susan, welcome. >> Thank you. >> So what is Omni-Path, for those that don't know? >> Omni-Path is Intel's high performance fabric. What it does is it allows you to connect systems and make big huge supercomputers. >> Okay, so for the royal three-headed horsemen of compute, store, and networking, you're really into data center networking, connecting the compute and the store. >> Exactly, correct, yes. >> Okay. How long has this product been around? >> We started shipping 18 months ago. >> Oh, so pretty new? >> Very new. >> Great, okay and target market, I'm guessing has something to do with high performance computing. >> (laughing) Yes, our target market is high performance computing, but we're also seeing a lot of deployments in artificial intelligence now. >> Okay and so what's different? Why did Intel feel compelled that they needed to come out with a new connectivity solution? >> We were getting people telling us they were concerned that the existing solutions were becoming too expensive and weren't going to scale into the future, so they said Intel, can you do something about it, so we did. We made a couple of strategic acquisitions, we combined that with some of our own IP and came up with Omni-Path. Omni-Path is very much a proprietary protocol, but we use all the same software interfaces as InfiniBand, so your software applications just run. >> Okay, so to the machines it looks like InfiniBand? >> Yes. >> Just plug and play and run. 
>> Very much so, it's very similar. >> Okay, what are some of the attributes that make it so special? >> The reason it's really going very well is the price performance benefits, so we have equal to, or better, performance than InfiniBand today, but our switch technology is also 48 ports versus InfiniBand's 36 ports. So that means you can build denser clusters in less space with fewer cables, lower power, total cost of ownership goes down, and that's why people are buying it. >> Really fits into the data center strategy that Intel's executing very aggressively right now. >> Fits very nicely, absolutely, yes, very much so. >> Okay, awesome, so what are your thoughts here at the show? Any announcements, anything that you've seen that's of interest? >> Oh yeah, so, a couple things. We've really had good luck on the Top 500 list. 60% of the servers that are running 100 gigabit fabrics in the Top 500 list are connected via Omni-Path. >> What percentage again? >> 60% >> 60? >> Yes. >> You've only been at it for 18 months? >> Yes, exactly. >> Impressive. >> Very, very good. We've got systems in the Top 10 already. Some of the Top 10 systems in the world are using Omni-Path. >> Is it rip and replace, do you find, or these are new systems that people are putting in. >> Yeah, these are new systems. Usually when somebody's got a system they like and run, they don't want to touch it. >> Right. >> These are people saying I need a new system. I need more power, I need more oompf. They have the money, the budget, they want to put in something new, and that's when they look to Omni-Path. 
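A sketch of why switch radix matters for cluster density. The r²/2 host count for a non-blocking two-level fat tree is a textbook approximation (half of each edge switch's ports face hosts, half face the spine), not an Intel-published figure:

```python
# Hosts reachable in a non-blocking two-level fat tree built from
# radix-r switches: r/2 downlinks per edge switch, r/2 uplinks,
# giving roughly r^2 / 2 hosts (textbook approximation).

def two_tier_hosts(radix: int) -> int:
    return radix * radix // 2

omni_path_hosts = two_tier_hosts(48)    # 48-port Omni-Path edge switch
infiniband_hosts = two_tier_hosts(36)   # 36-port InfiniBand edge switch

print(omni_path_hosts)                  # 1152 hosts
print(infiniband_hosts)                 # 648 hosts
print(round(omni_path_hosts / infiniband_hosts, 2))  # 1.78x denser
```

The same two-tier switching budget that tops out around 648 hosts with 36-port switches reaches 1152 with 48 ports, which is the "denser clusters, fewer cables" argument in concrete terms.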
What's next for us is we are announcing a new, higher-density switch technology, so for your director-class switches, which are the really big ones, rather than having 768 ports you can now go to 1152, and that means, again, denser topologies, lower power, less cabling, it reduces your total cost of ownership. >> Right, I think you just answered my question, but I'm going to ask you anyway. >> (laughs) Okay. >> We talked a little bit before we turned the camera on about AI and some of the really unique challenges of AI, and that was part of the motivation behind this product. So what are some of the special attributes of AI that really require this type of connectivity? >> It's very much what you see even with high performance computing. You need low latency, you need high bandwidth. It's the same technologies, and in fact, in a lot of cases, it's the same systems, or sometimes they can be running a software load that is HPC focused, and sometimes they're running a software load that is artificial intelligence focused. But they have the same exact needs. >> Okay. >> Do it fast, do it quick. >> Right, right, that's why I said you already answered the question. Higher density, more computing, more storing, faster. >> Exactly, right, exactly. >> And price performance. All right, good, so if we come back a year from now for Super Computing 2018, which I guess is in Dallas in November, they just announced. What are we going to be talking about, what are some of your priorities and the team's priorities as you look ahead to 2018? >> Oh we're continuing to advance the Omni-Path technology with software and additional capabilities moving forward, so we're hoping to have some really cool announcements next year. >> All right, well, we'll look forward to it, and we'll see you in Dallas in a year. >> Thanks, Cube. >> All right, she's Susan, and I'm Jeff. You're watching the Cube from Super Computing 2017. Thanks for watching, see ya next time. 
(techno music)

Published Date : Nov 15 2017


Jim Wu, Falcon Computing | Super Computing 2017


 

>> Announcer: From Denver, Colorado, it's theCUBE covering Super Computing '17. Brought to you by Intel. (upbeat techno music) Hey welcome back, everybody. Jeff Frick here with theCUBE. We're at Super Computing 2017 in Denver, Colorado. It's our first trip to the show, 12,000 people, a lot of exciting stuff going on, big iron, big lifting, heavy duty compute. We're excited to have our next guest on. He's Jim Wu, he's the Director of Customer Experience for Falcon Computing. Jim, welcome. Thank you. Good to see you. So, what does Falcon do for people that aren't familiar with the company? Yeah, Falcon Computing is an early-stage startup, focused on FPGA-based acceleration development. Our vision is to allow software engineers to develop FPGA-based accelerators without FPGA expertise. Right, you just said you closed your B round. So, congratulations on that. >> Jim: Thank you. Yeah, very exciting. So, it's a pretty interesting concept. To really bring the capability to traditional software engineers to program for hardware. That's kind of a new concept. What do you think? 'Cause it brings the power of a hardware system, but the flexibility of a software system. Yeah, so today, to develop FPGA accelerators is very challenging. So today, for acceleration, people use very low-level languages, like Verilog and VHDL, to develop FPGA accelerators. Which is very time consuming, very labor-intensive. So, our goal is to liberate them with a C/C++-based design flow, to give them an environment that they are familiar with in C/C++. So now not only can they improve their productivity, we also do a lot of automatic optimization under the hood, to give them the best accelerator results. Right, so that really opens up the ecosystem well beyond the relatively small ecosystem that knows how to program that hardware. Definitely, that's what we are hoping to see. We want to put the tool in the hands of all software programmers. 
They can use it in the Cloud. They can use it on premises. Okay. So what's the name of your product? And how does it fit within the stack? I know we've got the Intel microprocessor under the covers, we've got the accelerator, we've got the cards. There's a lot of pieces to the puzzle. >> Jim: Yeah. So where does Falcon fit? So our main product is a compiler, called the Merlin Compiler. >> Jeff: Okay. It's a pure C and C++ flow that enables software programmers to design FPGA-based accelerators without any knowledge of FPGAs. And it's highly integrated with Intel development tools. So users don't even need to learn anything about the Intel development environment. They can just use their C++ development environment. Then in the end, we give them the host code as well as FPGA binaries, so they can run on the FPGA to see accelerated applications. Okay, and how long has Merlin been GA? Actually, we'll be GA early next year. Early next year. So finishing, doing the final polish here and there. Yes. So in this quarter, we are investing heavily in a lot of ease-of-use features. Okay. We have most of the features we want to be in the tool, but we're still lacking a bit in terms of ease-of-use. >> Jeff: Okay. So we are enhancing our reporting capabilities, we are enhancing our profiling capabilities. We want it to truly be like a traditional C++-based development environment for software application engineers. Okay, that's fine. You want to get it done, right, before you ship it out the door? So you have some Alpha programs going on? Some Beta programs with some really early adopters? Yeah, exactly. So today we provide a 14 day free trial to any customers who are interested. We have it, you can set it up in your enterprise or you can set it up on the Cloud. Okay. We deploy it wherever you want your work done. Okay. And so you'll support all the cloud service providers, the big public clouds, all the private clouds. All the traditional data servers as well. Right. 
So, we are live already on AWS as well as Alibaba Cloud. So we are working on bringing the tool to other public cloud providers as well. Right. So what is some of the early feedback you're getting from some of the people you're talking to? As to where this is going to make the biggest impact. What type of application space has just been waiting for this solution? So our Merlin Compiler is a productivity tool, so any space where FPGAs traditionally play well, that's where we want to be. So like encryption, decryption, video codecs, compression, decompression. Those kinds of applications are very suitable for FPGAs. Now traditionally they can only be developed by hardware engineers. Now with the Merlin Compiler, all of these software engineers can use the Merlin Compiler to do all of these applications. Okay. And when is the GA getting out, I know it's coming? When is it coming? Approximately? So probably first quarter of 2018. Okay, that's just right around the corner. Exactly. Alright, super. And again, a little bit about the company, how many people are you? A little bit of the background on the founders. So we have about 30 employees at the moment, so we have offices in Santa Clara, which is our headquarters. We also have an office in Los Angeles. As well as Beijing, China. Okay, great. Alright well Jim, thanks for taking a few minutes. We'll be looking for GA in a couple of months and wish you nothing but the best success. Okay, thank you so much, Jeff. Alright, he's Jim Wu, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. (upbeat techno music)

Published Date : Nov 14 2017


John Lockwood, Algo Logic Systems | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing '17, brought to you by Intel. (electronic music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Denver, Colorado at Super Computing 2017. 12,000 people, our first trip to the show. We've been trying to come for awhile, it's pretty amazing. A lot of heavy science in terms of the keynotes. All about space and looking into brain mapping and it's heavy lifting, academics all around. We're excited to have our next guest, who's an expert, all about speed and that's John Lockwood. He's the CEO of Algo-Logic. First off, John, great to see you. >> Yeah, thanks Jeff, glad to be here. >> Absolutely, so for folks that aren't familiar with the company, give them kind of the quick overview of Algo. >> Yes, Algo-Logic puts algorithms into logic. So our main focus is taking things are typically done in software and putting them into FPGAs and by doing that we make them go faster. >> So it's a pretty interesting phenomenon. We've heard a lot from some of the Intel execs about kind of the software overlay that now, kind of I guess, a broader ecosystem of programmers into hardware, but then still leveraging the speed that you get in hardware. So it's a pretty interesting combination to get those latencies down, down, down. >> Right, right, I mean Intel certainly made a shift to go on into heterogeneous compute. And so in this heterogeneous world, we've got software running on Xeons, Xeon Phis. And we've also got the need though, to use new compute in more than just the traditional microprocessor. And so with the acquisition of Altera, is that now Intel customers can use FPGAs in order to get the benefit in speed. And so Algo-Logic, we typically provide applications with software APIs, so it makes it really easy for end customers to deploy FPGAs into their data center, into their hosts, into their network and start using them right away. 
>> And you said one of your big customer sets is financial services and trading desk. So low latency there is critical as millions and millions and millions if not billions of dollars. >> Right, so Algo-Logic we have a whole product line of high-frequency trading systems. And so our Tick-To-Trade system is unique in the fact that it has a sub-microsecond trading latency and this means going from market data that comes in, for example on CME for options and futures trading, to time that we can place a fix order back out to the market. All of that happens in an FPGA. That happens in under a microsecond. So under a millionth of second and that beats every other software system that's being used. >> Right, which is a game change, right? Wins or losses can be made on those time frames. >> It's become a must have is that if you're trading on Wall Street or trading in Chicago and you're not trading with an FPGA, you're trading at a severe disadvantage. And so we make a product that enables all the trading firms to be playing on a fair, level playing field against the big firms. >> Right, so it's interesting because the adoption of Flash and some of these other kind of speed accelerator technologies that have been happening over the last several years, people are kind of getting accustomed to the fact that speed is better, but often it was kind of put aside in this kind of high-value applications like financial services and not really proliferating to a broader use of applications. I wonder if you're seeing that kind of change a little bit, where people are seeing the benefits of real time and speed beyond kind of the classic high-value applications? >> Well, I think the big change that's happened is that it's become machine-to-machine now. And so humans, for example in trading, are not part of the loop anymore and so it's not a matter of am I faster than another person? It's am I faster than the other person's machine? 
And so this notion of having compute that goes fast has become suddenly dramatically much more important because everything now is going to machine versus machine. And so if you're an ad tech advertiser, is that how quickly you can do an auction to place an ad matters and if you can get a higher value ad placed because you're able to do a couple rounds of an auction, that's worth a lot. And so, again, with Algo-Logic we make things go faster and that time benefit means, that all thing else being the same, you're the first to come to a decision. >> Right, right and then of course the machine-to-machine obviously brings up the hottest topic that everybody loves to talk about is autonomous vehicles and networked autonomous vehicles and just the whole IOT space with the compute moving out to the edge. So this machine-to-machine systems are only growing in importance and really percentage of the total compute consumption by far. >> That's right, yeah. So last year at Super Computing, we demonstrated a drone, bringing in realtime data from a drone. So doing realtime data collection and doing processing with our Key Value Store. So this year, we have a machine learning application, a Markov Decision Process where we show that we can scale-out a machine learning process and teach cars how to drive in a few minutes. >> Teach them how to drive in a few minutes? >> Right. >> So that's their learning. That's not somebody programming the commands. They're actually going through a process of learning? >> Right, well so the Key Value Store is just a part of this. We're just the part of the system that makes the scale-outs that runs well in a data center. And so we're still running the Markov Decision Process in simulations in software. 
So we have a couple of Xeon servers that we brought with us to do the machine learning, and a data center would scale out to dozens of racks. But even with a few machines, for simple highway driving, what we can show is that we start off with the system untrained, and in the Markov Decision Process we reward the final state of not having accidents. At first, the cars drive and they're bouncing into each other. It's like bumper cars, but within a few minutes, and after about 15 million simulations, which can be run that quickly, the cars start driving better than humans. And I think that's a really phenomenal step: you're able to get to a point where you can train a system how to drive and give it 15 man-years of experience in a matter of minutes on scale-out compute systems. >> Right, 'cause then you can put in new variables, right? You can change that training and modify it over time as conditions change, throw in snow or throw in urban environments and other things. >> Absolutely, right. And we're not pretending that the machine learning application we're showing here is an end-all solution. But as you bring in other factors, like pedestrians, deer, other cars running different algorithms, or crazy drivers, you want to expose the system to those conditions as well. One of the questions that came up to us was, "What machine learning application are you running?" We're showing all 25 cars running one machine-learned application that's incrementally getting better as they learn to drive, but we could also have every car running a different machine learning application and see how different AIs interact with each other. And I think that's what you're going to see on the highway: as we have more self-driving cars running different algorithms, we have to make sure they all play nice with each other.
>> Right, but it's really a different way of looking at the world, right? Using machine learning, machine-to-machine, versus a single person or a team of people writing a piece of software to instruct something to do something, and then you've got to go back and change it. This is a much more dynamic, realtime environment that we're entering into with IoT. >> Right. I mean, the machine-to-human of last year and years before was, "How do you make interactions between the computers better than humans?" But now it's about machine-to-machine, and it's, "How do you make machines interact better with other machines?" And that's where it gets really competitive. I mean, you can imagine with drones, for example, for applications where you have drones against drones, the drones that are faster are going to be the ones that win. >> Right, right. It's funny, we were just here last week at the commercial drone show, and it's pretty interesting how they're designing the drones now into a three-part platform. So there's the platform that flies around. There's the payload, which can be different sensors or whatever it's carrying; it could be herbicide if it's an agricultural drone. And then they've opened up the SDKs, both on the control side as well as the mobile side, in terms of the controls. So it's a very interesting way that all these things now, via software, could tie together. But as you say, using machine learning you can train them to work together even better, quicker, faster. >> Right. I mean, having a swarm or a cluster of these machines that work with each other, you could really do interesting things. >> Yeah, that's the whole next thing, right? Instead of one-to-one it's many-to-many. >> And then when swarms interact with other swarms, then I think that's really fascinating. >> So alright, is that what we're going to be talking about? So if we connect in 2018, what are we going to be talking about? The year's almost over.
What are your top priorities for next year? >> Our top priorities are, let's see. We think the FPGA is playing this important part. The GPU, for example, became a very big part of the supercomputing systems here at this conference. But the other side of heterogeneous computing is the FPGA, and the FPGA has seen very minimal adoption so far. But the FPGA has the capability, especially when it comes to network I/O transactions and speeding up realtime interactions, to change the world again for HPC. So I'm expecting that in a couple of years at this HPC conference, what we'll be talking about for the biggest top-500 supercomputers is how big their FPGAs are, not how big their GPUs are. >> All right, time will tell. Well, John, thanks for taking a few minutes out of your day and stopping by. >> Okay, thanks Jeff, great to talk to you. >> All right, he's John Lockwood, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. >> Bye. (electronic music)
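The Markov Decision Process training John Lockwood describes, rewarding only the accident-free final state and running millions of short simulated episodes, can be sketched as a toy Q-learning loop. Everything here (three lanes, the reward values, the one-step episodes) is an illustrative assumption, not Algo-Logic's actual system:

```python
import random

# Toy Markov Decision Process in the spirit of the demo described above:
# cars learn lane changes by being rewarded only for accident-free outcomes.
LANES = 3
ACTIONS = [-1, 0, 1]  # steer left, keep lane, steer right

def step(car, obstacle, action):
    """Apply a steering action; collide if we end in the obstacle's lane."""
    new_lane = min(max(car + action, 0), LANES - 1)
    reward = -10.0 if new_lane == obstacle else 1.0
    return new_lane, reward

def train(episodes=5000, alpha=0.5, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table over (car_lane, obstacle_lane) states.
    q = {(c, o): [0.0] * len(ACTIONS) for c in range(LANES) for o in range(LANES)}
    for _ in range(episodes):
        state = (rng.randrange(LANES), rng.randrange(LANES))
        if rng.random() < epsilon:  # explore
            a = rng.randrange(len(ACTIONS))
        else:                       # exploit the best action seen so far
            a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        _, reward = step(state[0], state[1], ACTIONS[a])
        # One-step episodes, so the update needs no bootstrap term.
        q[state][a] += alpha * (reward - q[state][a])
    return q

def policy(q, car, obstacle):
    """Greedy action after training."""
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[(car, obstacle)][i])]
```

Run `train()` once and the greedy policy steers away from the obstacle in every lane configuration, echoing the "bumper cars at first, better than humans after millions of episodes" arc described above.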

Published Date : Nov 14 2017



Stephane Monoboisset, Accelize | Super Computing 2017


 

>> Voiceover: From Denver, Colorado, it's theCUBE covering Super Computing '17, brought to you by Intel. >> Hey, welcome back, everybody. Jeff Frick, here, with theCUBE. We're in Denver, Colorado at Super Computing 2017. It's all things heavy lifting, big iron, 12,000 people. I think it's the 20th anniversary of the conference. A lot of academics, really talking about big iron, doin' big computing. And we're excited to have our next guest, talking about speed, he's Stephane Monoboisset. Did I get that right? >> That's right. >> He's a director of marketing and partnerships for Accelize. Welcome. >> Thank you. >> So, for folks that aren't familiar with Accelize, give them kind of the quick overview. >> Okay, so Accelize is a French startup. Actually, a spinoff of a company called PLDA that has been around for 20 years doing PCI Express IP. A few years ago, we started an initiative to basically bring FPGA acceleration to the cloud industry. So what we say is, we basically enable FPGA acceleration as a service. >> So did it not exist in cloud service providers before that, or what was kind of the opportunity that you saw there? >> So, FPGAs have been used in data centers in many different ways. They're starting to make their way into an as-a-service type of approach. But one of the buzzwords that the industry's using is FPGA as a service, and the industry usually refers to it as the way to bring FPGAs to the end users. But when you think about it, end users don't really want FPGA as a service. Most of the cloud end users are not FPGA experts, so they couldn't care less whether it's an FPGA or something else. What they really want is the acceleration benefits. Hence the term, FPGA acceleration as a service.
So, in order to do that, instead of just going and offering an FPGA platform and giving them the tools, even if they are easy to use to develop the FPGAs, our objective is to provide a marketplace of accelerators that they can use as a service, without even thinking that there's an FPGA in the background. >> So that's a really interesting concept, because that also leverages an ecosystem. And one thing we know that's important: if you have any kind of a platform play, you need an ecosystem that brings a much broader breadth of applications and solution suites, and there's a lot of talk about solutions. So that was pretty insightful, 'cause now you open it up to this much broader set of applications. >> Well, absolutely. The ecosystem is the essential part of the offering, because obviously, as a company, we cannot be expert in every single domain. And to a certain extent, even FPGA designers, and there are maybe 10,000 to 15,000 FPGA designers in the world, are not really expert in the end application. So one of the challenges that we're trying to address is how do we make application developers, the people who are already playing in the cloud, the ISVs, for example, who have the expertise of what the end user wants, able to develop something that is efficient for the end user on FPGAs. And this is why we've created a tool called QuickPlay, which basically enables what we call the accelerator function developers, the guys who have the application expertise, to leverage an ecosystem of IP providers in the FPGA space that have built efficient building blocks, like encryption, compression, video transcoding. >> Right. >> These sorts of things. So what you have is an ecosystem of cloud service providers, an ecosystem of IP providers, and a growing ecosystem of accelerator developers that develop all these accelerators that are sold as a service. >> And that really opens up the number of people that are qualified to play in the space.
'Cause you're kind of hiding the complexity with the hardcore hardware engineers and really making it more of a traditional software application space. Is that right? >> Yeah, you're absolutely right. And we're doing that on the technical front, but we're also doing that on the business model front. Because one thing with FPGAs is that the FPGA world has relied heavily over the years on the IP industry, and the IP industry for FPGAs, and it's the same for ASICs, has also been relying on a business model which is based on very high up-front costs. So let me give you an example. Let's say I want to develop an accelerator, right, for database. What I need to do is get the stream of data coming in. It's most likely encrypted, so I need to decrypt this data. Then I want to run some search algorithm on it to extract certain functions. I'm going to do some processing on it, and maybe the last thing I want to do is compress, because I want to store the result of that data. If I'm doing that with a traditional IP business model, what I need to do is basically go and talk to every single one of those IP providers and ask them to sell me the IP. In the traditional IP business model, I'm looking at somewhere between 200,000 and 500,000 in up-front cost. And I want to sell this accelerator for maybe a couple of dollars on one of the marketplaces. There's something that doesn't play out. So what we've also done is introduce a pay-per-use business model that allows us to track those IPs that are being used by the accelerators, so we can propagate the as-a-service business model throughout the industry, the supply chain. >> Which is huge, right? 'Cause as much as cloud is about flexibility and extensibility, it's about the business model as well. About paying for what you use when you use it, turning it on, turning it off. So that's a pretty critical success factor. >> Absolutely. I mean, you can imagine that there's, I don't know, millions of users in the cloud.
There's maybe hundreds of thousands of different types of ways they're processing their data. So we also need a very agile ecosystem that can develop very quickly, and we also need them to do it in a way that doesn't cost too much money, right? Think about the App Store when it was launched, right? >> Right. >> When Apple launched the iPhone back about 10 years ago, they didn't have much application, and I don't think they quite knew, exactly, how it was going to be used. But what they did, which completely changed the industry, is they opened up the SDK, which they sold for a very small amount of money, and enabled a huge community to come up with a lot of applications. And now you go there and you can find an application that really meets your need. That's the similar concept that we're trying to develop here. >> Right. So how's the uptake been? I mean, where are you, kind of, in the life cycle of this project? 'Cause it's a relatively new spinout of the larger company? >> Yes, so it's relatively new. We did the spinout because we really want to give that product its own life. >> Right, right. >> Right? But we are still at the beginning. So we started developing partnerships with cloud service providers. The two that we've announced are Amazon Web Services and OVH, the cloud service provider in France. And we have recruited, I think, about a dozen IP partners, and now we're also working with accelerator developers, accelerator function developers. >> Okay. So it's a work in progress. >> And our main goal right now is really to evangelize, and to show them how much money they can make and how they can serve this market of FPGA acceleration as a service. >> The cloud providers, or the application providers? Who do you really have to convince the most? >> So the ones we have to convince today are really the application developers. >> Okay, okay. >> Because without content, your marketplace doesn't mean much.
So this is the main thing we're focusing on right now. >> Okay, great. So, 2017's coming to an end, which is hard to believe. As you look forward to 2018, of those things you just outlined, what are some of the top priorities? >> So, top priorities will be to strengthen our relationship with the key cloud service providers we work with. We have a couple of other discussions ongoing to try to offer a platform on more cloud service providers. We also want to strengthen our relationship with Intel. And we'll continue the evangelization to really onboard all the IP providers and the accelerator developers, so that the marketplace becomes filled with valuable accelerators that people can use. That's going to be a long process, but we are focusing right now on key application spaces that we know people can leverage. >> Exciting times. >> Oh yeah, it is. >> You know, it's 10 years since the App Store launched, I think, so when I look at acceleration as a service in cloud service providers, this sounds like a terrific opportunity. >> It is, it is a huge opportunity. Everybody's talking about it. We just need to materialize it now. >> All right, well, congratulations, and thanks for taking a couple minutes out of your day. >> Oh, thanks for your time. >> All right, he's Stephane, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. (upbeat music)
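The accelerator Stephane describes, composed from IP building blocks (decrypt, then search, then compress) with each block metered so its provider can be paid per use, can be sketched as follows. The block behaviors and the billing counter are illustrative assumptions; QuickPlay composes FPGA IP cores, not Python functions:

```python
from collections import Counter

# Toy model of composing an accelerator from metered IP building blocks,
# in the spirit of the pay-per-use model described above.
usage_meter = Counter()  # per-IP invocation counts, the basis for pay-per-use

def metered(name):
    """Wrap an 'IP block' so every invocation is billed to its provider."""
    def wrap(fn):
        def inner(data):
            usage_meter[name] += 1
            return fn(data)
        return inner
    return wrap

@metered("decrypt")
def decrypt(data):  # stand-in for a decryption core (XOR is not encryption!)
    return bytes(b ^ 0x5A for b in data)

@metered("search")
def search(data):  # stand-in for a pattern-search core: find zero bytes
    return [i for i in range(len(data)) if data[i] == 0x00]

@metered("compress")
def compress(data):  # stand-in: distinct-byte count as a fake "compressed size"
    return len(set(data))

def accelerator(data):
    """The composed database accelerator: decrypt -> search -> compress."""
    clear = decrypt(data)
    return search(clear), compress(clear)

ciphertext = bytes(b ^ 0x5A for b in b"\x00abc\x00")
hits, size = accelerator(ciphertext)  # hits -> [0, 4]
```

The meter is what lets a two-dollar marketplace accelerator still compensate each IP provider per use, instead of demanding the six-figure up-front license described in the interview.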

Published Date : Nov 14 2017



Karsten Ronner, Swarm64 | Super Computing 2017


 

>> Announcer: From Denver, Colorado, it's theCUBE, covering SuperComputing '17, brought to you by Intel. >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at the SuperComputing 2017 conference. I think there's 12,000 people. Our first time being here is pretty amazing. A lot of academics, a lot of conversations about space and genomes and, you know, heavy-lifting computing stuff. It's fun to be here, and we're really excited. Our next guest, Karsten Ronner. He's the CEO of Swarm64. So Karsten, great to see you. >> Yeah, thank you very much for this opportunity. >> Absolutely. So for people that aren't familiar with Swarm64, give us kind of the quick high-level. >> Yeah. Well, in a nutshell, Swarm64 is accelerating relational databases. We allow them to ingest data much faster, 50 times faster than a standard relational database, and we can then also query that data 10 to 20 times faster than a relational database. And that is very important for many new applications in IoT and in netbanking and in finance, and so on. >> So you're in a good space. So beyond just general better performance, faster, faster, faster, you know, we're seeing all these movements now in real-time analytics and real-time applications, which is only going to get crazier with the Internet of Things. So how do you do this? Where do you do this? What are some of the examples you could share with us? >> Yeah, so our solution is a combination of a software wrapper that attaches our solution to existing databases, and inside, an FPGA from Intel, the Arria 10. We are combining both such that they plug into standard interfaces of existing databases, like the Foreign Data Wrappers in PostgreSQL, the storage engine in MySQL and MariaDB, and so on. And with that mechanism, we ensure that the application doesn't see us.
For the application, there's just a fast database; we're invisible, and the functionality of the database remains what it was. That's the net of what we're doing. >> That's so important, because, as we talked about a little bit offline, you said you had a banking customer that said they have every database that's ever been created. They've been buying them all along, so they've got embedded systems; you can't just rip and replace. You have to work with existing infrastructure. At the same time, they want to go faster. >> Yeah, absolutely right. And there's a huge code base which has been verified and debugged, and in banking it's also about compliance, so you can't just rip out your old code base and do something new, because again you would have to go through compliance. Therefore, customers really, really want their existing databases faster. >> Right. Now the other interesting part, and we've talked to some of the other Intel execs, is this hybrid combination of hardware and software in the FPGA, and you're really opening up an ecosystem for people to build more software-based solutions that leverage that combination of hardware and software power. Where do you see that evolving? How's that going to help your company? >> Yeah. We are a little bit unique in that we are hiding the FPGA from the user; we're not exposing it. Many applications actually expose it to the user, but apart from that, we are benefiting a lot from what Intel is doing. Intel is providing the entire environment, including virtualization, all those things that help us get into cloud service providers or into proprietary virtualized environments and things like that. So it is really a very close cooperation with Intel that helps us and enables us to do what we're doing. >> Okay. And I'm curious, because you spend a lot of time with customers, you said a lot of legacy customers.
So as they see the challenges of this new real-time environment, what are some of their concerns, and what are some of the things they're excited they can do now with real-time, versus batch and data lakes? And I think it's always funny, right? We used to make decisions based on stuff that happened in the past. And now we're really querying to take action on stuff that's happening now; it's a fundamentally different way to address a problem. >> Yeah, absolutely. And a very, very key element of our solution is that we can not only ingest these very, very large amounts of data, which other solutions can do too, massively parallel solutions, streaming solutions, you know them all. The difference is that we can make that data available within less than 10 microseconds. >> Jeff: 10 microseconds? >> So a dataset arrives, and within less than 10 microseconds that dataset is part of the next query, and that is a game changer. That allows you to do closed-loop processing of data in machine-to-machine environments, for autonomous applications, and all those solutions where you just can't wait. If your car is driving down the street, you better know what has happened, right? And you can react to it. As an example, it could be a robot in a plant or things like that, where you really want to react immediately. >> I'm curious as to the kind of value unlocking that provides to those old applications that were working with what they think is an old database. Now, you said, you know, you're accelerating it. To the application, it looks just the same as it looked before. How does that change the performance of those applications? I would imagine there's a whole other layer of value unlocking in those entrenched applications with this vast data. >> Yeah.
That is actually true. On a business level, the applications enable customers to do things they were not capable of doing before. Look, for example, at finance. If you can analyze the market data much quicker, if you can analyze past trades much quicker, then obviously you're generating value for the firm, because you can react to market trends more accurately, you can mirror them in a tighter fashion, and if you can do that, you can reduce the margin of error with which you're estimating what's happening. And all of that is money. It's really pure money in the bank account of the customer, so to speak. >> Right. And the other big trend we talked about, besides faster, is, you know, sampling versus not sampling. In the old days, we sampled old data and made decisions. Now we don't want to sample; we want to make decisions on all of the data, so again that's opening up another level of application performance, because it's all the data, not a sample. >> For sure. Because before, you were aggregating. When you aggregate, you reduce the amount of information available. Now, of course, when you have the full set of information available, your decision-making is just so much smarter. And that's what we're enabling. >> And it's funny, because finance, you mentioned a couple of times, has been doing that forever, right? The value of a few units of time, however small, is tremendous. But now we're seeing it in other industries as well that realize the value of real-time, aggregated, streaming data versus a sampling of the old. It really opens up new types of opportunities. >> Absolutely, yes. Finance, as I mentioned, is an example, but then also IoT, machine-to-machine communication, everything which is real-time: logging, data logging, security and network monitoring. If you want to really understand what's flowing through your network: is there anything malicious, is there any actor on my network that should not be there?
And you want to react so quickly that you can prevent that bad actor from doing anything to your data. This is where we come in. >> Right. And security's so big, right? It's in everything, especially with IoT and machine learning. >> Absolutely. >> All right, Karsten, I'm going to put you on the spot. It's November 2017, hard to believe. As you look forward to 2018, what are some of your priorities? If we're standing here next year at SuperComputing 2018, what are we going to be talking about? >> Okay, what we're going to talk about, really: right now we're accelerating single-server solutions, and we are working very, very hard on massively parallel systems while retaining the real-time components. So we will not only accelerate a single server; by allowing horizontal scaling, we will bring a completely new level of analytics performance to customers. So that's what I'm happy to talk to you about next year. >> All right, we'll see you next year, I think it's in Texas. >> Wonderful, yeah, great. >> So thanks for stopping by. >> Thank you. >> He's Karsten, I'm Jeff. You're watching theCUBE, from SuperComputing 2017. Thanks for watching.
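The "invisible acceleration" idea Karsten describes, where the application keeps talking to a standard database interface while the work is rerouted underneath, can be modeled with a small sketch. SQLite is purely a stand-in here; Swarm64's real hook points are PostgreSQL Foreign Data Wrappers and the MySQL/MariaDB storage-engine API, and the "offload" below is just a counter, not an FPGA dispatch:

```python
import sqlite3

# Conceptual model of "the application doesn't see us": the app keeps
# calling a standard database interface while a wrapper decides, per
# statement, whether to hand work to an accelerator.
class AcceleratedConnection:
    def __init__(self, dsn=":memory:"):
        self._conn = sqlite3.connect(dsn)
        self.offloaded = 0  # statements that took the hypothetical fast path

    def execute(self, sql, params=()):
        if self._accelerable(sql):
            self.offloaded += 1  # a real system would dispatch to the FPGA here
        return self._conn.execute(sql, params)  # app-visible behavior unchanged

    @staticmethod
    def _accelerable(sql):
        # Toy heuristic: offload read queries, leave DDL/DML to the host engine.
        return sql.lstrip().upper().startswith("SELECT")

conn = AcceleratedConnection()
conn.execute("CREATE TABLE ticks (ts INTEGER, px REAL)")
conn.execute("INSERT INTO ticks VALUES (1, 100.5)")
rows = conn.execute("SELECT COUNT(*) FROM ticks").fetchall()  # -> [(1,)]
```

The point of the shape: `execute` keeps its standard signature, so existing application code, and the compliance-reviewed SQL behind it, runs unmodified while read queries quietly take the fast path.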

Published Date : Nov 14 2017



Bill Jenkins, Intel | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing 17. Brought to you by Intel. (techno music) >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at the Super Computing Conference 2017. About 12 thousand people, talking about the outer edges of computing. It's pretty amazing. The keynote was huge. The square kilometer array, a new vocabulary word I learned today. It's pretty exciting times, and we're excited to have our next guest. He's Bill Jenkins. He's a Product Line Manager for AI on FPGAs at Intel. Bill, welcome. >> Thank you very much for having me. Nice to meet you, and nice to talk to you today. >> So you're right in the middle of this machine-learning AI storm, which we keep hearing more and more about. Kind of the next generation of big data, if you will. >> That's right. It's the most dynamic industry I've seen since the telecom industry back in the 90s. It's evolving every day, every month. >> Intel's been making some announcements, using this combination of software programming and FPGAs on the acceleration stack to get more performance out of the data center. Did I get that right? >> Sure, yeah, yeah. >> Pretty exciting. The use of both hardware, as well as software on top of it, to open up the solution stack, open up the ecosystem. Which of those things are you working on specifically? >> I really build first the enabling technology that brings the FPGA into that Intel ecosystem, where Intel is trying to provide that solution from top to bottom to deliver AI products. >> Jeff: Right. >> Into that market. FPGAs are a key piece of that because we provide a different way to accelerate those machine-learning and AI workloads. We can be an offload engine to a CPU, or we can be inline analytics to offload the system and get higher performance that way. We tie into that overall Intel ecosystem of tools and products. >> Right.
So that's a pretty interesting piece, because real-time streaming data is all the rage now, right? Not batch. You want to get it now. So how do you get it in? How do you get it written to the database? How do you get it into the microprocessor? That's a really, really important piece. That's different than even two years ago; you didn't really hear much about real-time. >> I think, like I said, it's evolving quite a bit. Now, a lot of people deal with training. It's the science behind it. The data scientists work to figure out what topologies they want to deploy and how they want to deploy 'em. But now, people are building products around it. >> Jeff: Right. >> And once they start deploying these technologies into products, they realize that they don't want to compensate for limitations in hardware; they want to work around them. A lot of this evolution that we're building is to try to find ways to do that compute more efficiently. What we call inferencing, the actual deployed machine-learning scoring, if you will. >> Jeff: Right. >> In a product, it's all about how quickly I can get the data out. It's not about waiting two seconds to start the processing. You know, in an autonomously driven car where someone's crossing the road, I'm not waiting two seconds to figure out it's a person. >> Right, right. >> I need it right away. So I need to be able to do that with video feeds, right off a disk drive, from the ethernet data coming in. I want to do that directly in line, so that my processor can do what it's good at, and we offload that processor to get better system performance. >> Right. And then on machine learning specifically, 'cause that is all the rage. And it is learning, so there is a real-time aspect to it. You talked about autonomous vehicles, but there's also continuous learning over time that's not necessarily dependent on learning immediately. >> Right. >> But continuous improvement over time. What are some of the unique challenges in machine learning?
And what are some of the ways that you guys are trying to address those? >> Once you've trained the network, people always have to go back and retrain. They say okay, I've got a good accuracy, but I want better performance. Then they start lowering the precision, and they say well, today we're at 32-bit, maybe 16-bit. Then they start looking into eight. But the problem is, their accuracy drops. So they retrain that network in an eight-bit topology, to get the performance benefit back with the higher accuracy. The flexibility of the FPGA actually allows people to take that network at 32-bit, with the 32-bit trained weights, but deploy it in lower precision. The hardware's so flexible that we can abstract that away, and do what we call FP11, an 11-bit floating point. Or even 8-bit floating point. Even here today at the show, we've got a binary and ternary demo, showcasing the flexibility that the FPGA can provide today with that building-block piece of hardware that the FPGA can be. And really provide, not only the topologies that people are trying to build today, but tomorrow. >> Jeff: Right. >> Future-proofing their hardware. But then the precisions that they may want to do. So that they don't have to retrain. They can get less than a 1% accuracy loss, but they can lower that precision to get all the performance benefits of that data scientist's work to come up with a new architecture. Right. But it's interesting 'cause there's trade-offs, right? >> Bill: Sure. There's no optimum solution. It's optimum as to what you're trying to optimize for. >> Bill: Right. So really, the ability to change, to continue to work on those learning algorithms, to be able to change your priorities, is pretty key. >> Yeah, a lot of times today, you want this. So this has been the mantra of the FPGA for 30-plus years. You deploy it today, and it works fine. Maybe you build an ASIC out of it. But what you want tomorrow is going to be different.
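The precision-lowering step described above can be illustrated with a minimal post-training quantization sketch. This is a generic illustration in plain Python (symmetric int8 scaling), not Intel's tool flow, and it can't express the non-standard widths like FP11 or binary/ternary that the FPGA's flexibility allows:

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: map FP32 weights onto
    int8 levels, keeping the scale needed to dequantize later."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [max(-127, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

if __name__ == "__main__":
    weights = [0.8, -0.31, 0.02, 1.0, -0.995]
    q, scale = quantize_int8(weights)
    restored = dequantize(q, scale)
    worst = max(abs(a - b) for a, b in zip(weights, restored))
    # Rounding error is bounded by half of one quantization step.
    assert worst <= scale / 2
    print(q)
```

The transcript's point is that retraining normally recovers the accuracy this rounding loses, while the FPGA route keeps the FP32-trained weights and absorbs the precision change in hardware instead.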
So maybe, if it's not changing so rapidly, you build the ASIC because there's runway to that. But if there isn't, you may just say, I have the FPGA, I can just reprogram it to do what's the next architecture, the next methodology. >> Right. >> So it gives you that future-proofing. That capability to sustain different topologies. Different architectures, different precisions. To kind of keep people going with the same piece of hardware. Without having to, say, spin up a new ASIC every year. >> Jeff: Right, right. Which, even then, it's so dynamic, it's probably faster than every year, the way things are going today. So the other thing you mentioned is topology, and it's not the same topology you mentioned, but this whole idea of edge. >> Sure. >> So moving more and more compute, and store, and smarts to the edge. 'Cause there's just not going to be time, you mentioned autonomous vehicles, a lot of applications to get everything back up into the cloud. Back into the data center. You guys are pushing this technology, not only in the data center, but progressively closer and closer to the edge. >> Absolutely. The data center has a need. It's always going to be there, but they're getting big. The amount of data that we're trying to process every day is growing. I always say that the telecom industry started the Information Age. Well, the Information Age has done a great job of collecting a lot of data. We have to process that. If you think about where, maybe I'll allude back to autonomous vehicles. You're talking about thousands of gigabytes, per day, of data generated. Smart factories. Exabytes of data generated a day. What are you going to do with all that? It has to be processed. We need that compute in the data center. But we have to start pushing it out into the edge, where I start thinking, well even a show like this, I want security. So, I want to do real-time weapons detection, right? Security prevention. I want to do smart city applications.
Just monitoring how traffic moves through a mall, so that I can control lighting and heating. All of these things at the edge, in the camera that's deployed on the street. In the camera that's deployed in a mall. All of that, we want to make those smarter, so that we can do more compute. To offload the amount of data that needs to be sent back to the data center. >> Jeff: Right. >> As much as possible. Relevant data gets sent back. >> No shortage of demand for compute, store, networking, is there? >> No, no. It's really a heterogeneous world, right? We need all the different compute. We need all the different aspects of transmission of the data with 5G. We need disk space to store it. >> Jeff: Right. >> We need cooling to cool it. It's really becoming a heterogeneous world. >> All right, well, I'm going to give you the last word. I can't believe we're in November of 2017. Yeah. Which is bananas. What are you working on for 2018? What are some of your priorities? If we talk a year from now, what are we going to be talking about? >> Intel's acquired a lot of companies over the past couple years now on AI. You're seeing a lot of merging of the FPGA into that ecosystem. We've got Nervana. We've got Movidius. We've got Mobileye acquisitions. Saffron Technologies. All of these things, where the FPGA is kind of a key piece of that because it gives you that flexibility of the hardware, to extend those pieces. You're going to see a lot more stuff in the cloud. A lot more stuff with partners next year. And really enabling that edge-to-data-center compute, with things like binary neural networks, ternary neural networks. All the different next generation of topologies to kind of keep that leading-edge flexibility that the FPGA can provide for people's products tomorrow. >> Jeff: Exciting times. >> Yeah, great. >> All right, Bill Jenkins. There's a lot going on in computing. If you're not getting your computer science degree, kids, think about it again. He's Bill Jenkins. I'm Jeff Frick.
You're watching theCUBE from Super Computing 2017. Thanks for watching. Thank you. (techno music)

Published Date : Nov 14 2017



Bernhard Friebe, Intel Programmable Solutions Group | Super Computing 2017


 

>> Announcer: From Denver, Colorado, it's theCUBE. Covering Super Computing 2017, brought to you by Intel. (upbeat music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at Super Computing 17. I think it's the 20th year of the convention. 12,000 people. We've never been here before. It's pretty amazing. Amazing keynote, really talking about space, and really big, big, big computing projects, so, excited to be here, and we've got our first guest of the day. He's Bernhard Friebe, he is the Senior Director of FPGA Software Solutions, I'll get that good by the end of the day, for the Intel Programmable Solutions Group. First off, welcome, Bernhard. >> Thank you. I'm glad to be here. >> Absolutely. So, have you been to this conference before? >> Yeah, a couple of times before. It's always a big event. Always a big show for us, so I'm excited. >> Yeah, and it's different, too, 'cause it's got a lot of academic influence, as well, as you walk around the outside. It's pretty hardcore. >> Yes, it's wonderful, and you see a lot of innovation going on, and we need to move fast. We need to move faster. That's what it is. And accelerate. >> And that's what you're all about, acceleration, so, Intel's making a lot of announcements, really, about acceleration with FPGAs. For acceleration in data centers and in big data, and all these big applications. So, explain just a little bit how that space is evolving and what some of the recent announcements are all about. >> The world of computing must accelerate. I think we all agree on that. We all see that that's a key requirement. And the FPGA is a truly versatile, multi-function accelerator. It accelerates so many workloads in the high-performance computing space, may it be financial, genomics, oil and gas, data analytics, and the list goes on. Machine learning is a very big one. The list goes on and on.
And, so, we're investing heavily in providing solutions which make it much easier for our users to develop and deploy FPGAs in a high-performance computing environment. >> You guys are taking a lot of steps to make the software programming of FPGAs a lot easier, so you don't have to be a hardcore hardware engineer, so you can open it up to a broader ecosystem and get a broader solution set. Is that right? >> That's right, and it's not just the hardware. How do you unlock the benefits of the FPGA as a versatile accelerator, so their parallelism, their ability to do real-time, low-latency acceleration of many different workloads, and how do you enable that in an environment which is truly dynamic and multi-function, like a data center. And so, the product we've recently announced is the acceleration stack for Xeon with FPGAs, which enables that use model. >> So, what are the components for that stack? >> It starts with hardware. So, we are building a hardware accelerator card, it's a PCI Express plug-in card, it's called the Programmable Acceleration Card. We have integrated solutions where you have everything on an FPGA in package, but what's common is a software framework solution stack, which sits on top of these different hardware implementations, which really makes it easy for a developer to develop an accelerator, for a user to then deploy that accelerator and run it in their environment, and it also enables a data center operator to basically enable the FPGA like any other compute resource by integrating it into their orchestration framework. So, multiple levels taking care of all those needs.
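The three roles the stack serves, a developer who builds an accelerator, a user who deploys it, and an operator who schedules it like any other resource, can be sketched conceptually. The class and method names below are invented for illustration and are not the actual Intel API:

```python
class AcceleratorFunction:
    """What a developer produces: a named function the card can run."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

class AcceleratorCard:
    """What a user deploys onto: a card that runs one loaded function."""
    def __init__(self):
        self.loaded = None

    def load(self, afu):
        self.loaded = afu

    def run(self, data):
        return self.loaded.fn(data)

class Orchestrator:
    """What an operator manages: cards exposed as schedulable resources,
    like any other compute resource in the framework."""
    def __init__(self, cards):
        self.cards = cards

    def submit(self, afu, data):
        card = self.cards[0]   # trivial placement policy, for the sketch
        card.load(afu)
        return card.run(data)

if __name__ == "__main__":
    double = AcceleratorFunction("double", lambda xs: [2 * x for x in xs])
    pool = Orchestrator([AcceleratorCard()])
    print(pool.submit(double, [1, 2, 3]))  # [2, 4, 6]
```

The payoff of the layering is in the last two lines: the application talks only to the orchestrator, never to a concrete device.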
So the demand for increased speed, to get that data in, get that data processed, get the analytics back out, is only growing exponentially. >> That's right, and FPGAs, due to their flexibility, have distinct advantages there. The traditional model is look-aside offload, where you have a processor, and then you offload your tasks to your accelerator. The FPGA, with its flexible I/Os and flexible core, can actually run directly in the data path, so that's what we call in-line processing. And what that allows people to do is take whatever the source is, may it be cameras, may it be storage, may it be the network, through ethernet, and stream it directly into the FPGA and do the acceleration as the data comes in, in a streaming way. And FPGAs provide really unique advantages there versus other types of accelerators. Low latency, very high bandwidth, and they're flexible in the sense that our customers can build different interfaces, different connectivity around those FPGAs. So, it's really amazing how versatile the usage of FPGAs has become. >> It is pretty interesting, because you're using all the benefits that come from hardware, hardware-based solutions, which you just get a lot of benefits when things are hardwired, with the software component and enabling a broader ecosystem to write ready-made solutions and integrations to their existing solutions that they already have. Great approach. >> The acceleration stack provides a consistent interface to the developer and the user of the FPGA. What that allows our ecosystem and our customers to do is to define these accelerators based on this framework, and then they can easily migrate those between different hardware platforms, so we're building in future improvements of the solution, and the consistent interfaces then allow our customers and partners to build their software stacks on top of it.
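Earlier in this exchange, look-aside offload is contrasted with in-line processing, where the accelerator sits in the data path and transforms records as they stream past. A generator pipeline gives a minimal sketch of the in-line shape; the `checksum` step is a hypothetical stand-in for an FPGA kernel:

```python
def inline_process(stream, accelerate):
    """In-line processing: transform each record as it flows past, so
    downstream consumers see accelerated results immediately, with no
    batch hand-off in the middle."""
    for record in stream:
        yield accelerate(record)

def frames_from_network():
    # Stand-in for a live source (camera, disk drive, ethernet).
    yield from ([1, 2], [3, 4], [5, 6])

if __name__ == "__main__":
    checksum = lambda frame: sum(frame)  # stand-in for the FPGA kernel
    for result in inline_process(frames_from_network(), checksum):
        print(result)  # prints 3, then 7, then 11
```

Because the pipeline is lazy, each record is accelerated the moment it arrives, which is the low-latency property Bernhard is pointing at.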
So, their investment, once they make it, and we target our Arria 10 Programmable Acceleration Card, can easily be leveraged and moved forward into the next generation strategy, and beyond. We enable, really, and encourage a broad ecosystem, to build solutions. You'll see that here at the show, many partners now have demos, and they show their solutions built on Intel FPGA hardware and the acceleration stack. >> OK, so I'm going to put you on the spot. So, these are announced, what's the current state of the general availability? >> We're sampling now on the cards, the acceleration stack is available for delivery to customers. A lot of it is open source, by the way, so it can already be downloaded from GitHub. And the partners are developing the solutions they are demonstrating today. The product will go into volume production in the first half of next year. So, we're very close. >> All right, very good. Well, Bernhard, thanks for taking a few minutes to stop by. >> Oh, it's my pleasure. >> All right. He's Bernhard, I'm Jeff. You're watching theCUBE from Super Computing 17. Thanks for watching. (upbeat music)

Published Date : Nov 14 2017

