Nancy Wang & Kate Watts | International Women's Day


 

>> Hello everyone. Welcome to theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE been profiling the leaders in the technology world, women in technology from developers to the boardroom, everything in between. We have two great guests promoting in from Malaysia. Nancy Wang is the general manager, also CUBE alumni from AWS Data Protection, and founder and board chair of Advancing Women in Tech, awit.org. And of course Kate Watts who's the executive director of Advancing Women in Tech.org. So it's awit.org. Nancy, Kate, thanks for coming all the way across remotely from Malaysia. >> Of course, we're coming to you as fast as our internet bandwidth will allow us. And you know, I'm just thrilled today that you get to see a whole nother aspect of my life, right? Because typically we talk about AWS, and here we're talking about a topic near and dear to my heart. >> Well, Nancy, I love the fact that you're spending a lot of time taking the empowerment to go out and help the industries and helping with the advancement of women in tech. Kate, the executive director it's a 501C3, it's nonprofit, dedicating to accelerating the careers of women in groups in tech. Can you talk about the organization? >> Yes, I can. So Advancing Women in Tech was founded in 2017 in order to fix some of the pathway problems that we're seeing on the rise to leadership in the industry. And so we specifically focus on supporting mid-level women in technical roles, get into higher positions. We do that in a few different ways through mentorship programs through building technical skills and by connecting people to a supportive community. So you have your peer network and then a vertical sort of relationships to help you navigate the next steps in your career. So to date we've served about 40,000 individuals globally and we're just looking to expand our reach and impact and be able to better support women in the industry. >> Nancy, talk about the creation, the origination story. How'd this all come together? Obviously the momentum, everyone in the industry's been focused on this for a long time. Where did AWIT come from? Advancing Women in Technology, that's the acronym. Advancing Women in Technology.org, where'd it come from? What's the origination story? >> Yeah, so AWIT really originated from this desire that I had, to Kate's point around, well if you look around right and you know, don't take my word for it, right? Look at stats, look at news reports, or just frankly go on your LinkedIn and see how many women in underrepresented groups are in senior technical leadership roles right out in the companies whose names we all know. And so that was my case back in 2016. And so when I first got the idea and back then I was actually at Google, just another large tech company in the valley, right? It was about how do we get more role models, how we get more, for example, women into leadership roles so they can bring up the next generation, right? And so this is actually part of a longer speech that I'm about to give on Wednesday and part of the US State Department speaker program. In fact, that's why Kate and I are here in Malaysia right now is working with over 200 women entrepreneurs from all over in Southeast Asia, including Malaysia Philippines, Vietnam, Borneo, you know, so many countries where having more women entrepreneurs can help raise the GDP right, and that fits within our overall mission of getting more women into top leadership roles in tech. 
>> You know, I was talking about Teresa Carlson she came on the program as well for this year this next season we're going to do. And she mentioned the decision between the US progress and international. And she's saying as much as it's still bad numbers, it's worse than outside the United States and needs to get better. Can you comment on the global aspect? You brought that up. I think it's super important to highlight that it's just not one area, it's a global evolution. >> Absolutely, so let me start, and I'd love to actually have Kate talk about our current programs and all of the international groups that we're working with. So as Teresa aptly mentioned there is so much work to be done not just outside the US and North Americas where typically tech nonprofits will focus, but rather if you think about the one to end model, right? For example when I was doing the product market fit workshop for the US State Department I had women dialing in from rice fields, right? So let me just pause there for a moment. They were holding their cell phones up near towers near trees just so that they can get a few minutes of time with me to do a workshop and how to accelerate their business. So if you don't call that the desire to propel oneself or accelerate oneself, not sure what is, right. And so it's really that passion that drove me to spend the next week and a half here working with local entrepreneurs working with policy makers so we can take advantage and really leverage that passion that people have, right? To accelerate more business globally. And so that's why, you know Kate will be leading our contingent with the United Nations Women Group, right? That is focused on women's economic empowerment because that's super important, right? One aspect can be sure, getting more directors, you know vice presidents into companies like Google and Amazon. But another is also how do you encourage more women around the world to start businesses, right? To reach economic and freedom independence, right? To overcome some of the maybe social barriers to becoming a leader in their own country. >> Yes, and if I think about our own programs and our model of being very intentional about supporting the learning development and skills of women and members of underrepresented groups we focused very much on providing global access to a number of our programs. For instance, our product management certification on Coursera or engineering management our upcoming women founders accelerator. We provide both access that you can get from anywhere. And then also very intentional programming that connects people into the networks to be able to further their networks and what they've learned through the skills online, so. >> Yeah, and something Kate just told me recently is these courses that Kate's mentioning, right? She was instrumental in working with the American Council on Education and so that our learners can actually get up to six college credits for taking these courses on product management engineering management, on cloud product management. And most recently we had our first organic one of our very first organic testimonials was from a woman's tech bootcamp in Nigeria, right? So if you think about the worldwide impact of these upskilling courses where frankly in the US we might take for granted right around the world as I mentioned, there are women dialing in from rice patties from other, you know, for example, outside the, you know corporate buildings in order to access this content. 
>> Can you think about the idea of, oh sorry, go ahead. >> Go ahead, no, go ahead Kate. >> I was going to say, if you can't see it, you can't become it. And so we are very intentional about ensuring that we have we're spotlighting the expertise of women and we are broadcasting that everywhere so that anybody coming up can gain the skills and the networks to be able to succeed in this industry. >> We'll make sure we get those links so we can promote them. Obviously we feel the same way getting the word out. I think a couple things I'd like to ask you guys cause I think you hit a great point. One is the economic advantage the numbers prove that diverse teams perform better number one, that's clear. So good point there. But I want to get your thoughts on the entrepreneurial equation. You mentioned founders and startups and there's also different makeups in different countries. It's not like the big corporations sometimes it's smaller business in certain areas the different cultures have different business sizes and business types. How do you guys see that factoring in outside the United States, say the big tech companies? Okay, yeah. The easy lower the access to get in education than stay with them, in other countries is it the same or is it more diverse in terms of business? >> So what really actually got us started with the US State Department was around our work with women founders. And I love for Kate to actually share her experience working with AWS startups in that capacity. But frankly, you know, we looked at the content and the mentor programs that were providing women who wanted to be executives, you know, quickly realize a lot of those same skills such as finding customers, right? Scaling your product and building channels can also apply to women founders, not just executives. And so early supporters of our efforts from firms such as Moderna up in Seattle, Emergence Ventures, Decibel Ventures in, you know, the Bay Area and a few others that we're working with right now. Right, they believed in the mission and really helped us scale out what is now our existing platform and offerings for women founders. >> Those are great firms by the way. And they also are very founder friendly and also understand the global workforce. I mean, that's a whole nother dimension. Okay, what's your reaction to all that? >> Yes, we have been very intentional about taking the product expertise and the learnings of women and in our network, we first worked with AWS startups to support the development of the curriculum for the recent accelerator for women founders that was held last spring. And so we're able to support 25 founders and also brought in the expertise of about 20 or 30 women from Advancing Women in Tech to be able to be the lead instructors and mentors for that. And so we have really realized that with this network and this individual sort of focus on product expertise building strong teams, we can take that information and bring it to folks everywhere. And so there is very much the intentionality of allowing founders allowing individuals to take the lessons and bring it to their individual circumstances and the cultures in which they are operating. But the product sense is a skill that we can support the development of and we're proud to do so. >> That's awesome. Nancy, I want to ask you some never really talk about data storage and AWS cloud greatness and goodness, here's different and you also work full-time at AWS and you're the founder or the chairman of this great organization. 
How do you balance both and do you get, they're getting behind you on this, Amazon is getting behind you on this. >> Well, as I say it's always easier to negotiate on the way in. But jokes aside, I have to say the leadership has been tremendously supportive. If you think about, for example, my leaders Wayne Duso who's also been on the show multiple times, Bill Vaas who's also been on the show multiple times, you know they're both founders and also operators entrepreneurs at heart. So they understand that it is important, right? For all of us, it's really incumbent on all of us who are in positions to do so, to create a pathway for more people to be in leadership roles for more people to be successful entrepreneurs. So, no, I mean if you just looked at LinkedIn they're always uploading my vote so they reach to more audiences. And frankly they're rooting for us back home in the US while we're in Malaysia this week. >> That's awesome. And I think that's a good culture to have that empowerment and I think that's very healthy. What's next for you guys? What's on the agenda? Take us through the activities. I know that you got a ton of things happening. You got your event out there, which is why you're out there. There's a bunch of other activities. I think you guys call it the Advancing Women in Tech week. >> Yes, this week we are having a week of programming that you can check out at Advancing Women in Tech.org. That is spotlighting the expertise of a number of women in our space. So it is three days of programming Tuesday, Wednesday and Thursday if you are in the US so the seventh through the ninth, but available globally. We are also going to be in New York next week for the event at the UN and are looking to continue to support our mentorship programs and also our work supporting women founders throughout the year. >> All right. I have to ask you guys if you don't mind get a little market data so you can share with us here at theCUBE. What are you hearing this year that's different in the conversation space around the topics, the interests? Obviously I've seen massive amounts of global acceleration around conversations, more video, things like this more stories are scaling, a lot more LinkedIn activity. It just seems like it's a lot different this year. Can you guys share any kind of current trends you're seeing relative to the conversations and topics being discussed across the the community? >> Well, I think from a needle moving perspective, right? I think due to the efforts of wonderful organizations including the Q for spotlighting all of these awesome women, right? Trailblazing women and the nonprofits the government entities that we work with there's definitely more emphasis on creating access and creating pathways. So that's probably one thing that you're seeing is more women, more investors posting about their activities. Number two, from a global trend perspective, right? The rise of women in security. I noticed that on your agenda today, you had Lena Smart who's a good friend of mine chief information security officer at MongoDB, right? She and I are actually quite involved in helping founders especially early stage founders in the security space. And so globally from a pure technical perspective, right? There's right more increasing regulations around data privacy, data sovereignty, right? For example, India's in a few weeks about to get their first data protection regulation there locally. 
So all of that is giving rise to yet another wave of opportunity and we want women founders uniquely positioned to take advantage of that opportunity. >> I love it. Kate, reaction to that? I mean founders, more pathways it sounds like a neural network, it sounds like AI enabled. >> Yes, and speaking of AI, with the rise of that we are also hearing from many community members the importance of continuing to build their skills upskill learn to be able to keep up with the latest trends. There's a lot of people wondering what does this mean for my own career? And so they're turning to organizations like Advancing Women in Tech to find communities to both learn the latest information, but also build their networks so that they are able to move forward regardless of what the industry does. >> I love the work you guys are doing. It's so impressive. I think the economic angle is new it's more amplified this year. It's always kind of been there and continues to be. What do you guys hope for by next year this time what do you hope to see different from a needle moving perspective, to use your word Nancy, for next year? What's the visual output in your mind? >> I want to see real effort made towards 50-50 representation in all tech leadership roles. And I'd like to see that happen by 2050. >> Kate, anything on your end? >> I love that. I'm going to go a little bit more touchy-feely. I want everybody in our space to understand that the skills that they build and that the networks they have carry with them regardless of wherever they go. And so to be able to really lean in and learn and continue to develop the career that you want to have. So whether that be at a large organization or within your own business, that you've got the potential to move forward on that within you. >> Nancy, Kate, thank you so much for your contribution. I'll give you the final word. Put a plug in for the organization. What are you guys looking for? Any kind of PSA you want to share with the folks watching? >> Absolutely, so if you're in a position to be a mentor, join as a mentor, right? Help elevate and accelerate the next generation of women leaders. If you're an investor help us invest in more women started companies, right? Women founded startups and lastly, if you are women looking to accelerate your career, come join our community. We have resources, we have mentors and who we have investors who are willing to come in on the ground floor and help you accelerate your business. >> Great work. Thank you so much for participating in our International Women's Day 23 program and we'd look to keep this going quarterly. We'll see you next year, next time. Thanks for coming on. Appreciate it. >> Thanks so much John. >> Thank you. >> Okay, women leaders here. >> Nancy: Thanks for having us >> All over the world, coming together for a great celebration but really highlighting the accomplishments, the pathways the investment, the mentoring, everything in between. It's theCUBE. Bring as much as we can. I'm John Furrier, your host. Thanks for watching.

Published Date : Mar 7 2023

SUMMARY :

John Furrier hosts theCUBE's International Women's Day coverage with Nancy Wang, general manager of AWS Data Protection and founder and board chair of Advancing Women in Tech (AWIT), and Kate Watts, AWIT's executive director, joining remotely from Malaysia. They discuss AWIT's founding in 2017, its mentorship programs, certification courses, and accelerator for women founders, their work with the US State Department and more than 200 women entrepreneurs across Southeast Asia, the rise of women in security, and their goal of 50-50 representation in tech leadership roles by 2050.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Kate | PERSON | 0.99+
Nancy | PERSON | 0.99+
Teresa | PERSON | 0.99+
Bill Vaas | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Teresa Carlson | PERSON | 0.99+
John | PERSON | 0.99+
Malaysia | LOCATION | 0.99+
Kate Watts | PERSON | 0.99+
Nigeria | LOCATION | 0.99+
Nancy Wang | PERSON | 0.99+
Wayne Duso | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Moderna | ORGANIZATION | 0.99+
Wednesday | DATE | 0.99+
American Council on Education | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Lena Smart | PERSON | 0.99+
2017 | DATE | 0.99+
Vietnam | LOCATION | 0.99+
Borneo | LOCATION | 0.99+
Emergence Ventures | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
2016 | DATE | 0.99+
United Nations Women Group | ORGANIZATION | 0.99+
Decibel Ventures | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
United States | LOCATION | 0.99+
Southeast Asia | LOCATION | 0.99+
LinkedIn | ORGANIZATION | 0.99+
2050 | DATE | 0.99+
MongoDB | ORGANIZATION | 0.99+
US State Department | ORGANIZATION | 0.99+
next year | DATE | 0.99+
International Women's Day | EVENT | 0.99+
25 founders | QUANTITY | 0.99+
Seattle | LOCATION | 0.99+
North Americas | LOCATION | 0.99+
AWS Data Protection | ORGANIZATION | 0.99+
CUBE | ORGANIZATION | 0.99+
three days | QUANTITY | 0.99+
seventh | QUANTITY | 0.99+
Bay Area | LOCATION | 0.99+
both | QUANTITY | 0.99+
today | DATE | 0.99+
next week | DATE | 0.99+
30 women | QUANTITY | 0.98+
One aspect | QUANTITY | 0.98+
Thursday | DATE | 0.98+
this year | DATE | 0.98+
about 40,000 individuals | QUANTITY | 0.98+
this year | DATE | 0.98+
last spring | DATE | 0.98+
this week | DATE | 0.98+
Tuesday | DATE | 0.98+

David Schmidt, Dell Technologies and Scott Clark, Intel | SuperComputing 22


 

(techno music intro) >> Welcome back to theCube's coverage of SuperComputing Conference 2022. We are here at day three covering the amazing events that are occurring here. I'm Dave Nicholson, with my co-host Paul Gillin. How's it goin', Paul? >> Fine, Dave. Winding down here, but still plenty of action. >> Interesting stuff. We got a full day of coverage, and we're having really, really interesting conversations. We sort of wrapped things up at Supercomputing 22 here in Dallas. I've got two very special guests with me, Scott from Intel and David from Dell, to talk about yeah supercomputing, but guess what? We've got some really cool stuff coming up after this whole thing wraps. So not all of the holiday gifts have been unwrapped yet, kids. Welcome gentlemen. >> Thanks so much for having us. >> Thanks for having us. >> So, let's start with you, David. First of all, explain the relationship in general between Dell and Intel. >> Sure, so obviously Intel's been an outstanding partner. We built some great solutions over the years. I think the market reflects that. Our customers tell us that. The feedback's strong. The products you see out here this week at Supercompute, you know, put that on display for everybody to see. And then as we think about AI in machine learning, there's so many different directions we need to go to help our customers deliver AI outcomes. Right, so we recognize that AI has kind of spread outside of just the confines of everything we've seen here this week. And now we've got really accessible AI use cases that we can explain to friends and family. We can talk about going into retail environments and how AI is being used to track inventory, to monitor traffic, et cetera. But really what that means to us as a bunch of hardware folks is we have to deliver the right platforms and the right designs for a variety of environments, both inside and outside the data center. And so if you look at our portfolio, we have some great products here this week, but we also have other platforms, like the XR4000, our shortest rack server ever that's designed to go into Edge environments, but is also built for those Edge AI use cases that supports GPUs. It supports AI on the CPU as well. And so there's a lot of really compelling platforms that we're starting to talk about, have already been talking about, and it's going to really enable our customers to deliver AI in a variety of ways. >> You mentioned AI on the CPU. Maybe this is a question for Scott. What does that mean, AI on the CPU? >> Well, as David was talking about, we're just seeing this explosion of different use cases. And some of those on the Edge, some of them in the Cloud, some of them on Prem. But within those individual deployments, there's often different ways that you can do AI, whether that's training or inference. And what we're seeing is a lot of times the memory locality matters quite a bit. You don't want to have to pay necessarily a cost going across the PCI express bus, especially with some of our newer products like the CPU Max series, where you can have a huge about of high bandwidth memory just sitting right on the CPU. Things that traditionally would have been accelerator only, can now live on a CPU, and that includes both on the inference side. 
We're seeing some really great things with images, where you might have a giant medical image that you need to be able to do extremely high resolution inference on or even text, where you might have a huge corpus of extremely sparse text that you need to be able to randomly sample very efficiently. >> So how are these needs influencing the evolution of Intel CPU architectures? >> So, we're talking to our customers. We're talking to our partners. This presents both an opportunity, but also a challenge with all of these different places that you can put these great products, as well as applications. And so we're very thoughtfully trying to go to the market, see where their needs are, and then meet those needs. This industry obviously has a lot of great players in it, and it's no longer the case that if you build it, they will come. So what we're doing is we're finding where are those choke points, how can we have that biggest difference? Sometimes there's generational leaps, and I know David can speak to this, can be huge from one system to the next just because everything's accelerated on the software side, the hardware side, and the platforms themselves. >> That's right, and we're really excited about that leap. If you take what Scott just described, we've been writing white papers, our team with Scott's team, we've been talking about those types of use cases using doing large image analysis and leveraging system memory, leveraging the CPU to do that, we've been talking about that for several generations now. Right, going back to Cascade Lake, going back to what we would call 14th generation power Edge. And so now as we prepare and continue to unveil, kind of we're in launch season, right, you and I were talking about how we're in launch season. As we continue to unveil and launch more products, the performance improvements are just going to be outstanding and we'll continue that evolution that Scott described. >> Yeah, I'd like to applaud Dell just for a moment for its restraint. Because I know you could've come in and taken all of the space in the convention center to show everything that you do. >> Would have loved to. >> In the HPC space. Now, worst kept secrets on earth at this point. Vying for number one place is the fact that there is a new Mission Impossible movie coming. And there's also new stuff coming from Intel. I know, I think allegedly we're getting close. What can you share with us on that front? And I appreciate it if you can't share a ton of specifics, but where are we going? David just alluded to it. >> Yeah, as David talked about, we've been working on some of these things for many years. And it's just, this momentum is continuing to build, both in respect to some of our hardware investments. We've unveiled some things both here, both on the CPU side and the accelerator side, but also on the software side. OneAPI is gathering more and more traction and the ecosystem is continuing to blossom. Some of our AI and HPC workloads, and the combination thereof, are becoming more and more viable, as well as displacing traditional approaches to some of these problems. And it's this type of thing where it's not linear. It all builds on itself. And we've seen some of these investments that we've made for a better half of a decade starting to bear fruit, but that's, it's not just a one time thing. It's just going to continue to roll out, and we're going to be seeing more and more of this. >> So I want to follow up on something that you mentioned. 
I don't know if you've ever heard that the Charlie Brown saying that sometimes the most discouraging thing can be to have immense potential. Because between Dell and Intel, you offer so many different versions of things from a fit for function perspective. As a practical matter, how do you work with customers, and maybe this is a question for you, David. How do you work with customers to figure out what the right fit is? >> I'll give you a great example. Just this week, customer conversations, and we can put it in terms of kilowatts to rack, right. How many kilowatts are you delivering at a rack level inside your data center? I've had an answer anywhere from five all the way up to 90. There's some that have been a bit higher that probably don't want to talk about those cases, kind of customers we're meeting with very privately. But the range is really, really large, right, and there's a variety of environments. Customers might be ready for liquid today. They may not be ready for it. They may want to maximize air cooling. Those are the conversations, and then of course it all maps back to the workloads they wish to enable. AI is an extremely overloaded term. We don't have enough time to talk about all the different things that tuck under that umbrella, but the workloads and the outcomes they wish to enable, we have the right solutions. And then we take it a step further by considering where they are today, where they need to go. And I just love that five to 90 example of not every customer has an identical cookie cutter environment, so we've got to have the right platforms, the right solutions, for the right workloads, for the right environments. >> So, I like to dive in on this power issue, to give people who are watching an idea. Because we say five kilowatts, 90 kilowatts, people are like, oh wow, hmm, what does that mean? 90 kilowatts is more than 100 horse power if you want to translate it over. It's a massive amount of power, so if you think of EV terms. You know, five kilowatts is about a hairdryer's around a kilowatt, 1,000 watts, right. But the point is, 90 kilowatts in a rack, that's insane. That's absolutely insane. The heat that that generates has got to be insane, and so it's important. >> Several houses in the size of a closet. >> Exactly, exactly. Yeah, in a rack I explain to people, you know, it's like a refrigerator. But, so in the arena of thermals, I mean is that something during the development of next gen architectures, is that something that's been taken into consideration? Or is it just a race to die size? >> Well, you definitely have to take thermals into account, as well as just the power of consumption themselves. I mean, people are looking at their total cost of ownership. They're looking at sustainability. And at the end of the day, they need to solve a problem. There's many paths up that mountain, and it's about choosing that right path. We've talked about this before, having extremely thoughtful partners, we're just not going to common-torily try every single solution. We're going to try to find the ones that fit that right mold for that customer. And we're seeing more and more people, excuse me, care about this, more and more people wanting to say, how do I do this in the most sustainable way? How do I do this in the most reliable way, given maybe different fluctuations in their power consumption or their power pricing? We're developing more software tools and obviously partnering with great partners to make sure we do this in the most thoughtful way possible. 
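For reference, using standard conversion factors rather than figures quoted in the interview: one mechanical horsepower is about 746 W, and each kilowatt of electrical load becomes roughly 3,412 BTU/hr of heat that the cooling system has to remove, so a fully loaded 90 kW rack works out to

\[
90\,\mathrm{kW} \times \frac{1\,\mathrm{hp}}{0.746\,\mathrm{kW}} \approx 121\,\mathrm{hp},
\qquad
90\,\mathrm{kW} \times 3{,}412\,\tfrac{\mathrm{BTU/hr}}{\mathrm{kW}} \approx 3.1 \times 10^{5}\,\mathrm{BTU/hr},
\]

which squares with the "more than 100 horsepower" comparison made above.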
>> Intel put a lot of, made a big investment by buying Habana Labs for its acceleration technology. They're based in Israel. You're based on the west coast. How are you coordinating with them? How will the Habana technology work its way into more mainstream Intel products? And how would Dell integrate those into your servers? >> Good question. I guess I can kick this off. So Habana is part of the Intel family now. They've been integrated in. It's been a great journey with them, as some of their products have launched on AWS, and they've had some very good wins on MLPerf and things like that. I think it's about finding the right tool for the job, right. Not every problem is a nail, so you need more than just a hammer. And so we have the Xeon series, which is incredibly flexible, can do so many different things. It's what we've come to know and love. On the other end of the spectrum, we obviously have some of these more deep learning focused accelerators. And if that's your problem, then you can solve that problem in incredibly efficient ways. The accelerators themselves are somewhere in the middle, so you get that kind of Goldilocks zone of flexibility and power. And depending on your use case, depending on what you know your workloads are going to be day in and day out, one of these solutions might work better for you. A combination might work better for you. Hybrid compute starts to become really interesting. Maybe you have something that you need 24/7, but then you only need a burst to certain things. There's a lot of different options out there. >> The portfolio approach. >> Exactly. >> And then what I love about the work that Scott's team is doing, customers have told us this week in our meetings, they do not want to spend developer's time porting code from one stack to the next. They want that flexibility of choice. Everyone does. We want it in our lives, in our every day lives. They need that flexibility of choice, but they also, there's an opportunity cost when their developers have to choose to port some code over from one stack to another or spend time improving algorithms and doing things that actually generate, you know, meaningful outcomes for their business or their research. And so if they are, you know, desperately searching I would say for that solution and for help in that area, and that's what we're working to enable soon. >> And this is what I love about oneAPI, our software stack, it's open first, heterogeneous first. You can take SYCL code, it can run on competitor's hardware. It can run on Intel hardware. It's one of these things that you have to believe long term, the future is open. Wall gardens, the walls eventually crumble. And we're just trying to continue to invest in that ecosystem to make sure that the in-developer at the end of the day really gets what they need to do, which is solving their business problem, not tinkering with our drivers. >> Yeah, I actually saw an interesting announcement that I hadn't been tracking. I hadn't been tracking this area. Chiplets, and the idea of an open standard where competitors of Intel from a silicone perspective can have their chips integrated via a universal standard. And basically you had the top three silicone vendors saying, yeah, absolutely, let's work together. Cats and dogs. >> Exactly, but at the end of the day, it's whatever menagerie solves the problem. >> Right, right, exactly. And of course Dell can solve it from any angle. >> Yeah, we need strong partners to build the platforms to actually do it. 
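To make the oneAPI and SYCL portability point above concrete, here is a minimal, illustrative sketch of a SYCL 2020 kernel (a generic vector add, not code discussed in the interview). The same source targets whatever device the runtime selects: an Intel CPU, an Intel GPU, or another vendor's accelerator where a SYCL implementation and backend are available.

```cpp
// Minimal SYCL 2020 vector-add sketch: the same source runs on any
// device the installed SYCL runtime can target (CPU, GPU, accelerator).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  const size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  // A default-constructed queue binds to a device via the default selector.
  sycl::queue q;
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  {
    // Buffers hand the host data to the runtime for the duration of this scope.
    sycl::buffer<float> bufA(a.data(), sycl::range<1>(n));
    sycl::buffer<float> bufB(b.data(), sycl::range<1>(n));
    sycl::buffer<float> bufC(c.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor A(bufA, h, sycl::read_only);
      sycl::accessor B(bufB, h, sycl::read_only);
      sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
      // One work-item per element; the runtime maps this onto the device.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  } // Buffer destructors synchronize results back into the host vectors.

  std::cout << "c[0] = " << c[0] << " (expected 3)\n";
  return 0;
}
```

Built with a oneAPI DPC++ toolchain or another SYCL compiler, the only thing that changes between targets is the device the queue binds to, which is the portability argument being made in the conversation.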
At the end of the day, silicone without software is just sand. Sand with silicone is poorly written prose. But without an actual platform to put it on, it's nothing, it's a box that sits in the corner. >> David, you mentioned that 90% of power age servers now support GPUs. So how is this high-performing, the growth of high performance computing, the demand, influencing the evolution of your server architecture? >> Great question, a couple of ways. You know, I would say 90% of our platforms support GPUs. 100% of our platforms support AI use cases. And it goes back to the CPU compute stack. As we look at how we deliver different form factors for customers, we go back to that range, I said that power range this week of how do we enable the right air coolant solutions? How do we deliver the right liquid cooling solutions, so that wherever the customer is in their environment, and whatever footprint they have, we're ready to meet it? That's something you'll see as we go into kind of the second half of launch season and continue rolling out products. You're going to see some very compelling solutions, not just in air cooling, but liquid cooling as well. >> You want to be more specific? >> We can't unveil everything at Supercompute. We have a lot of great stuff coming up here in the next few months, so. >> It's kind of like being at a great restaurant when they offer you dessert, and you're like yeah, dessert would be great, but I just can't take anymore. >> It's a multi course meal. >> At this point. Well, as we wrap, I've got one more question for each of you. Same question for each of you. When you think about high performance computing, super computing, all of the things that you're doing in your partnership, driving artificial intelligence, at that tip of the spear, what kind of insights are you looking forward to us being able to gain from this technology? In other words, what cool thing, what do you think is cool out there from an AI perspective? What problem do you think we can solve in the near future? What problems would you like to solve? What gets you out of bed in the morning? Cause it's not the little, it's not the bits and the bobs and the speeds and the feats, it's what we're going to do with them, so what do you think, David? >> I'll give you an example. And I think, I saw some of my colleagues talk about this earlier in the week, but for me what we could do in the past two years to unable our customers in a quarantine pandemic environment, we were delivering platforms and solutions to help them do their jobs, help them carry on in their lives. And that's just one example, and if I were to map that forward, it's about enabling that human progress. And it's, you know, you ask a 20 year version of me 20 years ago, you know, if you could imagine some of these things, I don't know what kind of answer you would get. And so mapping forward next decade, next two decades, I can go back to that example of hey, we did great things in the past couple of years to enable our customers. Just imagine what we're going to be able to do going forward to enable that human progress. You know, there's great use cases, there's great image analysis. We talked about some. The images that Scott was referring to had to do with taking CAT scan images and being able to scan them for tumors and other things in the healthcare industry. That is stuff that feels good when you get out of bed in the morning, to know that you're enabling that type of progress. >> Scott, quick thoughts? >> Yeah, and I'll echo that. 
It's not one specific use case, but it's really this wave front of all of these use cases, from the very micro of developing the next drug to finding the next battery technology, all the way up to the macro of trying to have an impact on climate change or even the origins of the universe itself. All of these fields are seeing these massive gains, both from the software, the hardware, the platforms that we're bringing to bear to these problems. And at the end of the day, humanity is going to be fundamentally transformed by the computation that we're launching and working on today. >> Fantastic, fantastic. Thank you, gentlemen. You heard it hear first, Intel and Dell just committed to solving the secrets of the universe by New Years Eve 2023. >> Well, next Supercompute, let's give us a little time. >> The next Supercompute Convention. >> Yeah, next year. >> Yeah, SC 2023, we'll come back and see what problems have been solved. You heard it hear first on theCube, folks. By SC 23, Dell and Intel are going to reveal the secrets of the universe. From here, at SC 22, I'd like to thank you for joining our conversation. I'm Dave Nicholson, with my co-host Paul Gillin. Stay tuned to theCube's coverage of Supercomputing Conference 22. We'll be back after a short break. (techno music)

Published Date : Nov 17 2022

SUMMARY :

Dave Nicholson and Paul Gillin close out theCUBE's Supercomputing 22 coverage in Dallas with David Schmidt of Dell Technologies and Scott Clark of Intel. They discuss the Dell-Intel partnership, running AI inference directly on CPUs with high-bandwidth memory, rack power densities ranging from 5 to 90 kilowatts and the cooling choices they drive, Habana accelerators and the portfolio approach, the oneAPI software stack and SYCL portability, open chiplet standards, and upcoming platform launches aimed at AI and HPC workloads from the edge to the data center.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Maribel | PERSON | 0.99+
John | PERSON | 0.99+
Keith | PERSON | 0.99+
Equinix | ORGANIZATION | 0.99+
Matt Link | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Indianapolis | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Scott | PERSON | 0.99+
Dave Nicholson | PERSON | 0.99+
Tim Minahan | PERSON | 0.99+
Paul Gillin | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Lisa | PERSON | 0.99+
Europe | LOCATION | 0.99+
Stephanie Cox | PERSON | 0.99+
Akanshka | PERSON | 0.99+
Budapest | LOCATION | 0.99+
Indiana | LOCATION | 0.99+
Steve Jobs | PERSON | 0.99+
October | DATE | 0.99+
India | LOCATION | 0.99+
Stephanie | PERSON | 0.99+
Nvidia | ORGANIZATION | 0.99+
Chris Lavilla | PERSON | 0.99+
2006 | DATE | 0.99+
Tanuja Randery | PERSON | 0.99+
Cuba | LOCATION | 0.99+
Israel | LOCATION | 0.99+
Keith Townsend | PERSON | 0.99+
Akanksha | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
Akanksha Mehrotra | PERSON | 0.99+
London | LOCATION | 0.99+
September 2020 | DATE | 0.99+
Intel | ORGANIZATION | 0.99+
David Schmidt | PERSON | 0.99+
90% | QUANTITY | 0.99+
$45 billion | QUANTITY | 0.99+
October 2020 | DATE | 0.99+
Africa | LOCATION | 0.99+

Lucas Snyder, Indiana University and Karl Oversteyns, Purdue University | SuperComputing 22


 

(upbeat music) >> Hello, beautiful humans and welcome back to Supercomputing. We're here in Dallas, Texas giving you live coverage with theCUBE. I'm joined by David Nicholson. Thank you for being my left arm today. >> Thank you Savannah. >> It's a nice little moral. Very excited about this segment. We've talked a lot about how the fusion between academia and the private sector is a big theme at this show. You can see multiple universities all over the show floor as well as many of the biggest companies on earth. We were very curious to learn a little bit more about this from people actually in the trenches. And we are lucky to be joined today by two Purdue students. We have Lucas and Karl. Thank you both so much for being here. >> One Purdue, one IU, I think. >> Savannah: Oh. >> Yeah, yeah, yeah. >> I'm sorry. Well then wait, let's give Indiana University their fair do. That's where Lucas is. And Karl is at Purdue. Sorry folks. I apparently need to go back to school to learn how to read. (chuckles) In the meantime, I know you're in the middle of a competition. Thank you so much for taking the time out. Karl, why don't you tell us what's going on? What is this competition? What brought you all here? And then let's dive into some deeper stuff. >> Yeah, this competition. So we're a joint team between Purdue and IU. We've overcome our rivalries, age old rivalries to computer at the competition. It's a multi-part competition where we're going head to head against other teams from all across the world, benchmarking our super computing cluster that we designed. >> Was there a moment of rift at all when you came together? Or was everyone peaceful? >> We came together actually pretty nicely. Our two advisors they were very encouraging and so we overcame that, no hostility basically. >> I love that. So what are you working on and how long have you guys been collaborating on it? You can go ahead and start Lucas. >> So we've been prepping for this since the summer and some of us even before that. >> Savannah: Wow. >> And so currently we're working on the application phase of the competition. So everybody has different specialties and basically the competition gives you a set of rules and you have to accomplish what they tell you to do in the allotted timeframe and run things very quickly. >> And so we saw, when we came and first met you, we saw that there are lights and sirens and a monitor looking at the power consumption involved. So part of this is how much power is being consumed. >> Karl: That's right. >> Explain exactly what are the what are the rules that you have to live within? >> So, yeah, so the main constraint is the time as we mentioned and the power consumption. So for the benchmarking phase, which was one, two days ago there was a hard camp of 3000 watts to be consumed. You can't go over that otherwise you would be penalized for that. You have to rerun, start from scratch basically. Now there's a dynamic one for the application section where it's it modulates at random times. So we don't know when it's going to go down when it's going to go back up. So we have to adapt to that in real time. >> David: Oh, interesting. >> Dealing with a little bit of real world complexity I guess probably is simulation is here. I think that's pretty fascinating. I want to know, because I am going to just confess when I was your age last week, I did not understand the power of supercomputing and high performance computing. Lucas, let's start with you. 
How did you know this was the path you wanted to go down in your academic career? >> David: Yeah, what's your background? >> Yeah, give us some. >> So my background is intelligence systems engineering which is kind of a fusion. It's between, I'm doing bioengineering and then also more classical computer engineering. So my background is biology actually. But I decided to go down this path kind of on a whim. My professor suggested it and I've kind of fallen in love with it. I did my summer internship doing HPC and I haven't looked back. >> When did you think you wanted to go into this field? I mean, in high school, did you have a special teacher that sparked it? What was it? >> Lucas: That's funny that you say that. >> What was in your background? >> Yes, I mean, in high school towards the end I just knew that, I saw this program at IU and it's pretty new and I just thought this would be a great opportunity for me and I'm loving it so far. >> Do you have family in tech or is this a different path for you? >> Yeah, this is a different path for me, but my family is so encouraging and they're very happy for me. They text me all the time. So I couldn't be happier. >> Savannah: Just felt that in my heart. >> I know. I was going to say for the parents out there get the tissue out. >> Yeah, yeah, yeah. (chuckles) >> These guys they don't understand. But, so Karl, what's your story? What's your background? >> My background, I'm a major in unmanned Aerial systems. So this is a drones commercial applications not immediately connected as you might imagine although there's actually more overlap than one might think. So a lot of unmanned systems today a lot of it's remote sensing, which means that there's a lot of image processing that takes place. Mapping of a field, what have you, or some sort of object, like a silo. So a lot of it actually leverages high performance computing in order to map, to visualize much replacing, either manual mapping that used to be done by humans in the field or helicopters. So a lot of cost reduction there and efficiency increases. >> And when did you get this spark that said I want to go to Purdue? You mentioned off camera that you're from Belgium. >> Karl: That's right. >> Did you, did you come from Belgium to Purdue or you were already in the States? >> No, so I have family that lives in the States but I grew up in Belgium. >> David: Okay. >> I knew I wanted to study in the States. >> But at what age did you think that science and technology was something you'd be interested in? >> Well, I've always loved computers from a young age. I've been breaking computers since before I can remember. (chuckles) Much to my parents dismay. But yeah, so I've always had a knack for technology and that's sort of has always been a hobby of mine. >> And then I want to ask you this question and then Lucas and then Savannah will get some time. >> Savannah: It cool, will just sit here and look pretty. >> Dream job. >> Karl: Dream job. >> Okay. So your undergrad both you. >> Savannah: Offering one of my questions. Kind of, It's adjacent though. >> Okay. You're undergrad now? Is there grad school in your future do you feel that's necessary? Is that something you want to pursue? >> I think so. Entrepreneurship is something that's been in the back of my head for a while as well. So may be or something. >> So when I say dream job, understand could be for yourself. >> Savannah: So just piggyback. >> Dream thing after academia or stay in academia. What's do you think at this point? 
>> That's a tough question. You're asking. >> You'll be able to review this video in 10 years. >> Oh boy. >> This is give us your five year plan and then we'll have you back on theCUBE and see 2027. >> What's the dream? There's people out here watching this. I'm like, go, hey, interesting. >> So as I mentioned entrepreneurship I'm thinking I'll start a company at some point. >> David: Okay. >> Yeah. In what? I don't know yet. We'll see. >> David: Lucas, any thoughts? >> So after graduation, I am planning to go to grad school. IU has a great accelerated master's degree program so I'll stay an extra year and get my master's. Dream job is, boy, that's impossible to answer but I remember telling my dad earlier this year that I was so interested in what NASA was doing. They're sending a probe to one of the moons of Jupiter. >> That's awesome. From a parent's perspective the dream often is let's get the kids off the payroll. So I'm sure that your families are happy to hear that you have. >> I think these two will be right in that department. >> I think they're going to be okay. >> Yeah, I love that. I was curious, I want to piggyback on that because I think when NASA's doing amazing we have them on the show. Who doesn't love space. >> Yeah. >> I'm also an entrepreneur though so I very much empathize with that. I was going to ask to your dream job, but also what companies here do you find the most impressive? I'll rephrase. Because I was going to say, who would you want to work with? >> David: Anything you think is interesting? >> But yeah. Have you even had a chance to walk the floor? I know you've been busy competing >> Karl: Very little. >> Yeah, I was going to say very little. Unfortunately I haven't been able to roam around very much. But I look around and I see names that I'm like I can't even, it's crazy to see them. Like, these are people who are so impressive in the space. These are people who are extremely smart. I'm surrounded by geniuses everywhere I look, I feel like, so. >> Savannah: That that includes us. >> Yeah. >> He wasn't talking about us. Yeah. (laughs) >> I mean it's hard to say any of these companies I would feel very very lucky to be a part of, I think. >> Well there's a reason why both of you were invited to the party, so keep that in mind. Yeah. But so not a lot of time because of. >> Yeah. Tomorrow's our day. >> Here to get work. >> Oh yes. Tomorrow gets play and go talk to everybody. >> Yes. >> And let them recruit you because I'm sure that's what a lot of these companies are going to be doing. >> Yeah. Hopefully it's plan. >> Have you had a second at all to look around Karl. >> A Little bit more I've been going to the bathroom once in a while. (laughs) >> That's allowed I mean, I can imagine that's a vital part of the journey. >> I've ruin my gaze a little bit to what's around all kinds of stuff. Higher education seems to be very important in terms of their presence here. I find that very, very impressive. Purdue has a big stand IU as well, but also others all from Europe as well and Asia. I think higher education has a lot of potential in this field. >> David: Absolutely. >> And it really is that union between academia and the private sector. We've seen a lot of it. But also one of the things that's cool about HPC is it's really not ageist. It hasn't been around for that long. So, I mean, well, at this scale it's obviously this show's been going on since 1988 before you guys were even probably a thought. But I think it's interesting. 
It's so fun to get to meet you both. Thank you for sharing about what you're doing and what your dreams are. Lucas and Karl. >> David: Thanks for taking the time. >> I hope you win and we're going to get you off the show here as quickly as possible so you can get back to your teams and back to competing. David, great questions as always, thanks for being here. And thank you all for tuning in to theCUBE Live from Dallas, Texas, where we are at Supercomputing. My name's Savannah Peterson and I hope you're having a beautiful day. (gentle upbeat music)

Published Date : Nov 16 2022

SUMMARY :

Savannah Peterson and David Nicholson talk with students Lucas Snyder of Indiana University and Karl Oversteyns of Purdue University at Supercomputing 22 in Dallas. The two rival schools fielded a joint team for the student cluster competition, benchmarking a system they designed under a 3,000-watt power cap that fluctuates dynamically during the application phase. The students describe their backgrounds, their plans for graduate school and entrepreneurship, and their impressions of the academic and industry presence at the show.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Savannah | PERSON | 0.99+
David | PERSON | 0.99+
David Nicholson | PERSON | 0.99+
Belgium | LOCATION | 0.99+
Karl | PERSON | 0.99+
NASA | ORGANIZATION | 0.99+
3000 watts | QUANTITY | 0.99+
Lucas | PERSON | 0.99+
IU | ORGANIZATION | 0.99+
Europe | LOCATION | 0.99+
Karl Oversteyns | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
five year | QUANTITY | 0.99+
Asia | LOCATION | 0.99+
Lucas Snyder | PERSON | 0.99+
Dallas, Texas | LOCATION | 0.99+
Purdue | ORGANIZATION | 0.99+
two advisors | QUANTITY | 0.99+
Tomorrow | DATE | 0.99+
two | QUANTITY | 0.99+
Purdue | LOCATION | 0.99+
1988 | DATE | 0.99+
last week | DATE | 0.99+
Jupiter | LOCATION | 0.99+
both | QUANTITY | 0.99+
Purdue University | ORGANIZATION | 0.99+
10 years | QUANTITY | 0.99+
One | QUANTITY | 0.99+
today | DATE | 0.99+
two days ago | DATE | 0.98+
one | QUANTITY | 0.98+
Indiana University | ORGANIZATION | 0.98+
Indiana University | ORGANIZATION | 0.97+
earlier this year | DATE | 0.93+
earth | LOCATION | 0.93+
first | QUANTITY | 0.92+
Supercomputing | ORGANIZATION | 0.9+
2027 | TITLE | 0.86+
HPC | ORGANIZATION | 0.8+
theCUBE | ORGANIZATION | 0.8+
States | LOCATION | 0.56+
second | QUANTITY | 0.48+
22 | QUANTITY | 0.38+

Sanzio Bassini, Cineca | CUBE Conversation, July 2021


 

(upbeat music) >> Welcome to the CUBE Conversation. I'm Lisa Martin. I'm talking next with Sanzio Bassini, the Head of High Performance Computing at Cineca, at DELL technologies customer. Sanzio, welcome to the CUBE. >> Thank you, it's a pleasure, it's a pleasure. >> Likewise, nice to see you. So tell us a little bit about Cineca. This is a large computing center, but a very large Italian nonprofit consortium. Tell us about it. >> Yes, Cineca been founded 50 years ago, from the university systems in Italy. For a statutory mission, which is to support, the scientific discovery, and the industry innovations, using the High Performance Computing and the correlated methodologies like a, Artificial Intelligence, which is one of the, you see the more, in a, in a adopted in those days, but together with the big data processing and and simulation. Yes, we are a consortium, which means that this is a private not-for-profit organizations. Currently, our member of the consortium, almost all the universities in Italy and also all the national agencies for those selected structures. Uh. The main quarter of Cineca is in Bologna, which is in the heart Nation, with the bunch of presence in Milan, in Rome and in Naples, so we are a consultation organization. >> And I also read that you were, are the top 10 out of the top 500 of the world's fastest super computers. That's a pretty big accomplishment. >> Yes. That is a part of our institutional missions, the last 10 to 15 years we have been to say, frequent flyers in the top 10. There been at least two, three systems that have been ranked at the top 10. Apart, the.., whatever would be the meaning of such an advance market, there's a lot of its criticalities. We are well aware. The idea is that we're enabling the scientific discovery, by means of providing the most advanced systems and the co-designing, the most advanced HPC systems to promote and support the, what is the, excellence in science. And that being part of European high-performance computing IT system. That is the case. >> Excellent. Now, talk to me about some of the challenges that Cineca is trying to solve in particular, the Human Brain Project. Talk to us a little bit about that and how you're leveraging high-performance computing to accelerate scientific discovery. >> Um, The Human Brain Project is one of the flagship project that has been co-founded by the European commission and that the participating member states. Is not as another situations that are undertaking, it's definitely a joint collaboration between members states and the European commission. There are two different right now, flagships together with another, that is in progress, which is that the quantum of flagship, the first two flagship abroad that that has been lost. The commission for operation with the participating states has been one on the digraph vein on which also we are participating in directly together with the CNR, is the national business counselor. And the second for which we are core partners of the HPC that is, the Human Brain Project. That, that is a big flagship, one million offer, of newer investment, co-founded by the participating states and that the European commission. The project it's going to set up, in what to do be the, third strategic grant agreement that they will go over the next three years, the good, the complete that the, the whole process. Then we see what is going to happen at Africa. We thought that their would be some others progress offer these big projects. 
It's a project that combines both the technology issues, like designing high-performance computing systems that meet the requirements of the community, and the big scientific challenges correlated to the physiological functions of the human brain, including the different aspects that have to do with the behavior of the human brain from the pathological point of view and from the physiological point of view. Better understanding could be the way to face, let's say, the pathologies; some of those are very much correlated with aging, and that will impact public health systems. Others correlate with supporting the physiological knowledge of the human brain. And finally there is, let me say, the technology transfer that comes from knowing the physiological behavior of the human brain. Just to use a sort of metaphor: from the point of view of computational performance, the human brain is more than an exascale system, but with an energy consumption which is very low; we are talking about some hundreds of watts. So some hundreds of watts of energy provide an extreme computational performance. So if we could organize the technology of high-performance computing, in terms of interconnects, neuromorphic computing systems and the exploitation of those kinds of technologies, to build a system that might provide that computational power, it would represent a tremendous step ahead in facing the big challenges of our age, like energy, personalized medicine, climate change, food, all those kinds of big socioeconomic challenges we are facing. >> Sorry Sanzio, I was reading that besides the Human Brain Project there are other projects going on, such as the ones you mentioned. I'd like to understand how Cineca is working with Dell Technologies. You have to translate, as you mentioned a minute ago, the scientific requirements for discovery into high-performance computing requirements. Talk to me about how you've been doing that with partners like Dell Technologies. >> Yes. In our computing architectures in particular, we needed to address the capability of facing the data processing involved in the Human Brain Project and, generally speaking, in data-driven science; to combine that with the capability to provide cloud access to the system, by means of container technologies; and also to address the creation of a federation of HPC systems in Europe.
So in the end, the requirements and the terms of reference declined into a system capable of managing, let's say in a holistic approach, the data processing, the cloud computing services, and the opportunity to be integrated into a federation of HPC systems in Europe. With that in mind, we managed a competitive-dialogue procurement process, in which we shared with the different potential technology providers what the vision would be and what the constraints were (inaudible), along with other kinds of constraints, like environmental constraints, sharing with the technology providers the vision for the solution in a very, let's say, hard competitive dialogue. In the end it resulted in a sort of, I don't want to say, Darwinian process, where the survivor among the different technology providers was Dell, which showed the characteristics of a solution that was the most compliant and at the same time flexible with respect to the combination of very different constraints and requirements. That has been the process and its outcome. >> I like that you mentioned that Darwinian survival of the fittest, and that Dell Technologies has been, it sounds like, a pretty flexible partner, because you've got so many different scientific needs to meet for different researchers. You mentioned that this is a multi-national effort; how does Cineca serve and work with teams not only in Italy, but in other countries and from other institutes? >> Definitely. The commitment we have made together with the European member states is that, by means of scientific merit and a peer-review process, roughly half of our production capacity is shared at the European level. It is a commitment that we share together with France, Germany, Spain and others. So half of our production capacity is shared at the European level, where of course Italian scientists can also apply and participate, but in a sort of open, advanced competition addressing excellence in science. The remaining 50% of our production capacity is for the national community, somehow to prepare and support the Italian community to be competitive on the European and international scenario. That setup also leads to agreements at the international level, with respect to some of the programs being promoted in the US and in Japan as well. So from this point of view, in some cases the access also comes from collaborations and sharing arrangements with US researchers or Japanese researchers, in an open space. >> Open space, it sounds like the Human Brain Project, which this HPC is powering and which has been around since 2013, is really facilitating global collaboration. Talk to me about some of the results that the high-performance computing environment has helped the Human Brain Project to achieve so far.
>> The main outcomes will be consolidated in the next phase, which will be led by EuroHPC, the Joint Undertaking entity that has been created for consolidating and progressing the high-performance computing ecosystem in Europe. One outcome is represented by the federation of high-performance computing systems at the European level: there is a capability that has been developed and elaborated inside the Human Brain flagship project, a federation of high-performance computing systems in Europe that provides open services based on two concepts. One is the sharing of identity at the European level, which means that the identity of the users, or researchers more properly, is unique and universal at the European level. The same identity management and authentication give access to the Cineca systems, to the CEA system in France, to the Jülich system in Germany, to the CSCS system in Switzerland, and to the MareNostrum system in Spain. That is the part related to federated identity management; the other part is related to the federation of data access. So from the point of view of the scientific community, mostly the community of the Human Brain Project, but it will be opened to other domains and other communities, the data can be accessed in a seamless mode across Europe. From the technological point of view, or let's say from the infrastructure point of view, that is a very strong result. From the scientific point of view, most of the work supported by Cineca has to do with two specific targets. One is the elaboration of the data provided by LENS, the laboratory facility in Florence, which is one of the partners; from the point of view of provisioning the data, that is the scanning data that comes from mouse brains, which are used for (inaudible). And the second part is for the large-scale studies of the cortex of the human brain, which are carried on by a couple of groups leading that action at the European level, together with a group of the National Research Council, the CNR. Those are the two main outcomes for which we are providing reference high-performance computing facilities to support that kind of research. Then, in some situations, there is the combination of the performance capability of the federated systems for addressing the simulation of the overall human brain, a very performance-challenging simulation that will only happen by combining the HPC facilities at the European level. >> Right. So I was reading, there's a case study, by the way, on Cineca that Dell Technologies has published, and some of the results you talked about: the HPC is facilitating research and results on epilepsy, spinal cord injury, brain prostheses for the blind, as well as new insights into autism. So incredibly important work that you're doing here for the Human Brain Project. One last question for you, Sanzio: what advice would you give to your peers who might be in similar situations, who need to build, deploy and maintain high-performance computing environments? Where should they start?
>> (laughs) I think that, at a certain point, that specific know-how becomes a kind of know-how that has been accumulated by a number of facilities and institutions around the world. There are the federal labs in the US, the main national centers in Europe, the big facilities in Japan, and of course the big university facilities in China, which are becoming, how do you say, prominent and are progressively occupying an increasing space. Those institutions will continue to collaborate and to share what will be the top-level systems, so there is a continuous sharing of knowledge, experience and best practices with respect to, let's say, the technology transfer towards production, services and business. For industry the situation is a bit different, in the sense that they are focused on the integration of high-performance computing technology into their production workflows, and from that point of view there is a sharing of experience in order to spread and amplify the opportunities for supporting innovation. That is part of our mission in Italy, but it is also an objective addressed by the European programs supported by the European Commission, and I think something similar applies in the US; that, to me, is the best practice for technology transfer to support innovation. >> Excellent. That sharing, knowledge transfer and collaboration seem to be absolutely fundamental, and the environment that you've built facilitates that. Sanzio, thank you so much for sharing with us what Cineca is doing and the great research that's going on there across a lot of disciplines. We appreciate you joining the program today. Thank you. >> Thank you, it's been a pleasure, thank you very much for the opportunity. >> Likewise. For Sanzio Bassini, I'm Lisa Martin. You're watching this CUBE Conversation. (calming music)

Published Date : Sep 24 2021


INSURANCE Improve Underwriting


 

>> Good afternoon, or evening, depending on where you are, and welcome to this breakout session around insurance: improve underwriting with better insights. So first and foremost, let's summarize very quickly who we're with and what we're talking about today. My name is Monique Hesseling, and I'm the managing director at Cloudera for the insurance vertical. We have a sizeable presence in insurance; we have been working with insurance companies for a long time now, over 10 years, which in terms of insurance is maybe not that long, but for technology it really is. And we're working with, as you can see, some of the largest companies in the world, on every continent. However, we also do a significant amount of work with smaller insurance companies, especially around specialty exposures, and with the regionals and the mutuals, in property and casualty, general insurance, life, annuity, and health. So we have a vast experience of working with insurers, and we'd like to talk a little bit today about what we're seeing recently in the underwriting space and what we can do to support the insurance industry there. Recently, what we have been seeing, and it has actually accelerated as a result of the recent pandemic we have all been going through, is that insurers are putting even more emphasis on accounting for every individual customer's risk, whether it is a commercial client or a personal insurance risk, in a dynamic and bespoke way. What I mean by dynamic is that risks and risk assessments change very regularly, right? Companies go into different business situations, people behave differently; risks are changing all the time, and they're changing per person. They're not changing generically: my risk at a certain point in time in travel, for example, might be very different from yours. So what technology has started to enable is to underwrite and assess those risks at those very specific individual levels, and you can see that insurers are investing in that capability. The value of artificial intelligence in underwriting is growing dramatically, as you can see from some of the quotes here, and risks that were historically very difficult to assess, such as networks of vendors, global supply chains, workers' compensation, which has a lot of moving parts all the time, and anything that deals with rapidly changing risks, exposures, people and businesses, have been supported more and more by technology such as ours to help account for that. And this is a bit of a difficult slide, so bear with me for a second here. What this slide shows, specifically for underwriting, is how data-driven insights help manage underwriting. What you see on the left side of this slide is the progress insurers make in analytical capabilities. Quite often the first steps are around reporting, and that tends to be run from a data warehouse or operational data store, with star-schema data models. Reporting really is, quite often, a BI function, a business intelligence function, and on a regular basis it informs the company of what has taken place. In the second phase, the middle color blue, the next step for insurers is to get into descriptive analytics, and what descriptive analytics really do is try to describe what we're learning in reporting.
>> So we're seeing certain events and findings and numbers and certain trends in reporting, and in the descriptive phase we describe what this means and why this is happening. Then, ultimately, and this is the holy grail, the end goal, we'd like to get to predictive analytics. We'd like to try to predict what is going to happen: which risk is a good one to underwrite, what next policy a customer might need or want, which claims, as we discuss in another session today, might become fraudulent, or which ones we can move straight through because there aren't supposed to be any issues with them, both on the underwriting and the claims side. That's what every insurer is shooting for right now, but most of them are not fully there yet. So on the right side of this slide, specifically for underwriting, we show what types of data generally are being used in use cases around underwriting, in the different phases of analytics maturity I just described. You will see that on the reporting side, in the beginning, we start with rates information, quotes information, submission information, binding information. Then, in the descriptive phase, we start to add risk engineering information, risk reports, schedules of assets on the commercial side, consumer profiles; the descriptions move into somewhat of an unstructured data environment: notes, diaries, claims notes, underwriting notes, risk engineering notes, transcripts of customer service calls. And then, at the outer edge of this baseball-field-looking slide, you will see the relatively new data sources that can add tremendous value but are not widely integrated yet, so I will walk through some use cases around those. Think about sensors and wearables, sensors on people's bodies, sensors on moving assets for transportation, drone images for underwriting; it's not necessary anymore to send an inspector or a risk engineer to every building, insurers now fly drones over it to look at the roofs, et cetera. Photos: we see them a lot in claims, at first notice of loss, but we also see them for underwriting purposes, where policies are priced by pretty much saying, send me pictures of your five most valuable assets in your home and we'll price your home and all its contents for you. So we are seeing more and more movement towards those, as I mentioned earlier, dynamic and bespoke types of underwriting. So this is how Cloudera supports those initiatives. On the left side, you see data coming into your insurance company. There are all sorts of different data sets; some of them are managed and controlled by you, some you get from third parties, and we'll talk about telematics in a little bit, it's one of the use cases. They move into the data lifecycle, the data journey: the data comes into your organization, you collect it, you store it, you make it ready for utilization, you put it in an operational environment for processing or in an analytical environment for analysis, and then you close the loop and adjust from the beginning if necessary. Now, specifically for insurance, which is, if not the most regulated industry in the world, coming awfully close, it will come in as a very admirable second or third.
It's critically important that data is controlled and managed in the correct way under all the different regulations we are subject to. We do that in the Cloudera Shared Data Experience, SDX, which is where we make sure that the data is accessed by the right people and that we can always track who did what to that data at any point in time. That's all part of the Cloudera Data Platform. That whole environment runs on premise as well as in the cloud, in multiple clouds, or hybrid; most insurers run hybrid models, with part of the data on premise and part of the data, use cases and workloads in the cloud. We support enterprise use cases around underwriting and risk selection, individualized pricing, digital submissions, quote processing, the whole quote-to-bind process done digitally, fraud and compliance evaluations, and network analysis around service providers. So I want to walk you through some of the use cases we've seen in action recently that showcase how this works in real life. The first one is CZ Group plus Cloudera; full disclosure, for those who know it, a Dutch health insurer. I did not pick it because I happen to be Dutch, it just happens to be a fantastic use case. What they were struggling with, as many insurance companies are, is that they had a legacy infrastructure that made it very difficult to combine data sets and get a full view of the customer and its needs. As with any insurer, customer demands and needs are rapidly changing, and competition is changing. So CZ decided they needed to do something about it, and they built a data platform on Cloudera that helps them do a couple of things. It helps them support customers better and proactively: they got really good at pinging customers on what steps they could take to improve their health in a preventative way, and they also rapidly sped up their approvals of medical procedures, et cetera. That was the original intent: serve the customers better, retain the customers, make sure they have access to the right services when they need them, in a proactive way. As a side effect of this data platform, they also got much better at preventing and predicting fraud and abuse, which is the topic of the other session we're running today. So it really was a good success, and they're very happy with it; they're actually starting to see a significant uptick in their customer service KPIs and results. The other one I wanted to quickly mention is Octo. As most of you know, Octo is a very large telematics data provider, globally speaking, and has been with Cloudera for quite some time. I want to showcase this one because it shows what we can do with data in massive amounts. For Octo, we analyze on Cloudera 5 million connected cars, ongoing, with 11 billion data points, and what they're really doing is creating the algorithms and the models that insurers use to run insurance telematics programs: pay as you drive, pay when you drive, pay how you drive. This whole telematics part of insurance is actually growing very fast too, though much of it is still in proof-of-concept, mini-project kinds of initiatives. But what we're seeing is that companies are starting to offer more and more services around it, so they become preventative and predictive too.
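As a purely illustrative aside on the pay-how-you-drive idea described above, here is a minimal sketch of the kind of trip-level behavior scoring such programs rely on. The features, weights and thresholds below are invented placeholders for illustration only; they are not Octo's or Cloudera's actual models, nor a calibrated actuarial rating plan.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    night_miles: float        # miles driven late at night (assumed feature)
    hard_brakes: int          # harsh braking events reported by the device
    speeding_minutes: float   # minutes above the posted limit

def trip_risk_score(trip: Trip) -> float:
    """Toy pay-how-you-drive score in [0, 1]; higher means riskier driving.
    Weights are illustrative placeholders, not a real underwriting model."""
    per_mile = (2.0 * trip.hard_brakes
                + 0.5 * trip.speeding_minutes
                + 1.0 * trip.night_miles) / max(trip.miles, 1.0)
    return min(per_mile / 5.0, 1.0)

def premium_adjustment(scores):
    """Map an average behavior score to a simple discount/surcharge factor."""
    avg = sum(scores) / len(scores)
    return 0.85 + 0.3 * avg   # roughly -15% for the safest, +15% for the riskiest

trips = [Trip(12.0, 0.0, 1, 0.5), Trip(40.0, 6.0, 4, 3.0)]
print("premium factor:", round(premium_adjustment([trip_risk_score(t) for t in trips]), 3))
```

In a real program the per-trip features would be aggregated over millions of vehicles and billions of data points, and the scoring itself would be a learned model rather than fixed weights; the sketch only shows the shape of the calculation.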
So now you get to the programs that say: Monique, you've been in the car for two hours now, maybe it's time to take a break; we see there's a Starbucks coming up on the right, or any coffee shop that's part of a bigger chain, and we know, because you have that app on your phone, that you are a Starbucks user, so if you stop there we'll give you a discount on your regular coffee. So we are seeing these types of programs coming through to, again, keep people safe and keep cars safe, but primarily, of course, the people in them. Those are the types of use cases we are starting to see in the telematics space. This next one looks more complicated than it is, so bear with me for a second. This is a commercial example, because we see a lot of data work going on in commercial insurance; it's not merely a personal insurance thing. Commercial is near and dear to my heart, it's where I started; I actually worked in global energy insurance for a long time. What this one really explains is how we can use sensors on people's outfits and clothes to manage and underwrite risks better. There are programs now for manufacturing companies and for oil and gas where the people who work in those places have sensors as part of their work outfits, and that does a couple of things. It helps in workers' comp underwriting and claims, because you can actually see where people are moving, what they are doing, how long they're working; some of them even track some very basic health-related information like blood pressure, heart rate and temperature. So those are all good things. The other thing it helps with is collecting data on the specific risks and exposures; again, we're getting more and more to individual risk underwriting for the insurance companies that insure these commercial enterprises. So they started giving discounts if the workers wear sensors, and ultimately, if there is an unfortunate event, like a big accident or a big loss, it helps first responders very quickly identify where those workers are, and if and how they're moving, which is all very important to figure out who to help first in case something bad happens. So these are the types of data that quite often get implemented in one specific use case and then get broadly deployed into other use cases, to help price risks better, keep risks better controlled and managed, and provide preventative care. So these were some of the use cases we run in the underwriting space that we are very excited to talk about. As a next step, what we would like you to do is consider opportunities in your own companies to advance risk assessment specific to your individual customers' needs; and again, customers can be people, they can be enterprises, they can be any insurable entity. Please visit cloudera.com, under solutions, insurance, where you will find all our documentation, assets and thought leadership around the topic. And if you ever want to chat about this, please give me a call or schedule a meeting with us. I get very passionate about this topic; I'll gladly talk to you forever. If you happen to be based in the US and you ever need somebody to filibuster on insurance, please give me a call, I'll easily fill 24 hours on this one. So please schedule a call with me; I promise to keep it short.
So thank you very much for joining this session. As a last thing, I would like to remind all of you to read our blogs, read our tweets, and read our thought leadership around insurance. And as we all know, insurance is sexy.

Published Date : Aug 5 2021


4-video test


 

>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. Let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with the total energy given by the expression shown at the bottom left of this slide. Here the spin variables take binary values, the matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of the total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and the vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N for worst-case instances. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem, and for other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solvers required median runtimes, across a library of problem instances, that scaled as a very steep root-exponential for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of runtime on a 48-core two-gigahertz cluster, while instances with n greater than or equal to 14,233 remain unsolved exactly by any means.
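As a concrete illustration of the Ising ground-state problem just described, here is a minimal sketch, not taken from the talk, that evaluates the Ising energy and finds the ground state of a small made-up instance by exhaustive enumeration. The sign convention, instance size and random couplings are assumptions for the example; the 2^n enumeration is exactly the exponential cost that makes exact solution impractical at larger n. The TSP benchmarking discussion continues below.

```python
import itertools
import numpy as np

def ising_energy(s, J, h):
    """Ising energy E(s) = -1/2 * s^T J s - h^T s (a common sign convention),
    for spins s_i in {-1, +1} and symmetric J with zero diagonal."""
    return -0.5 * s @ J @ s - h @ s

def brute_force_ground_state(J, h):
    """Enumerate all 2^n spin configurations; only feasible for small n."""
    n = len(h)
    best_s, best_e = None, np.inf
    for bits in itertools.product([-1, 1], repeat=n):
        s = np.array(bits)
        e = ising_energy(s, J, h)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

# A made-up 8-spin instance with random +/-1/0 couplings and no local fields.
rng = np.random.default_rng(0)
n = 8
J = rng.choice([-1.0, 0.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T      # make J symmetric with zero diagonal
h = np.zeros(n)

s_star, e_star = brute_force_ground_state(J, h)
print("ground state:", s_star, "energy:", e_star)
```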
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of runtime on a single 2.4-gigahertz core. Now, if we simple-mindedly extrapolate the root-exponential scaling from that study beyond n of approximately 4,500, we might expect that an exact solver would require something more like a year of runtime on the 48-core cluster used for the n equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single 2.4-gigahertz core. But the much larger so-called World TSP benchmark instance, with n equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur a high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm, and this has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, and fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So, against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
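To make the "root-exponential" scaling quoted above slightly more tangible, here is a small, purely illustrative calculation. The constants a and c are invented placeholders rather than values fitted to the benchmark data, so the absolute numbers mean nothing; only the qualitative blow-up of exp(c*sqrt(n)) with n is the point.

```python
import math

def root_exponential_runtime(n, a, c):
    """Illustrative runtime model t(n) = a * exp(c * sqrt(n))."""
    return a * math.exp(c * math.sqrt(n))

# Made-up constants chosen only so the curve is visibly steep;
# they are NOT fitted to the TSP study mentioned in the talk.
a, c = 1e-3, 0.35

for n in (1_000, 4_500, 13_584):
    t = root_exponential_runtime(n, a, c)
    print(f"n = {n:6d}  predicted runtime ~ {t:.3e} s  ({t / 86400:.1f} days)")
```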
These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that exploit this coherent information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injection. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to the linear part of the CIM dynamics, while a synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely a study of bifurcations, the evolution of critical points, and the topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and we hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described.
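Before looking at the one- and two-OPO threshold behavior, here is a minimal, purely illustrative numerical caricature of the CIM dynamics just described: soft-spin amplitudes with a slowly ramped pump, a saturating nonlinearity, and a linear Ising-coupling feedback term. It is a sketch of the general idea only, not the actual FPGA feedback algorithm, and all parameter values and the test instance are made up. For a small ferromagnetic ring like the one below, the readout is expected to land in one of the two all-aligned ground states on most runs.

```python
import numpy as np

def simulate_cim(J, steps=2000, dt=0.01, pump_final=1.5, noise=1e-3, seed=1):
    """Caricature of coherent-Ising-machine dynamics: soft spin amplitudes x_i
    evolve under a slowly ramped pump p(t), gain saturation (-x**3), and a
    linear Ising-coupling feedback term J @ x; spins are read out as sign(x)."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)           # near-vacuum initial amplitudes
    for t in range(steps):
        p = pump_final * t / steps               # gradual pump ramp from 0
        dx = (p - 1.0) * x - x**3 + J @ x        # gain minus loss, saturation, coupling
        x = x + dt * dx + noise * rng.standard_normal(n)
    return np.sign(x)

# Tiny made-up ferromagnetic ring: expected ground states are all spins aligned.
n = 6
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 0.2

print("spin readout:", simulate_cim(J))
```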
We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome the linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, the gain equals the dissipation and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase, and when the OPO crosses this threshold it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then, as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated; for any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic n equals two Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly, we can imagine generalizing this story to larger n; however, the story doesn't stay as clean and simple for all larger problem instances. To find a more complicated example, we only need to go to n equals four. For some choices of J at n equals four the story remains simple, like the n equals two case: the figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n equals four instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power.
The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors tend to become more common at larger n, as for the n equals 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter of around 0.16, at some distance from the adiabatic trajectory of the origin. It is curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, n equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we are able to determine their global minima reliably and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-n limit, we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n equals 10^4 to 10^5 and beyond, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with the Yamamoto group over at NTT PHI Research Labs. I should also acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and from NTT PHI Research Labs, the Army Research Office and ExxonMobil. That's it, thanks very much.
>> I'd like to thank NTT Research for putting together this program and also for the opportunity to speak here. My name is Alireza Marandi, I'm from Caltech, and today I'm going to tell you about the work we have been doing on networks of optical parametric oscillators, how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I'd like to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics, where some of the biggest examples are metamaterials, which are arrays of small resonators, and, more recently, the field of topological photonics, which is trying to implement in photonics a lot of the topological behaviors of condensed-matter physics models. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. The Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem has been shown to be NP-hard, so it's computationally important because it's representative of the NP problems; NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers. I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator, and what it is is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump. One photon of pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums, but they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string; that modulation acts as the pump, and it will make an oscillation, a signal, at half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they are still phase- and frequency-locked to the pump, but each can end up in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, up or down.
To implement the network of these resonators, we use a time-multiplexed scheme, and the idea is that we put pulses in the cavity. These pulses are separated by the repetition period, T_R, and you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator one to two, two to three, and so on. If you look at the second delay, which is two times the repetition period, it couples one to three, and so on. And if you have N minus one delay lines, then you can have any potential couplings among these synthetic resonators. If I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is to have these OPOs, each of which can be either zero or pi, and to be able to connect them to each other arbitrarily. I start by programming this machine to a given Ising problem, just by setting the couplings through the controllers in each of those delay lines. Now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints. The way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain, by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transition, especially what happened from the first implementation, which used a free-space optical system, to the guided-wave implementation in 2016, and then the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make this distinction here: the first implementation was an all-optical interaction; we also had an n equals 16 implementation; and then we transitioned to this measurement-feedback idea, which I'll quickly tell you about. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using the measurement feedback, but I'm going to focus mostly on the all-optical networks, how we're using them to go beyond simulation of the Ising Hamiltonian on both the linear and nonlinear sides, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine; we implemented a small n equals four Max-Cut problem on it. So, one problem for one experiment: we ran the machine 1,000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulate all those coherent interactions on an FPGA and recreate the feedback onto the coherent pulses with respect to all those measurements.
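As a purely illustrative aside on the delay-line picture just described (the measurement-feedback story continues below): here is a minimal sketch of how modulated delay lines of length k times T_R translate into an effective coupling matrix over the time-multiplexed pulses. The number of pulses, the choice of delays, and the modulation values are made-up examples, and the sign and direction conventions are assumptions rather than details given in the talk.

```python
import numpy as np

def coupling_from_delay_lines(n_pulses, delay_strengths):
    """Build the effective coupling matrix realized by time-multiplexed delay lines.
    delay_strengths[k][i] is the (programmable) modulator value applied on the
    delay line of length k*T_R when pulse i passes, coupling pulse i to pulse i+k."""
    J = np.zeros((n_pulses, n_pulses))
    for k, strengths in delay_strengths.items():   # k = delay in units of T_R
        for i, a in enumerate(strengths):
            j = (i + k) % n_pulses                  # pulse i feeds pulse i+k
            J[j, i] += a
    return J

# Made-up example: 4 pulses, a nearest-neighbor delay line (k=1) and one of length 2.
delays = {
    1: [0.3, 0.3, 0.3, 0.3],     # uniform nearest-neighbor coupling
    2: [-0.1, 0.0, -0.1, 0.0],   # sparse next-nearest-neighbor coupling
}
print(coupling_from_delay_lines(4, delays))
```

With N-1 such delay lines and per-pulse modulation, any entry of the matrix can in principle be addressed, which is the sense in which the scheme is programmable and all-to-all connected.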
We then inject that back into the cavity, and the nonlinearity still remains, so it is still a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system preserves the important information or not, or whether it behaves better computation-wise; that is still the subject of a lot of ongoing study. Nevertheless, the reason this implementation is very interesting is that you don't need the N minus one delay lines, you can just use one; then you can implement a large machine, run several thousands of problems on it, and compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix-multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling; the optical loss of this network corresponds to the Ising Hamiltonian. To show you the example of the n equals four experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines give different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain; you start bringing up the gain so that it hits the loss, then you go through the gain saturation, or the threshold, which gives you this phase bifurcation, so you go either to the zero or the pi phase state, and the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks, and the difference between looking at topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian; one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that you go from one site to another site and you get one phase, and if you go back you get a different phase. The other thing is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and in the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, which corresponds to the so-called SSH model in the topological world. We get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you can see how well it actually follows the prediction and the theory.
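For readers who want to see what the band structure of the SSH model looks like concretely, here is a minimal illustrative sketch of the textbook two-band SSH dispersion, E±(k) = ±|t1 + t2 exp(ik)|, with intra-cell hopping t1 and inter-cell hopping t2. The hopping values below are arbitrary placeholders; in the synthetic OPO lattice the corresponding quantities would be set by the delay-line couplings rather than by these numbers.

```python
import numpy as np

def ssh_bands(t1, t2, num_k=201):
    """Bulk bands of the SSH chain: E_plus(k) = |t1 + t2*exp(i k)|, E_minus = -E_plus."""
    k = np.linspace(-np.pi, np.pi, num_k)
    e = np.abs(t1 + t2 * np.exp(1j * k))
    return k, e, -e

# Two dimerization choices; the chain is topologically nontrivial when |t2| > |t1|.
for t1, t2 in [(1.0, 0.5), (0.5, 1.0)]:
    k, upper, lower = ssh_bands(t1, t2)
    print(f"t1={t1}, t2={t2}: band gap = {2 * upper.min():.2f}")
```

Both parameter choices give the same bulk gap of 2*| |t1| - |t2| |; the topological distinction shows up in the edge states of a finite chain, which is exactly what the experiment described next probes.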
One of the interesting things about the time-multiplexed implementation is that you have the flexibility of changing the network as you are running the machine; that is something unique to this implementation, so we can actually look at the dynamics. One example we have looked at is going through the transition from the topological to the trivial behavior of the network: you can look at the edge states, and you can see the trivial end states and the topological edge states actually showing up in this network. We have also recently implemented a 2D network with the Harper-Hofstadter model; I do not have those results here, but another important characteristic of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics. We can also think about adding nonlinearity, in both the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks. So I have told you mostly about the linear side; let me switch gears and talk about the nonlinear side of the network. The biggest thing I have talked about so far in the Ising machine is the phase transition at threshold: below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs, through this phase transition from below to above threshold. One characteristic of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical, coherent states, corresponding to the intensity of the driving pump, so it is really hard to imagine having this transition happen entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it ruins the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them into the quantum regime? I am going to specifically talk about a phase transition in the spectral domain, the transition from the so-called degenerate regime, which is what I have mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I discussed. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case the signal can acquire any phase on the circle, so it has a U(1) symmetry.
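For reference, in the standard OPO picture (stated here in generic form, not with the experiment's specific conventions), the pump phase constrains only the sum of the signal and idler phases,

\[ \varphi_s + \varphi_i = \varphi_p + \mathrm{const}. \]

In the degenerate regime the signal and idler are the same mode, so \(\varphi_s\) itself is pinned to one of two values separated by \(\pi\), the 0/\(\pi\) "spin" states, a broken Z_2 symmetry. In the non-degenerate regime only the sum is fixed, and the difference \(\varphi_s - \varphi_i\) is a free U(1) phase, which is why it undergoes Schawlow-Townes-type phase diffusion.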
Okay, and if you go to the degenerate case, that symmetry is broken and you only have the zero and pi phase states. So the question is: can we utilize this phase transition, which is a phase-driven phase transition, for a similar computational scheme? That is one of the questions we are also thinking about. And this phase transition is not only important for computing; it is also interesting from a sensing point of view, and you can easily bring it below threshold and operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, you can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs; it is a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore in both the classical and the quantum regime. I should also mention that the couplings themselves can be nonlinear couplings, and that is another behavior you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors in both the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. The motivation is this: if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, we transitioned from relatively small-scale computers, on the order of thousands of nonlinear elements, to the billions of nonlinear elements we have now. With optics we are probably where electronics was 70 years ago, with tabletop implementations, and the question is: how can we utilize nanophotonics? I will briefly show you the two directions we are working on: one is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate started in collaboration with Marko Loncar's group at Harvard and Marty Fejer's group at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of highly efficient nonlinear processes in nanophotonic periodically poled lithium niobate. Now we are working on building OPOs based on that kind of thin-film lithium niobate photonics, and these are some examples of the devices we have been building in the past few months, which I am not going to say more about, but the OPOs and the OPO networks are in the works. And that is not the only way of making large networks: I also want to point out that the reason these nanophotonic platforms are exciting is not just that you can make large networks and make them compact, in a small footprint; they also provide opportunities in terms of the operating regime. One of them is about making cat states in an OPO, which is: can we have the quantum superposition of the zero and pi states that I talked about?
The thin-film lithium niobate platform provides some opportunities to actually get closer to that regime, because of the spatiotemporal confinement you can get in these waveguides. We are doing some theory on that, and we are confident that the nonlinearity-to-loss ratios you can get with this platform are much higher than with other existing platforms. To go even smaller, we have been asking the question: what is the smallest possible OPO that you can make? You can think about truly wavelength-scale resonators, add the chi(2) nonlinearity, and see how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we demonstrated that you can use nanolasers and get spin-Hamiltonian implementations on those networks, so if we can build such OPOs, we know there is a path for implementing OPO networks at that nanoscale. We have done the calculations and tried to estimate the threshold of such OPOs, say for a wavelength-scale resonator, and it turns out that it can actually be even lower than for the type of bulk PPLN OPOs that we have been building for the past 50 years or so. So we are working on the experiments, and we are hoping that we can make larger and larger scale OPO networks. Let me summarize the talk: I told you about OPO networks and the work that has been going on on Ising machines and measurement feedback, about the ongoing work on all-optical implementations on both the linear side and the nonlinear side, and a little bit about the efforts on miniaturization down to the nanoscale. With that, I would like to thank you. >> Hi everyone. I am from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I am happy to share with you today some of the recent work that has been done either by me or by collaborators. The title of my talk is "A Neuromorphic In Silico Simulator for the Coherent Ising Machine," and here is the outline. I would like to make the case that simulation in digital electronics of the CIM can be useful for better understanding or improving its function principles by introducing some ideas from neural networks; this is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, a projection of the performance that can be achieved using a very large-scale simulator in the third part, and finally talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. In red here are the limitations of each of these hardware platforms. Interestingly, the FPGA-based systems, such as the Fujitsu Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability.
And this is why, despite the unique advantages that some of the other hardware have, such as the coherent superposition in flux qubits or the energy efficiency of memristors, FPGAs are still an attractive platform for building large-scale Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-out, and the long propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems rather than from the physics of electrons and photons. To put the performance of these various hardware platforms in perspective, we can look at the computation achieved by the brain: the brain computes using billions of neurons, using only about 20 watts of power, and it operates at a very low frequency. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles might be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily sidestep the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silico, in the bottom panel here, that can be used for studying better organization principles for the CIM. In this talk I will discuss three neuro-inspired principles: the asymmetry of connections, with neural dynamics that are often chaotic because of that asymmetry; the mesoscopic structure of connectivity, since neural networks are not composed of the repetition of always the same types of neurons, but have a local structure that is repeated, as in this schematic of a microcolumn in the cortex; and lastly, the hierarchical organization of connectivity, which is organized as a tree structure in the brain, as you can see in this representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in silico simulation? First, about the principles of asymmetry and mesoscopic structure. We know that the classical approximation of the coherent Ising machine is analogous to rate-based neural networks. In the case of the Ising machines, the classical approximation can be obtained using, for example, the truncated Wigner approximation, and the dynamics of both systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum over J_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA and then injection of the computed coupling term. In both cases, the CIM and neural networks, the dynamics can be written as gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
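Reconstructed in a commonly used normalization (the exact coefficients and the form of the nonlinear function in the talk may differ), the equations just described read

\[ \frac{dx_i}{dt} = (-1 + p - x_i^2)\,x_i + \epsilon \sum_j \omega_{ij}\, x_j \;=\; -\frac{\partial V}{\partial x_i}, \]

\[ V(x) = \sum_i \left( \frac{(1 - p)\,x_i^2}{2} + \frac{x_i^4}{4} \right) - \frac{\epsilon}{2} \sum_{i,j} \omega_{ij}\, x_i x_j, \]

where p is the pump rate, \(\epsilon\) the coupling strength, and the last term is the Ising Hamiltonian extended to the continuous amplitudes x_i; the gradient form only exists when \(\omega_{ij} = \omega_{ji}\).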
So this is why it is natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and H is the extension of the Ising Hamiltonian to the analog variables x_i. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem with this approach is that the potential function V we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process, but unfortunately there is no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. This is why we propose to introduce a mesoscopic structure into the system, where one analog spin, one DOPO, is replaced by a pair of one analog spin and one error-correcting variable. The addition of this structure introduces asymmetry into the system, which in turn induces chaotic dynamics, a chaotic search rather than a simple gradient descent, for finding the ground state of the Ising Hamiltonian. Within this mesoscopic structure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the x_i to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising couplings: the error variable e_i multiplies the Ising coupling term in the dynamics of each DOPO, and the whole dynamics is described by these coupled equations. Because the e_i do not necessarily take the same value for different i, this introduces asymmetry into the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem; the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them; moreover, the other types of attractors that can eventually appear, such as limit-cycle or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed two different modulations of the target amplitude in the past: the first one ensures that the entropy production rate of the system becomes positive, which forbids the creation of any nontrivial attractor; but in this work I will talk about another, heuristic modulation, given here, which works as well as the first one but is easier to implement on FPGA. These coupled equations, which represent the simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA. Here I show the time it takes to simulate the system: in red, you see the time it takes to compute the x_i term, the e_i term, the dot product, and the Ising Hamiltonian for a system with 500 spins and error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, corresponding to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds.
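Before continuing with the timing comparison, here is a minimal numerical sketch of the error-variable scheme just described, in an assumed simplified form (not the authors' FPGA code), namely dx_i/dt = (-1 + p - x_i^2) x_i + eps * e_i * sum_j J_ij x_j and de_i/dt = -beta * (x_i^2 - a) * e_i; the parameters, the random instance, and the clipping used to keep this toy explicit-Euler integrator stable are all illustrative choices.

import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 8
J = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), 1)
J = J + J.T                                   # random symmetric +/-1 couplings, zero diagonal

p, eps, beta, a, dt, steps = 1.1, 0.25, 0.3, 1.0, 0.01, 20000
x = 0.01 * rng.standard_normal(N)             # analog spin amplitudes (in-phase components)
e = np.ones(N)                                # error variables modulating the coupling strength

def ising_energy(s):
    return -0.5 * s @ J @ s

best = np.inf
for _ in range(steps):
    dx = (-1.0 + p - x**2) * x + eps * e * (J @ x)
    de = -beta * (x**2 - a) * e
    x = np.clip(x + dt * dx, -2.0, 2.0)       # clipping keeps the toy integrator bounded
    e = np.clip(e + dt * de, 0.01, 20.0)
    best = min(best, ising_energy(np.sign(x)))

brute = min(ising_energy(np.array(s)) for s in itertools.product([-1.0, 1.0], repeat=N))
print("best Ising energy visited by the analog dynamics:", best)
print("exact ground-state energy (brute force):        ", brute)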
This is to be compared with what can be achieved in the measurement-feedback CIM, in which, if we want 500 time-multiplexed DOPOs with a 1 GHz repetition rate going through the optical nonlinearity, we would require 0.5 microseconds to do this; so the simulation on FPGA can be at least as fast as a 1 GHz repetition-rate pulsed-laser CIM. The dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about 0.14 microseconds. So for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles needed to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree whose height scales logarithmically with the size of the system. But that is only if we had an infinite amount of resources on the FPGA; for larger problems of more than 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote U here, and then the scaling becomes, for the nonlinear part, linear in N over U, and for the dot product, quadratic in N over U. Typically, for a low-end FPGA, the block size of this matrix is about 100. So clearly we want to make U as large as possible in order to maintain the log-N scaling of the number of clock cycles needed to compute the product, rather than the quadratic scaling that occurs when we decompose the matrix into smaller blocks. But the difficulty in having larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution for getting higher performance from a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of the adder tree, and this can be done by organizing the arithmetic components within the FPGA hierarchically, in the way shown in the right panel here, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I am not going into the details of how this is implemented on the FPGA, but this gives you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating the Ising machine. So instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper.
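A back-of-the-envelope sketch of that scaling argument (an assumed cost model for illustration, not the actual FPGA timing): a full N-wide adder tree reduces one dot product in roughly log2(N) cycles, while a block decomposition with block size U processes roughly (N/U)^2 blocks sequentially, each reduced by a U-wide tree.

import math

def cycles_full_tree(n):
    return math.ceil(math.log2(n)) + 1        # 1 cycle of parallel multiplies + the adder tree

def cycles_blocked(n, u):
    blocks = math.ceil(n / u) ** 2            # sequential blocks covering the N x N matrix
    return blocks * (math.ceil(math.log2(u)) + 1)

for n in (100, 500, 1000, 2000):
    print(f"N={n:5d}   full tree: {cycles_full_tree(n):3d} cycles   "
          f"blocked (U=100): {cycles_blocked(n, 100):5d} cycles")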
Here I show results for solving SK problems: fully connected, random plus-or-minus-one spin-glass problems. As a metric we use the number of matrix-vector products, since that is the bottleneck of the computation, needed to reach the optimal solution of the SK problem with 99% success probability, plotted against the problem size. In red is the proposed FPGA implementation; in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems; and in green is noisy mean-field annealing, whose behavior is similar to that of the coherent Ising machine. You clearly see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches, so that is an interesting feature of the system. Next we can look at the real time-to-solution for these SK instances. Here I show the time-to-solution in seconds to find a ground state of SK instances, at a given success probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent, for example, breakout local search in orange and simulated annealing in purple. You see that the scaling of the proposed simulator is rather good, and that for larger problem sizes we can be orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield network implemented on memristors, which is very fast for small problem sizes, in blue here, but whose scaling is not good, and the same for the restricted Boltzmann machine on FPGA recently proposed by the group in Berkeley, which is also very fast for small problem sizes but whose scaling is bad, so that it is worse than the proposed approach. We can therefore expect that for problem sizes larger than about 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide. Another confirmation that the scheme scales well is that we can find maximum-cut values for the G-set benchmark that are better than the cut values previously found by any other algorithm; they are the best known cut values, to the best of our knowledge, as shown in this table in the paper. In particular, for instances 14 and 15 of the G-set we can find better cuts than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to find them. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters; the tuning used here is very simple and just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK and spin-glass problems, but at all types of graph Ising problems, such as MAX-CUT problems in particular.
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA, carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation we are currently working on. Here you see projections of the time-to-solution, with 99% success probability, for solving these SK problems with respect to the problem size, compared to different state-of-the-art Ising machines, in particular the Digital Annealer, shown in green here, the green line without dots. We show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that it scales as an exponential of the square root of N. According to the data, it seems that the time-to-solution scales more like an exponential of the square root of N, and if so, this projection shows that we could probably solve SK problems of size 2000 spins, finding the true ground state with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. Now, about the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. What is simulable on the FPGA for this purpose is the quantum Gaussian model described in this paper, proposed by people in the NTT group. The idea of this model is that, instead of the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase quadrature of each pulse but also its variance, so that we can capture more of the quantum effects of the DOPO, such as squeezing. We then plan to make the simulator open access for the members to run their instances on the system. There will be a first version in September that will be based on simple command-line access to the simulator and will implement just the classical approximation of the system, with no noise term, binary weights, and no Zeeman term; we will then propose a second version that extends the current Ising machine to a rack of FPGAs, in which we will add the more refined models, such as the truncated Wigner and the quantum Gaussian model I just talked about, and support real-valued weights for the Ising problems as well as the Zeeman term. We will announce later when this becomes available. >> Hello, I come from the University of Notre Dame, from the physics department, and I would like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I would also like to say that I look forward to collaborations with the PHI Lab, with Yoshi and collaborators, on the topics of this workshop. Today I will briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving using ordinary differential equations, but I think the issues that we raise on this occasion actually apply to other analog approaches, and to other problems, as well.
I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables, you have M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem in the class NP, which means you can check the satisfiability of any assignment in polynomial time, and k-SAT with k of three or larger is NP-complete, which means that an efficient 3-SAT solver implies an efficient solver for all problems in NP, because all problems in NP can be reduced to 3-SAT in polynomial time. As a matter of fact, you can reduce NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising spin-glass problem. This is useful when you are comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you are looking at the optimization version of SAT, called MAX-SAT, where the goal is to find an assignment that satisfies the maximum number of clauses; this is in the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I am not going to read this list, but it of course gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. Instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is plus one; if it contains the variable in negated form, it is minus one. We then use this to formulate product-form clause violation functions, one for every clause, which vary continuously between zero and one and are zero if and only if the clause is satisfied. Then, in order to define a dynamics in this N-dimensional hypercube, where the search happens and where solutions, if they exist, sit in some of the corners, we define an energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all the clauses are satisfied, with the auxiliary variables a_m kept always positive. What we do here is therefore a dynamics that is essentially gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum, so we couple it with a dynamics for the a_m driven by the clause violation functions, as shown here. If you did not have the a_m here, and used just the K_m, you would essentially have positive feedback with a growing variable, and in that case the dynamics would still get stuck.
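Written out in the notation commonly used for this construction (which may differ slightly from the slides), with s_i in [-1, 1] and clause-matrix entries c_{mi} in {-1, 0, +1}:

\[ K_m(s) = 2^{-k_m} \prod_{i\,:\,c_{mi} \neq 0} \big( 1 - c_{mi}\, s_i \big), \]

where k_m is the number of literals in clause m, so K_m lies in [0, 1] and is zero exactly when clause m is satisfied, and

\[ V(s, a) = \sum_{m=1}^{M} a_m\, K_m(s)^2, \qquad \frac{ds_i}{dt} = -\frac{\partial V}{\partial s_i}, \qquad \frac{da_m}{dt} = a_m\, K_m(s), \quad a_m(0) > 0. \]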
It behaves better than the constant-a_m version, but it still gets stuck. Only when you put in this a_m, which makes the dynamics of this variable exponential-like, does it keep searching until it finds a solution. There is a reason for that, which I am not going to discuss here, but it essentially boils down to performing gradient descent on a globally time-varying landscape, and this is what works. Now I am going to talk about the good, the bad, and maybe the ugly. What is good is that it is a hyperbolic dynamical system, which means that if you take any domain in the search space that does not contain a solution, the number of trajectories in it decays exponentially quickly, and the decay rate is an invariant characteristic of the dynamics itself, called in dynamical systems the escape rate; the inverse of that is the timescale on which this dynamical system finds solutions. You can see here some sample trajectories that are chaotic, because the system is nonlinear, but it is transient chaos: eventually they converge to the solution. Now, in terms of performance: what is shown here, for a range of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, as a function of N, is the wall-clock time that we monitored, and it behaves quite well, polynomially, until you actually reach the SAT-UNSAT transition, where the hardest problems are found. What is more interesting is to monitor the performance in terms of the analog continuous time t, because that seems to be polynomial. The way we show that is this: we consider random 3-SAT at a fixed constraint density, just to the right of the threshold, where it is really hard, we select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. If you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size, so you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, or Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this does not mean that P equals NP, because, first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then t, the continuous-time variable, becomes physical wall-clock time, and that would indeed scale polynomially, but you have the other variables, the auxiliary variables, which grow in an exponential manner; so if they represent currents or voltages in your realization, it would be an exponential cost altogether.
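To make the construction concrete, here is a minimal numerical sketch of these dynamics on a tiny hand-made 3-SAT instance; the formula, step size, and the clipping to the hypercube are illustrative choices, not the authors' implementation.

import numpy as np

# clause matrix c[m, i]: +1 if clause m contains x_i, -1 if it contains NOT x_i, 0 otherwise
C = np.array([[ 1,  1,  1,  0],
              [-1,  1,  0,  1],
              [ 1, -1,  0, -1],
              [ 0, -1, -1,  1],
              [-1,  0,  1,  1]], dtype=float)
M, N = C.shape
k = (C != 0).sum(axis=1)                      # number of literals per clause

def K(s):                                     # clause violation functions, in [0, 1]
    return 2.0 ** (-k) * np.prod(1.0 - C * s, axis=1)

def ds_dt(s, a):                              # -dV/ds with V = sum_m a_m * K_m(s)^2
    factors = 1.0 - C * s                     # equals 1 wherever c_mi == 0
    Km = 2.0 ** (-k) * np.prod(factors, axis=1)
    g = np.zeros(N)
    for i in range(N):
        rows = C[:, i] != 0
        prod_except_i = np.prod(np.delete(factors[rows], i, axis=1), axis=1)
        dK_dsi = -C[rows, i] * 2.0 ** (-k[rows]) * prod_except_i
        g[i] = -2.0 * np.sum(a[rows] * Km[rows] * dK_dsi)
    return g

rng = np.random.default_rng(1)
s = rng.uniform(-0.1, 0.1, size=N)            # continuous spins, searched inside [-1, 1]^N
a = np.ones(M)                                # auxiliary variables, grow while clauses are violated
dt = 0.01
for step in range(50000):
    s = np.clip(s + dt * ds_dt(s, a), -1.0, 1.0)
    a = a + dt * a * K(s)
    if not np.any(K(np.sign(s))):             # rounded assignment satisfies every clause
        break

print("steps:", step, "assignment:", np.sign(s).astype(int),
      "unsatisfied clauses:", int(np.count_nonzero(K(np.sign(s)))))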
So this is some kind of trade-off between time and energy: I do not know how to generate time, but I do know how to generate energy, so one could use energy for it. But there are other issues as well, especially if you are trying to do this on a digital machine, and other problems appear in physical devices too, as we discuss later. If you implement this on a GPU, you can get about two orders of magnitude of speedup, and you can also modify this to solve MAX-SAT problems quite efficiently; we are competitive with the best heuristic solvers on the problems of the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course limitations, interesting ones, I would say, because they make you think about what it all means and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator (when you solve this on a digital machine, you are using some kind of integrator) and you use the same approach, but now measure the number of problems you have not solved within a given number of discrete integrator steps, you find that you have exponential discrete-time complexity, and of course that is a problem. If you look closely at what happens: even though the analog mathematical trajectory is the correct one, if you monitor what happens in discrete time, the integrator's step size drops to the third or fourth decimal place and fluctuates like crazy, so it really is as if the integration freezes out, and this is because of the phenomenon of stiffness, which I will talk a little more about later. It might look like an integration issue on digital machines that you could improve, and you definitely can improve it, but actually the issue is bigger than that, it is deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think that maybe this would not be an issue in an analog device, and to some extent that is true, analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they are not going to be perfect, ideal solvers either. Indeed, if you look at other systems, like Ising machines with measurement feedback, polariton condensate graphs, or oscillator networks, they all hinge on some ability to control your variables with arbitrarily high precision: in oscillator networks you want to read out phases precisely, and in the case of CIMs you require identical pulses, which are hard to maintain, as they tend to fluctuate and drift away from one another, and if you cannot control that, you cannot control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978.
Schönhage showed, and it is a purely computer-science proof, that if you are able to compute addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. He did not actually propose a solver; he just showed mathematically that this would be the case. Now, of course, in the real world you only have finite precision, so the next question is: how does that affect the computation of such problems? That is what we are after. Loss of precision means information loss, or entropy production, so what we are really looking at is the relationship between the hardness of a problem and the cost of computing it. Following Schönhage, there is this left branch, which in principle could be polynomial time, but the question is whether or not this is achievable; probably it is not, and something more realistic is on the right-hand side: there is always going to be some information loss, some entropy generation, that could keep you away from polynomial time. This is what we would like to understand, and the source of this information loss, I will argue, is not just noise in a physical system; it is also of an algorithmic nature, so it applies to any analog approach. But that result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: could there in principle be such a solver, since none was proposed with such properties? If you look mathematically and precisely at what our solver does, would it have the right properties? I argue yes; I do not have a mathematical proof, but I have arguments that this would be the case for our SAT solver: if you could compute its trajectory in a lossless way, it would solve NP-complete problems in polynomial continuous time. As a matter of fact, this is a slightly more subtle question, because time in ODEs can be rescaled however you want, so what you actually have to measure is the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamics, not of its parametrization. And we did that: my student did it first, improving on the stiffness of the integration by using implicit solvers and some smart tricks, so that you actually stay closer to the true trajectory, and using the same approach, monitoring the fraction of problems you can solve within a given length of the trajectory, you find that it scales polynomially with the problem size, so we have polynomial-length complexity. That means our solver is both a poly-length and, as defined, a poly-time analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver, and the reason is the stiffness: every integrator has to truncate, digitizing truncates the equations, and what it has to do is keep the integration within the so-called stability region of the scheme; you have to keep the product of the eigenvalues of the Jacobian and the step size within this region. If you use explicit methods, you want to stay within this region.
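To make the stability-region statement concrete, here is the standard textbook example (forward Euler on the linearized dynamics, not anything specific to this solver): for a Jacobian eigenvalue \(\lambda\), the update

\[ y_{n+1} = (1 + \Delta t\, \lambda)\, y_n \]

is stable only if \(|1 + \Delta t\, \lambda| \le 1\), so a stiff eigenvalue with a large negative real part forces \(\Delta t \lesssim 2 / |\lambda|\), no matter how slowly the interesting part of the trajectory evolves.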
But what happens is that some of the eigenvalues grow fast for stiff problems, and then you are forced to reduce the step size so that the product stays in this bounded domain, which means you are forced to take smaller and smaller time steps, so you are freezing out the integration, and as I showed you, that is what happens. Now, you can move to implicit solvers, which is a trick: in this case the stability domain is actually on the outside. But what happens then is that some of the eigenvalues of the Jacobian, again for stiff systems, start to move toward zero, and as they move toward zero they enter the instability region, so your solver tries to keep them out and increases the step size; but if you increase the step size, you increase the truncation errors, so you get randomized in the large search space, and it is really not going to work out. Now, one can introduce a theory, or a language, for discussing analog computational complexity using the language of dynamical systems theory. I do not have time to go into this, but basically, for hard problems you have an invariant object, a chaotic saddle, somewhere in the middle of the search space, and that dictates how the dynamics happens; the invariant properties of the dynamics on that saddle are what dictate the performance, and many other things. An important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Intuitively, what it describes is the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones: you lose information because errors grow into larger errors at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it is a property of the set of trajectories, not of how you compute them, and it is really the intrinsic rate of accuracy loss of the dynamical system. As I said, in such a high-dimensional dynamical system you have positive and negative Lyapunov exponents, as many in total as the dimension of the space; the number of positive ones is the dimension of the unstable manifold, and the number of negative ones the dimension of the stable manifold. And there is an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, the escape rate that I already talked about. Now, one can actually prove simple theorems, like back-of-the-envelope calculations. The idea is that you know the largest rate at which closely started trajectories separate from one another, so you can say: that is fine, as long as my trajectory finds the solution before nearby trajectories separate too much. In that case I can hope that if I start several closely spaced trajectories from some region of the phase space, they end up in the same solution often enough, and that gives this upper bound, this limit, and it shows that it has to be an exponentially small number. What matters is the N-dependence of the exponent here, which combines the information loss rate and the time-to-solution performance.
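For reference, the relation mentioned above, written in the standard notation for transiently chaotic (open) systems, and stated here from general dynamical-systems theory rather than from the slides, is

\[ h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa, \]

where h_KS is the Kolmogorov-Sinai (metric) entropy, the sum runs over the positive Lyapunov exponents, and \(\kappa\) is the escape rate; for a closed system (\(\kappa = 0\)) this reduces to the usual Pesin equality.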
So if this exponent has a large N-dependence, say a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. This is the direction we are going in, and this formulation is applicable to all deterministic dynamical systems. I think we can expand it further, because there is a way of getting an expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I do not have time to talk about; it is the kind of program one can try to pursue. And this is it. The conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing: it can be more efficient, by orders of magnitude, than digital computing in solving NP-hard problems, because, first of all, many of these systems avoid the von Neumann bottleneck, there is parallelism involved, and you can also have a much larger spectrum of continuous-time dynamical algorithms than of discrete ones. But we also have to be mindful of what the possibilities and the limits are, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? I think that is the exciting part: to derive these limits.

Published Date : Sep 27 2020


Neuromorphic in Silico Simulator For the Coherent Ising Machine


 

>>Hi everyone, This system A fellow from the University of Tokyo before I thought that would like to thank you she and all the stuff of entity for the invitation and the organization of this online meeting and also would like to say that it has been very exciting to see the growth of this new film lab. And I'm happy to share with you today or some of the recent works that have been done either by me or by character of Hong Kong Noise Group indicating the title of my talk is a neuro more fic in silica simulator for the commenters in machine. And here is the outline I would like to make the case that the simulation in digital Tektronix of the CME can be useful for the better understanding or improving its function principles by new job introducing some ideas from neural networks. This is what I will discuss in the first part and then I will show some proof of concept of the game in performance that can be obtained using dissimulation in the second part and the production of the performance that can be achieved using a very large chaos simulator in the third part and finally talk about future plans. So first, let me start by comparing recently proposed izing machines using this table there is adapted from a recent natural tronics paper from the Village Back hard People. And this comparison shows that there's always a trade off between energy efficiency, speed and scalability that depends on the physical implementation. So in red, here are the limitation of each of the servers hardware on, Interestingly, the F p G, a based systems such as a producer, digital, another uh Toshiba purification machine, or a recently proposed restricted Bozeman machine, FPD eight, by a group in Berkeley. They offer a good compromise between speed and scalability. And this is why, despite the unique advantage that some of these older hardware have trust as the currency proposition influx you beat or the energy efficiency off memory sisters uh P. J. O are still an attractive platform for building large theorizing machines in the near future. The reason for the good performance of Refugee A is not so much that they operate at the high frequency. No, there are particle in use, efficient, but rather that the physical wiring off its elements can be reconfigured in a way that limits the funding human bottleneck, larger, funny and phenols and the long propagation video information within the system in this respect, the f. D. A s. They are interesting from the perspective, off the physics off complex systems, but then the physics of the actions on the photos. So to put the performance of these various hardware and perspective, we can look at the competition of bringing the brain the brain complete, using billions of neurons using only 20 watts of power and operates. It's a very theoretically slow, if we can see. And so this impressive characteristic, they motivate us to try to investigate. What kind of new inspired principles be useful for designing better izing machines? The idea of this research project in the future collaboration it's to temporary alleviates the limitations that are intrinsic to the realization of an optical cortex in machine shown in the top panel here. By designing a large care simulator in silicone in the bottom here that can be used for suggesting the better organization principles of the CIA and this talk, I will talk about three neuro inspired principles that are the symmetry of connections, neural dynamics. Orphan, chaotic because of symmetry, is interconnectivity. The infrastructure. 
No neck talks are not composed of the reputation of always the same types of non environments of the neurons, but there is a local structure that is repeated. So here's a schematic of the micro column in the cortex. And lastly, the Iraqi co organization of connectivity connectivity is organizing a tree structure in the brain. So here you see a representation of the Iraqi and organization of the monkey cerebral cortex. So how can these principles we used to improve the performance of the icing machines? And it's in sequence stimulation. So, first about the two of principles of the estimate Trian Rico structure. We know that the classical approximation of the Cortes in machine, which is a growing toe the rate based on your networks. So in the case of the icing machines, uh, the okay, Scott approximation can be obtained using the trump active in your position, for example, so the times of both of the system they are, they can be described by the following ordinary differential equations on in which, in case of see, I am the X, I represent the in phase component of one GOP Oh, Theo F represents the monitor optical parts, the district optical parametric amplification and some of the good I JoJo extra represent the coupling, which is done in the case of the measure of feedback cooking cm using oh, more than detection and refugee A then injection off the cooking time and eso this dynamics in both cases of CME in your networks, they can be written as the grand set of a potential function V, and this written here, and this potential functionally includes the rising Maccagnan. So this is why it's natural to use this type of, uh, dynamics to solve the icing problem in which the Omega I J or the Eyes in coping and the H is the extension of the rising and attorney in India and expect so. >>Not that this potential function can only be defined if the Omega I j. R. A. Symmetric. So the well known problem of >>this approach is that this potential function V that we obtain is very non convicts at low temperature, and also one strategy is to gradually deformed this landscape, using so many in process. But there is no theorem. Unfortunately, that granted convergence to the global minimum of there's even 20 and using this approach. And so this is >>why we propose toe introduce a macro structure the system or where one analog spin or one D o. P. O is replaced by a pair off one and knock spin and one error on cutting. Viable. And the addition of this chemical structure introduces a symmetry in the system, which in terms induces chaotic dynamics, a chaotic search rather than a >>learning process for searching for the ground state of the icing. Every 20 >>within this massacre structure the role of the ER variable eyes to control the amplitude off the analog spins to force the amplitude of the expense toe, become equal to certain target amplitude. A Andi. This is known by moderating the strength off the icing complaints or see the the error variable e I multiply the icing complain here in the dynamics off UH, D o p o on Then the dynamics. The whole dynamics described by this coupled equations because the e I do not necessarily take away the same value for the different, I think introduces a >>symmetry in the system, which in turn creates chaotic dynamics, which I'm showing here for solving certain current size off, um, escape problem, Uh, in which the exiled from here in the i r. From here and the value of the icing energy is shown in the bottom plots. 
And you see this Celtics search that visit various local minima of the as Newtonian and eventually finds the local minima Um, >>it can be shown that this modulation off the target opportunity can be used to destabilize all the local minima off the icing hamiltonian so that we're gonna do not get stuck in any of them. On more over the other types of attractors, I can eventually appear, such as the limits of contractors or quality contractors. They can also be destabilized using a moderation of the target amplitude. And so we have proposed in the past two different motivation of the target constitute the first one is a moderation that ensure the 100 >>reproduction rate of the system to become positive on this forbids the creation of any non tree retractors. And but in this work I will talk about another modulation or Uresti moderation, which is given here that works, uh, as well as this first, uh, moderation, but is easy to be implemented on refugee. >>So this couple of the question that represent the current the stimulation of the cortex in machine with some error correction, they can be implemented especially efficiently on an F B G. And here I show the time that it takes to simulate three system and eso in red. You see, at the time that it takes to simulate the X, I term the EI term, the dot product and the rising everything. Yet for a system with 500 spins analog Spain's equivalent to 500 g. O. P. S. So in f b d a. The nonlinear dynamics which, according to the digital optical Parametric amplification that the Opa off the CME can be computed in only 13 clock cycles at 300 yards. So which corresponds to about 0.1 microseconds. And this is Toby, uh, compared to what can be achieved in the measurements tobacco cm in which, if we want to get 500 timer chip Xia Pios with the one she got repetition rate through the obstacle nine narrative. Uh, then way would require 0.5 microseconds toe do this so the submission in F B J can be at least as fast as, ah one gear repression to replicate the post phaser CIA. Um, then the DOT product that appears in this differential equation can be completed in 43 clock cycles. That's to say, one microseconds at 15 years. So I pieced for pouring sizes that are larger than 500 speeds. The dot product becomes clearly the bottleneck, and this can be seen by looking at the the skating off the time the numbers of clock cycles a text to compute either the non in your optical parts, all the dog products, respect to the problem size. And and if we had a new infinite amount of resources and PGA to simulate the dynamics, then the non in optical post can could be done in the old one. On the mattress Vector product could be done in the low carrot off, located off scales as a low carrot off end and while the kite off end. Because computing the dot product involves the summing, all the terms in the products, which is done by a nephew, Jay by another tree, which heights scares a logarithmic any with the size of the system. But this is in the case if we had an infinite amount of resources on the LPGA food but for dealing for larger problems off more than 100 spins, usually we need to decompose the metrics into ah smaller blocks with the block side that are not you here. And then the scaling becomes funny non inner parts linear in the and over you and for the products in the end of you square eso typically for low NF pdf cheap P a. You know you the block size off this matrix is typically about 100. 
So clearly we want to make nu as large as possible, in order to maintain this scaling in log N for the number of clock cycles needed to compute the dot product, rather than the (N over nu) squared scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing hierarchically the different components within the FPGA, which is shown here in this right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. So I'm not going into the details of how this is implemented on the FPGA, but just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance when simulating the Ising machine. So instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper here. I show results for solving SK problems, fully connected spin-glass problems with random plus-or-minus-one couplings, and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to get the optimal solution of this SK problem with 99% success probability, plotted against the problem size here. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green here is noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. You see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches. So that's an interesting feature of the system, and next we can see what is the real time to solution to solve these SK instances. So on this next slide, the time to solution in seconds to find the ground state of SK instances with 99% success probability is shown for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search in orange and simulated annealing in purple, for example. So you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors, in blue here, which is very fast for small problem sizes but whose scaling is not so good, and the same thing for the restricted Boltzmann machine implemented on FPGA proposed by a group in Brooklyn recently, which again is very fast for small problem sizes.
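For reference, the "99% success probability" time-to-solution metric used in benchmarks like these is typically computed from the single-run time and the per-run success rate as sketched below; the numbers in the example are placeholders, not values from the talk.

```python
import math

def time_to_solution(t_run_seconds, p_success, target=0.99):
    """Expected wall-clock time to hit the ground state at least once with
    probability `target`, given the single-run time and per-run success rate."""
    if p_success <= 0.0:
        return math.inf
    if p_success >= target:
        return t_run_seconds
    repeats = math.ceil(math.log(1.0 - target) / math.log(1.0 - p_success))
    return repeats * t_run_seconds

# Placeholder example: a 1 ms run that finds the ground state 20% of the time
print(time_to_solution(1e-3, 0.20))   # ~0.021 s for 99% confidence
```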
But their scaling is worse than the proposed approach, so that we can expect that for problem sizes larger than, let's say, 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide. Another confirmation that the scheme scales well is that we can find maximum-cut values on the benchmark G-set that are better than the cut values that have previously been found by any other algorithms. So they are the best known cut values, to the best of our knowledge, which is shown in this table of the paper here. In particular, for instances 14 and 15 of this G-set, we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm and CPU used to do this. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters. The tuning used here is very simple: it just depends on the degree of connectivity within each graph. And so these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems and dense problems, but also other types of graph Ising problems, such as max-cut problems. So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA, carefully routing the critical components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. So here you see projections for the time to solution for solving these SK problems with respect to the problem size, compared to different other Ising machines, in particular the Digital Annealer from Fujitsu, which is shown in green here, the green line without dots. And we show two different hypotheses for these projections: either that the time to solution scales as an exponential of N, or that the time to solution scales as an exponential of the square root of N. It seems, according to the data, that the time to solution scales more like an exponential of the square root of N, and these projections show that we can probably solve SK problems of size 2000 spins, finding the real ground state of the problem with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So now, some of the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the system of the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is this quantum Gaussian model that is described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs that I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase component but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access for members to run their instances on the system.
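Since the G-set results are reported as cut values while the solver works with ±1 spins, the usual conversion between the two is sketched below; the edge-list format and the tiny triangle example are assumptions for illustration, not code from the paper.

```python
def cut_value(edges, spins):
    """Max-cut objective for +/-1 spins: an edge (i, j, w) contributes w
    exactly when its endpoints fall on opposite sides of the partition."""
    return sum(w for i, j, w in edges if spins[i] != spins[j])

def ising_energy(edges, spins):
    """Corresponding Ising energy H = -sum_(i<j) J_ij s_i s_j with J_ij = -w_ij,
    so minimizing H maximizes the cut: cut = (total_weight - H_value) / 2."""
    return sum(w * spins[i] * spins[j] for i, j, w in edges)

# Tiny example: a triangle with unit weights; the best possible cut value is 2.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 1)]
spins = {0: 1, 1: -1, 2: 1}
print(cut_value(edges, spins), ising_energy(edges, spins))   # -> 2 -1
```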
There will be a first version in September that will be based just on simple command-line access to the simulator, and which will have just the classical approximation of the system, with a noise term, binary weights, and no Zeeman term. But then we will propose a second version that will extend the current Ising machine to a rack of eight FPGAs, in which we will add the more refined models, the truncated Wigner and the quantum Gaussian model that I just talked about, the support of real-valued weights for the Ising problems, and support for the Zeeman term. So we will announce later when this is available, and Farah is working hard to get the first version available sometime in September. Thank you all, and we'll be happy to answer any questions that you have.

Published Date : Sep 24 2020

Naveen Rao, Intel | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to the Sands Convention Center in Las Vegas everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost Justin Warren, this is day one of our coverage of AWS re:Invent 2019, Naveen Rao here, he's the corporate vice president and general manager of artificial intelligence, AI products group at Intel, good to see you again, thanks for coming to theCUBE. >> Thanks for having me. >> Dave: You're very welcome, so what's going on with Intel and AI, give us the big picture. >> Yeah, I mean actually the very big picture is I think the world of computing is really shifting. The purpose of what a computer is made for is actually shifting, and I think from its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence, and we took sort of a divergent path where we built applications for productivity, but now we're actually coming back to that original intent, and I think that hits everything that Intel does, because we're a computing company, we supply computing to the world, so everything we do is actually impacted by AI, and will be in service of building better AI platforms, for intelligence at the edge, intelligence in the cloud, and everything in between. >> It's really come full circle, I mean, when I first started this industry, AI was the big hot topic, and really, Intel's ascendancy was around personal productivity, but now we're seeing machines replacing cognitive functions for humans, that has implications for society. But there's a whole new set of workloads that are emerging, and that's driving, presumably, different requirements, so what do you see as the sort of infrastructure requirements for those new workloads, what's Intel's point of view on that? >> Well, so maybe let's focus that on the cloud first. Any kind of machine learning algorithm typically has two phases to it, one is called training or learning, where we're really iterating over large data sets to fit model parameters. And once that's been done to a satisfaction of whatever performance metrics that are relevant to your application, it's rolled out and deployed, that phase is called inference. So these two are actually quite different in their requirements in that inference is all about the best performance per watt, how much processing can I shove into a particular time and power budget? On the training side, it's much more about what kind of flexibility do I have for exploring different types of models, and training them very very fast, because when this field kind of started taking off in 2014, 2013, typically training a model back then would take a month or so, those models now take minutes to train, and the models have grown substantially in size, so we've still kind of gone back to a couple of weeks of training time, so anything we can do to reduce that is very important. >> And why the compression, is that because of just so much data? >> It's data, the sheer amount of data, the complexity of data, and the complexity of the models. So, very broad or a rough categorization of the complexity can be the number of parameters in a model. So, back in 2013, there were, call it 10 million, 20 million parameters, which was very large for a machine learning model. 
Now they're in the billions, one or two billion is sort of the state of the art. To give you bearings on that, the human brain is about a three to 500 trillion model, so we're still pretty far away from that. So we got a long way to go. >> Yeah, so one of the things about these models is that once you've trained them, that then they do things, but understanding how they work, these are incredibly complex mathematical models, so are we at a point where we just don't understand how these machines actually work, or do we have a pretty good idea of, "No no no, when this model's trained to do this thing, "this is how it behaves"? >> Well, it really depends on what you mean by how much understanding we have, so I'll say at one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brain. We trust that there's a process in place that has tested them enough. A neurosurgeon's cutting into your head, you say you know what, there's a system where that neurosurgeon probably had to go through a ton of training, be tested over and over again, and now we trust that he or she is doing the right thing. I think the same thing is happening in AI, some aspects we can bound and say, I have analytical methods on how I can measure performance. In other ways, other places, it's actually not so easy to measure the performance analytically, we have to actually do it empirically, which means we have data sets that we say, "Does it stand up to all the different tests?" One area we're seeing that in is autonomous driving. Autonomous driving, it's a bit of a black box, and the amount of situations one can incur on the road are almost limitless, so what we say is, for a 16 year old, we say "Go out and drive," and eventually you sort of learn it. Same thing is happening now for autonomous systems, we have these training data sets where we say, "Do you do the right thing in these scenarios?" And we say "Okay, we trust that you'll probably "do the right thing in the real world." >> But we know that Intel has partnered with AWS, I ran autonomous driving with their DeepRacer project, and I believe it's on Thursday is the grand final, it's been running for, I think it was announced on theCUBE last year, and there's been a whole bunch of competitions running all year, basically training models that run on this Intel chip inside a little model car that drives around a race track, so speaking of empirical testing of whether or not it works, lap times gives you a pretty good idea, so what have you learned from that experience, of having all of these people go out and learn how to use these ALM models on a real live race car and race around a track? >> I think there's several things, I mean one thing is, when you turn loose a number of developers on a competitive thing, you get really interesting results, where people find creative ways to use the tools to try to win, so I always love that process, I think competition is how you push technology forward. On the tool side, it's actually more interesting to me, is that we had to come up with something that was adequately simple, so that a large number of people could get going on it quickly. You can't have somebody who spends a year just getting the basic infrastructure to work, so we had to put that in place. And really, I think that's still an iterative process, we're still learning what we can expose as knobs, what kind of areas of innovation we allow the user to explore, and where we sort of walk it down to make it easy to use. 
So I think that's the biggest learning we get from this, is how I can deploy AI in the real world, and what's really needed from a tool chain standpoint. >> Can you talk more specifically about what you guys each bring to the table with your collaboration with AWS? >> Yeah, AWS has been a great partner. Obviously AWS has a huge ecosystem of developers, all kinds of different developers, I mean web developers are one sort of developer, database developers are another, AI developers are yet another, and we're kind of partnering together to empower that AI base. What we bring from a technological standpoint are of course the hardware, our CPUs, our AI ready now with a lot of software that we've been putting out in the open source. And then other tools like OpenVINO, which make it very easy to start using AI models on our hardware, and so we tie that in to the infrastructure that AWS is building for something like DeepRacer, and then help build a community around it, an ecosystem around it of developers. >> I want to go back to the point you were making about the black box, AI, people are concerned about that, they're concerned about explainability. Do you feel like that's a function of just the newness that we'll eventually get over, and I mean I can think of so many examples in my life where I can't really explain how I know something, but I know it, and I trust it. Do you feel like it's sort of a tempest in a teapot? >> Yeah, I think it depends on what you're talking about, if you're talking about the traceability of a financial transaction, we kind of need that maybe for legal reasons, so even for humans we do that. You got to write down everything you did, why did you do this, why'd you do that, so we actually want traceability for humans, even. In other places, I think it is really about the newness. Do I really trust this thing, I don't know what it's doing. Trust comes with use, after a while it becomes pretty straightforward, I mean I think that's probably true for a cell phone, I remember the first smartphones coming out in the early 2000s, I didn't trust how they worked, I would never do a credit card transaction on 'em, these kind of things, now it's taken for granted. I've done it a million times, and I never had any problems, right? >> It's the opposite in social media, most people. >> Maybe that's the opposite, let's not go down that path. >> I quite like Dr. Kate Darling's analogy from MIT lab, which is we already we have AI, and we're quite used to them, they're called dogs. We don't fully understand how a dog makes a decision, and yet we use 'em every day. In a collaboration with humans, so a dog, sort of replace a particular job, but then again they don't, I don't particularly want to go and sniff things all day long. So having AI systems that can actually replace some of those jobs, actually, that's kind of great. >> Exactly, and think about it like this, if we can build systems that are tireless, and we can basically give 'em more power and they keep going, that's a big win for us. And actually, the dog analogy is great, because I think, at least my eventual goal as an AI researcher is to make the interface for intelligent agents to be like a dog, to train it like a dog, reinforce it for the behaviors you want and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric. >> Can you talk about GANs, what is GANs, what's it stand for, what does it mean? >> Generative Adversarial Networks. 
What this means is that, you can kind of think of it as, two competing sides of solving a problem. So if I'm trying to make a fake picture of you, that makes it look like you have no hair, like me, you can see a Photoshop job, and you can kind of tell, that's not so great. So, one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. We have two neural networks that are kind of working against each other, one's generating stuff, and the other one's saying, is it fake or not, and then eventually you keep improving each other, this one tells that one "No, I can tell," this one goes and tries something else, this one says "No, I can still tell." The one that's trying with a discerning network, once it can't tell anymore, you've kind of built something that's really good, that's sort of the general principle here. So we basically have two things kind of fighting each other to get better and better at a particular task. >> Like deepfakes. >> I use that because it is relevant in this case, and that's kind of where it came from, is from GANs. >> All right, okay, and so wow, obviously relevant with 2020 coming up. I'm going to ask you, how far do you think we can take AI, two part question, how far can we take AI in the near to mid term, let's talk in our lifetimes, and how far should we take it? Maybe you can address some of those thoughts. >> So how far can we take it, well, I think we often have the sci-fi narrative out there of building killer machines and this and that, I don't know that that's actually going to happen anytime soon, for several reasons, one is, we build machines for a purpose, they don't come from an embattled evolutionary past like we do, so their motivations are a little bit different, say. So that's one piece, they're really purpose-driven. Also, building something that's as general as a human or a dog is very hard, and we're not anywhere close to that. When I talked about the trillions of parameters that a human brain has, we might be able to get close to that from a engineering standpoint, but we're not really close to making those trillions of parameters work together in such a coherent way that a human brain does, and efficient, human brain does that in 20 watts, to do it today would be multiple megawatts, so it's not really something that's easily found, just laying around. Now how far should we take it, I look at AI as a way to push humanity to the next level. Let me explain what that means a little bit. Simple equation I always sort of write down, is people are like "Radiologists aren't going to have a job." No no no, what it means is one radiologist plus AI equals 100 radiologists. I can take that person's capabilities and scale it almost freely to millions of other people. It basically increases the accessibility of expertise, we can scale expertise, that's a good thing. It makes, solves problems like we have in healthcare today. All right, that's where we should be going with this. >> Well a good example would be, when, and probably part of the answer's today, when will machines make better diagnoses than doctors? I mean in some cases it probably exists today, but not broadly, but that's a good example, right? >> It is, it's a tool, though, so I look at it as more, giving a human doctor more data to make a better decision on. 
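Returning to the GAN description above, here is a deliberately tiny sketch of the two-network setup Rao describes, a generator and a discriminator trained against each other. It is an illustrative toy written with PyTorch, not anything from Intel or AWS; a 1-D generator learns to mimic a Gaussian while the discriminator tries to tell real from fake.

```python
import torch
from torch import nn

# Toy GAN: the generator tries to mimic samples from N(3, 0.5);
# the discriminator tries to tell real samples from generated ones.
real_dist = lambda n: 3.0 + 0.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: push real samples toward 1, generated samples toward 0.
    real = real_dist(64)
    fake = G(torch.randn(64, 8)).detach()          # freeze G while training D
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D label generated samples as "real".
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())  # should drift toward 3.0
```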
So, what AI really does for us is it doesn't limit the amount of data on which we can make decisions, as a human, all I can do is read so much, or hear so much, or touch so much, that's my limit of input. If I have an AI system out there listening to billions of observations, and actually presenting data in a form that I can make better decisions on, that's a win. It allows us to actually move science forward, to move accessibility of technologies forward. >> So keeping the context of that timeframe I said, someday in our lifetimes, however you want to define that, when do you think that, or do you think that driving your own car will become obsolete? >> I don't know that it'll ever be obsolete, and I'm a little bit biased on this, so I actually race cars. >> Me too, and I drive a stick, so. >> I kind of race them semi-professionally, so I don't want that to go away, but it's the same thing, we don't need to ride horses anymore, but we still do for fun, so I don't think it'll completely go away. Now, what I think will happen is that commutes will be changed, we will now use autonomous systems for that, and I think five, seven years from now, we will be using autonomy much more on prescribed routes. It won't be that it completely replaces a human driver, even in that timeframe, because it's a very hard problem to solve, in a completely general sense. So, it's going to be a kind of gentle evolution over the next 20 to 30 years. >> Do you think that AI will change the manufacturing pendulum, and perhaps some of that would swing back to, in this country, anyway, on-shore manufacturing? >> Yeah, perhaps, I was in Taiwan a couple of months ago, and we're actually seeing that already, you're seeing things that maybe were much more labor-intensive before, because of economic constraints are becoming more mechanized using AI. AI as inspection, did this machine install this thing right, so you have an inspector tool and you have an AI machine building it, it's a little bit like a GAN, you can think of, right? So this is happening already, and I think that's one of the good parts of AI, is that it takes away those harsh conditions that humans had to be in before to build devices. >> Do you think AI will eventually make large retail stores go away? >> Well, I think as long as there are humans who want immediate satisfaction, I don't know that it'll completely go away. >> Some humans enjoy shopping. >> Naveen: Some people like browsing, yeah. >> Depends how fast you need to get it. And then, my last AI question, do you think banks, traditional banks will lose control of the payment systems as a result of things like machine intelligence? >> Yeah, I do think there are going to be some significant shifts there, we're already seeing many payment companies out there automate several aspects of this, and reducing the friction of moving money. Moving money between people, moving money between different types of assets, like stocks and Bitcoins and things like that, and I think AI, it's a critical component that people don't see, because it actually allows you to make sure that first you're doing a transaction that makes sense, when I move from this currency to that one, I have some sense of what's a real number. It's much harder to defraud, and that's a critical element to making these technologies work. So you need AI to actually make that happen. >> All right, we'll give you the last word, just maybe you want to talk a little bit about what we can expect, AI futures, or anything else you'd like to share. 
>> I think it's, we're at a really critical inflection point where we have something that works, basically, and we're going to scale it, scale it, scale it to bring on new capabilities. It's going to be really expensive for the next few years, but we're going to then throw more engineering at it and start bringing it down, so I start seeing this look a lot more like a brain, something where we can start having intelligence everywhere, at various levels, very low power, ubiquitous compute, and then very high power compute in the cloud, but bringing these intelligent capabilities everywhere. >> Naveen, great guest, thanks so much for coming on theCUBE. >> Thank you, thanks for having me. >> You're really welcome, all right, keep it right there everybody, we'll be back with our next guest, Dave Vellante for Justin Warren, you're watching theCUBE live from AWS re:Invent 2019. We'll be right back. (techno music)

Published Date : Dec 3 2019

theCUBE Insights | VMworld 2019


 

>> live from San Francisco, celebrating 10 years of high tech coverage. It's the Cube covering Veum, World 2019 brought to you by the M Wear and its ecosystem partners. >> Hey, welcome back, everyone. Live Cube coverage of the emerald 2019 were here in San Francisco, California Mosconi North Lobby. Two sets Our 10th year covering the emerald in our 20th year of Of of our seasons of covering Me to be enterprised Tech. I'm Jeffrey Day Volonte student Justin Warren breaking down day to Cube insights segment. Dave's Do You Do You're on Set Valley set this the meadow set because it's got the steamboat chirping birds behind us. Justin, you've been doing some interviews out on the floor as well. Checking the story's out. All the news is out. Day one was all the big corporate stuff. Today was the product technology news stew. I'll go to you first. What's the assessment on your take on the M, where obviously they're reinventing themselves? Jerry Chen, who we interviewed, said this is Act three of'em where they keep on adding more and more prostitute their core, your thoughts on what's going on. >> So the biggest whore I've seen is the discussion of Tom Zoo, which really talking those cloud native applications. And if you break down VM wear, it's like many companies that said, There's the, you know, core product of the company. It is vey sphere. It is the legacy for what we have and it's not going anywhere, and it's changing. But, you know, then there's the modernization project Pacific howto a bridge to the multi cloud world. How do I bridge Kubernetes is going to come into the sphere and do that? But then there's the application world into the thing I've been. You know, the existential threat to VM, where I've been talking about forever is if we sas if I and cloud If I and all the APS go away, the data centers disappear in Vienna, where dominant, the data center is left out in the cold. So, you know, Pivotal was driving down that that path. They've done a lot of acquisitions, so love directionally where towns who's going time will tell whether they can play in that market. This is not a developer conference. We go to plenty of developer events, so, you know, that's you know, some of the places. I see you know, and and still, you know, >> narrator conference. You're right. Exactly Right. And just I want to get your thoughts, too, because you've been blocking heavily on this topic as well. Dev Ops in general, commenting on the Cube. You know, the reality and the reality, Uh, and the reality of situation from the the announcement. That's a vapor. They're doing some demos. They're really product directions. So product directions is always with VM. Where does it? It's not something that their shameful love, that's what they do. That's what they put out. It's not bakery >> company. It's a statement, A statement of >> direction. We were talking hybrid cloud in 2012 when I asked Pet guess it was a halfway house. He blew a gasket. And now, five years later, the gestation period for hybrid was that. But the end was happy to have the data center back in the back. In the play here, your thoughts on >> Yeah. So this conference is is, I think, a refreshing return to form. So, Vienna, where is as you say, this is an operators conference in Vienna. Where is for operators? It's not Four Dev's. 
There was a period there where cloud was scary And it was all this cloud native stuff in Vienna where tried to appeal to this new market, I guess tried to dress up and as something that it really wasn't and it didn't pull it off and we didn't It didn't feel right. And now Veum Way has decided that Well, no, actually, this is what they and where is about. And no one could be more Veum where than VM wear. So it's returning to being its best self. And I think you >> can software. They know software >> they know. So flick. So the addition of putting predict Enzo in and having communities in there, and it's to operate the software. So it's it's going to be in there an actual run on it, and they wanna have kubernetes baked into the sphere. So that now, yeah, we'll have new a new absent. Yeah, there might be SAS eps for the people who are consuming them, but they're gonna run somewhere. And now we could run them on van. Wait. Whether it's on Silent at the edge could be in the cloud your Veum wear on eight of us. >> David David so I want to get your thoughts just don't want to jump into because, you know, I love pivotal what they've done. I've always felt as a standalone company they probably couldn't compete with Amazon to scale what's going on in the other things. But bring it back in the fold in VM, where you mentioned this a couple of our interviews yesterday, Dave, and still you illuminate to to the fact of the cloud native world coming together. It's better inside VM wear because they can package pivotal and not have to bet the ranch on the outcome in the marketplace where this highly competitive statements out there so you get the business value of Pivotal. The upside now can be managed. Do your thoughts first, then go to date >> about Pivotal. Yeah, as >> an integrated, integrated is better for the industry than trying to bet the ranch on a pier play >> right? So, John, yesterday we had a little discussion about hybrid and multi cloud and still early about there, but the conversation of past five years ago was very different from the discussion. Today, Docker had a ripple effect with Containers and Veum. Where is addressing that and it made sense for Pivotal Cut to come home, if you will. They still have the Pivotal Labs group that can work with customers going through that transformation and a number of other pieces toe put together. But you ve m where is doing a good enough job to give customers the comfort that we can move you forward to the cloud. You don't have to abandon us and especially all those people that do VM Where is they don't have to be frozen where they are >> a business value. >> Well, I think you've got to start with the transaction and provide a historical context. So this goes back to what I used to call the misfit toys. The Federation. David Golden's taking bits and pieces of of of Dragon Pearl of assets in side of E, M. C and V M wear and then creating Pivotal out of whole cloth. They need an I P O. Michael Dell maintained 70% ownership of the company and 96% voting shares floated. The stock stock didn't do well, bought it back on 50 cents on the dollar. A so what the AIPO price was and then took a of Got a Brit, brought back a $4 billion asset inside of the M wear and paid $900 million for it. So it's just the brilliant financial transaction now, having said all that, what is the business value of this? 
You know, when I come to these shows, I'd liketo compare what they say in the messaging and the keynotes to what practitioners are saying in the practitioners last night were saying a couple of things. First of all, they're concerned about all the salmon. A like one. Practitioners said to me, Look, if it weren't for all these acquisitions that they announced last minute, what would we be hearing about here? It would have been NSX and V san again, so there's sort of a little concerns there. Some of the practitioners I talked to were really concerned about integration. They've done a good job with Nasasira, but some of the other acquisitions that they may have taken longer to integrate and customers are concerned, and we've seen this movie before. We saw the DMC. We certainly saw the tell. We're seeing it again now, at the end where Veum where? Well, they're very good at integrating companies. Sometimes that catches up to you. The last thing I'll say is we've been pushing You just mentioned it, Justin. On Dev's not a deaf show. Pivotal gives VM where the opportunity to whether it's a different show are an event within the event to actually attract the depths. But I would say in the multi cloud world, VM wears sitting in a good position. With the exception of developers pivotal, I think it's designed to solve that problem. Just tell >> your thoughts. >> Do you think that Veum, where is, is at risk of becoming a portfolio company just like a M A. M. C. Watts? Because it certainly looks at the moment to me like we look at all the different names for things, and I just look at the brand architecture of stuff. There are too many brands. There are too many product names, it's too confusing, and there's gonna have to be a culottes some point just to make it understandable for customers. Otherwise, we're just gonna end up with this endless sprawl, and we saw what the damage that did it. At present, I am saying >> it's a great point and Joseph Joe to cheese used to say that overlap is better than gaps, and I and I agree with him to appoint, you know better until it's not. And then Michael Dell came in and Bar came and said, Look, if we're gonna compete with Amazon's cost structure, we have to clean this mess up and that's what they've been doing it a lot of hard work on that. And so, yeah, they do risk that. I think if they don't do that integration, it's hard to do that. Integration, as you know, it takes time. Um, and so I have Right now. All looks good, right? Right down the middle. As you say, John, are >> multi cloud. Big topic gestation period is going to take five years to seven years. When the reality multi cloud a debate on Twitter last night, someone saying, I'm doing multi cloud today. I mean, we had Gelsinger's layout, the definition of multi cloud. >> Well, he laid out his definition definition. Everyone likes to define its. It's funny how, and we mentioned this is a stew and I earlier on the other set, cloud were still arguing about what cloud means exit always at multi cloud, which kind of multi cloud is a hybrid bowl over. And then you compare that to EJ computing, which computing was always going on. And then someone just came along and gave it a name and everyone just went, huh? OK, and go on with their lives. And so why is cloud so different and difficult for people to agree on what the thing is? 
>> There's a lot of money being made and lost, That's why >> right day the thing I've said is for multi cloud to be a real thing, it needs to be more valuable to a customer than the sum of its pieces on. And, you know, we know we're gonna be an Amazon reinvent later this year we will be talking, you know? Well, they will not be talking multi cloud. We might be talking about it, but >> they'll be hinting to hybrid cloud may or may not say >> that, you know, hybrid is okay in their world with outpost and everything they're doing in there partnering with VM wear. But you know, the point I've been looking at here is you know, management of multi vendor was atrocious. And, you know, why do we think we're going to any better. David, who hired me nine years ago. It was like I could spend my entire career saying, Management stinks and security needs to be, >> you know, So I want to share lawyers definition. They published in Wicked Bon on Multiply Multi Cloud Hybrid Cloudy, Putting together True Hybrid Cloud Multiply Any application application service can run on any node of the hybrid cloud without rewriting, re compiling or retesting. True hybrid cloud architectures have a consistent set of hardware. Software service is a P I is with integrated network security data and control planes that are native to and display the characteristics of public cloud infrastructure is a service. These attributes could be identically resident on other hybrid nodes independent of location, for example, including on public clouds on Prem or at the edge. That ain't happening. It's just not unless you have considered outposts cloud a customer azure stack. Okay, and you're gonna have collections of those. So that vision that he laid out, I just I think it's gonna >> be David. It's interesting because, you know, David and I have some good debates on this. I said, Tell me a company that has been better at than VM wear about taking a stack and letting it live on multiple hardware's. You know, I've got some of those cars are at a big piece last weekend talking about, you know, when we had to check the bios of everything and when blade Service rolled out getting Veum whereto work 15 years ago was really tough. Getting Veum were to work today, but the >> problem is you're gonna have outposts. You're gonna have project dimensions installed. You're gonna have azure stacks installed. You're gonna have roll your own out there. And so yeah, VM where is gonna work on all >> those? And it's not gonna be a static situation because, you know, when I talk to customers and if they're using V M where cloud on AWS, it's not a lift and shift and leave it there, Gonna modernize their things that could start using service is from the public cloud and they might migrate some of these off of the VM where environment, which I think, is the thing that I am talking to customers and hearing about that It's, you know, none of these situations are Oh, I just put it there and it's gonna live there for years. It's constantly moving and changing, and that is a major threat to VM wears multi clouds, >> Traffic pushes. Is it technically feasible without just insanely high degrees of homogeneity? That's that's the question. >> I I don't think it is and or not. I don't think it's a reasonable thing to expect anyway, because any enterprise you have any M and a activity, and all of a sudden you've got more than one that's always been true, and it will always be true. So if someone else makes a different choice and you buy them, then we'll have both. 
>> So maybe that's not a fair definition, but that's kind of what what? One could infer that. I think the industry is implying that that is hybrid multi club because that's the nirvana that everybody wants. >> Yeah, the only situation I can see where that could maybe come true would be in something like communities where you're running things on as an abstraction on top off everything else, and that that is a common abstraction that everyone agrees on and builds upon. But we're already seeing how that works out in real life. If >> I'm >> using and Google Antos. I can't easily move it to P. K s or open shift. There's English Kubernetes, as Joe Beta says, is not a magic layer, and everybody builds. On top of >> it, is it? Turns out it's actually not that easy. >> Well, and plus people are taken open source code, and then they're forking it and it building their own proprietary systems and saying, Hey, here's our greatest thing. >> Well, the to the to the credit of CNC, if Kubernetes. Does have a kind of standardized, agreed to get away away from that particular issue. So that's where it stands a better chance and say unfortunately, open stack. So because we saw a bit of that change of way, want to go this way? And we want to go that way. So there's a lot of seeing and zagging, at least with communities. You have a kind of common framework. But even just the implementation of that writing it, >> I love Cooper. I think I've been a big fan of committed from Day one. I think it's a great industry initiative. Having it the way it's rolling out is looking very good. I like it a lot. The comments that we heard on the Cube of Support. Some of my things that I'm looking at is for C N C s Q. Khan Come coop con Coming up is what happened with Kay, native and SDO because that's what I get to see the battleground for above Goober Netease. You see, that's what differentiates again. That's where that the vendors are gonna start to differentiate who they are. So I think carbonates. It could be a great thing. And I think what I learned here was virtualization underneath Kubernetes. It doesn't matter if you want to run a lot. Of'em Furat scale No big deal run Cooper's on top. You want to run in that bare metal? God bless you, >> Go for it. I think this use cases for both. >> That's why I particularly like Tenzer is because for those customers who wanna have a bit of this, cupidity is I don't want to run it myself. It's too hard. But if I trust Vienna where to be able to run that in to upgrade it and give me all of the goodness about operating it in the same way that I do the end where again we're in and I'll show. So now I can have stuff I already know in love, and I can answer incriminating on top of it. >> All right, But who's gonna mess up Multi clouds do. Who's the vendor? I'm not >> even saying it s so you can't mess up something that >> who's gonna think vision, this vision of multi cloud that the entire industry is putting forth who's gonna throw a monkey? The rich? Which vendor? Well, screw it. So >> you know, licensing usually can cause issues. You know, our friend Corey Crane with a nice article about Microsoft's licensing changes there. You know, there are >> lots of Amazon's plays. Oh, yeah. Okay. Amazon is gonna make it. >> A multi clock is not in the mob, >> but yet how could you do multi cloud without Amazon? >> They play with >> control. 
My the chessboard on my line has been Amazon is in every multi cloud because if you've got multiple clouds, there's a much greater than likely chance >> I haven't been. You know, my feeling is in looking at the history of how multi vendor of all from the I T industry from proprietary network operating systems, many computers toe open systems, D c P I P Web, etcetera. What's going on now is very interesting, and I think the sea so ce of the canary in the coal mine, not Cee Io's because they like multi vendor. They want multiple clouds. They're comfortable that they got staff for that si sos have pressure, security. They're the canary in the coal mine and all the seasons lights, while two are all saying multi clouds b s because they're building stacks internally and they want to create their own technology for security reasons and then build a P eyes and make a P. I's the supplier relationship and saying, Hey, supplier, if you want to work with me, me support my stack I think that is an interesting indication. What that means is that the entire multi cloud thing means we're pick one clown build on, have a backup. We'll deal with multiple clouds if there's workloads in there but primary one cloud, we'll be there. And I think that's gonna be the model. Yes, still be multiple clouds and you got azure and get office 3 65 That's technically multi cloud, >> but I want to make a point. And when pats on we joke about The cul de sac is hybrid cloud a cul de sac, and you've been very respectful and basically saying Yap had okay, But But But you were right, Really. What's hybrid would show me a hybrid cloud. It's taken all this time to gestate you where you see Federated Applications. It's happening. You have on prim workloads, and you have a company that has public cloud workloads. But they're not. Hybrid is >> the region. Some we'll talk about it, even multi. It is an application per cloud or a couple of clouds that you do it, but it's right. Did he follow the sun thing? That we might get there 15 years ago? Is >> no. You're gonna have to insist that this >> data moving around, consistent >> security, governance and all the organizational edicts across all those platforms >> the one place, like all week for that eventually and this is a long way off would be if you go with Serverless where it's all functions and now it's about service composition and I don't care where it lives. I'm just consuming a service because I have some data that I want to go on process and Google happens to have the best machine learning that I need to do it on that data. Also use that service. And then when I actually want to run the workload and host it somewhere else, I drop it into a CD in with an application that happens to run in AWS. >> Guys wrapping up day to buy It's just gonna ask, What is that animal? It must be an influence because hasn't said a word. >> Thistles. The famous blue cow She travels everywhere with me, >> has an INSTAGRAM account. >> She used to have an instagram. She now she doesn't. She just uses my Twitter account just in time to time. >> I learned a lot about you right now. Thanks for sharing. Great to have you. Great as always, Great commentary. Thanks for coming with Bay three tomorrow. Tomorrow. I want to dig into what's in this for Del Technologies. What's the play there when I unpacked, that is tomorrow on day three million. 
If there's no multi cloud and there's a big tam out there, what's in it for Michael Dell and BM where it's Crown Jewel as the main ingredient guys, thanks for coming stupid in Manchester words, David Want them? John, Thanks for watching day, too. Inside coverage here are wrap up. Thanks for watching

Published Date : Aug 28 2019

Sam Burd, Dell Technologies | Dell Technologies World 2019


 

live from Las Vegas it's the queue covering del technology's world 2019 brought to you by Dell technologies and it's ecosystem partners everyone welcome back to the cubes live coverage here in Las Vegas we are here for Dell technology rules 2019 got two sets I'm John Faraday Volante my co-host Dave day to three days of wall-to-wall coverage I've got a great guest December the president of client Solutions Group at Dell technologies Sam handles all of the big edge machines like the PCs my machine here and other cool stuff Sam thanks for joining us today appreciate it thank you guys for having me so one of the themes that we're seeing I'll see through the transformation going back when Michael went private buys EMC new puzzle pieces this is growing and scaling and one of the big surprises or not surprises is the cloud growth and them data grow that's been fueling a lot of existing businesses the client business one of them that you run yeah as do extremely well the numbers are looking good new machines you know the PC revolution continues evolving that's the state of the art what's the current state of the business give us an update hey so like you said the business is doing really well I'm excited this year we'll have our 35th birthday for Dell and the PC business the business I lead at Dell is where it all started 35 years ago in a dorm room at University of Texas now a forty three billion dollar business it is just a part of Dell so we've become a lot more but growing double digits we've seen a resurgence in the edge and I think like you said one of the things I'm seeing as I talk to companies they're almost seeing that edge is the secret weapon as we talk about all this transformation because getting great employees is the challenge if you want your business to lead in an industry and as we go talk to companies and we talked to Jen Expo we talked to Millennials we talked to Gen Z getting them armed with a great piece of technology where they can be productive in a job and help make a difference in a company or career that's what they want to go and do they want that more than drinks in the break room they want that more than volleyball courts outside and when companies are able to do that with our PC products at the edge they get great people in who helped that company be more successful so we're seeing a really good growth and we're we're dedicated to doing some exciting products for people and it's not easy to I just want to unpack the dynamics between the two worlds that go on one is making the machines go faster smaller less expensive so more horsepower lower prices higher functionality and then the integration to get that kind of a seamless works work lifestyle balance where you got consumer business all kind of blending together where you got to connect the networks you got it you can go to work at Starbucks here in once in a while you got to have all this stuff in it working together with what used to be the big iron back-end systems oh yes oh yeah so you got to you've got two jobs it's true what how do you balance that other different teams or different approaches what's the focus you know we we look at a couple things internally we have really focused on not just the hardware design that we're putting together and the speeds and feeds and we can do that great you take our you know our gaming business we have a we were showing off in the alien you can go over to a alien where a kind of gaming section we have here we have things that have more than 300 watts of power for CPUs and 
graphics in it feels to us if you went back in time it's super compact about what you used to have it's not anything like the latest XPS products I see you guys using there but we can design that kind of power into the systems and then we're focused on the experience we bring alive for people so you think about working with partners I'm working with services teams working with Microsoft working with VMware around how we bring alive the things people want to do on the consumer side like one thing we see people more people now watch TV and pcs then watch TV on TV oh it's like a great experience it's pretty headphones and nobody's bothering ya it's it's pretty good the other the other thing that's interesting I've switched all my viewing that way because we figured out the younger generations that that is even more true for them so in my millennial or gen Z fashion I've started a hundred percent of my TV viewing it's on a PC but it's a great way to do it we've done experience around that with audio video streaming that we go how do we bring that alive same thing on gaming gaming space I want to show you guys hopefully in a couple minutes we can talk about some of the latitudes we announced here but we've done that in the workspace of people want to be productive immediately they want a tool that lets them do that and we said how do we put technology and software and capability together to allow them to have that kind of experience they want that what if some of the things you announced today and you know what's uh what are the exciting parts of them so we brought a are we announced our new latitude lineup so you see from top to bottom some really amazing looking pcs and one of the things if you guys get that look you little high or we go to guys can you guys see that so awesome looking PC the other thing is if you take a look at this we built in different kind of capabilities that allow allow really fast log into the system so there's an Express Express login Express sign-in capability that under no under kind of infrared lights sensors you can basically recognize it recognizes when you walk up to that system it will log you into the system automatically so you don't have to touch the screen the keyboard it all saves you that kind of instant productivity you turn around walk away it'll sense when you're there and when you're not there will log you out of the system we also have something we call Express sign Express charge on this system so people are on the go some of the stats we were sharing when you think about audience here people are working in different offices people are working on the road John you were saying people are working in Starbucks how do we allow you to quickly you plug that in you can get 80% charge in an hour you can get 35% charge in 20 minutes so allow you to get up and going really quickly but basically designing some pretty awesome systems that if you go look at what some of the press is saying about this stuff of finally putting a business system in people's hands that users are gonna covet so we did cool stuff with Alienware we've done that with our XPS product we said we need to bring that into the commercial space so people have really cool tools to get these great reviews just to give a little shout out to props to you guys getting some good reviews also it's it's it's good tailwind for you that Apple is kind of struggling with their MacBooks when the prices are high people are now coming back and look into the PC in fact my son is a big-time gamer you 
depreciate it the acronym is called PCM R which stands for PC master-race because you know the gamers like to be hardcore on the PC gaming huge growth area alien and where is doing great but people people look at whether it's gamer or work you seeing the gamers are guys I think of canary in the coalmine they're I think a leading indicator of a trend around I want a relationship with my device and I want I want to be able to have things available whether it's mobile or or PC or gaming so it's a little bit more intimacy and then there's also a pressure we're seeing on the trend line around augmented reality built into the machine so you start to see again better monitors for K connections you know better immersive yep either whether it's single sign-on authentication to just overall experience that's a big trend yeah and I think you said it on gaming we've built a community around our Alienware brand we've built entry level gaming systems we've turned gaming that we've been in for 23 years with alien we're now at 3 billion dollar business inside our Dell PC business and there's a lot of affinity for people who going hey turn out awesome powered systems and deliver me a kind of experience and speed that I want to win in the game you know it's the same thing though on the commercial side of going people want tools when they're coming to work don't let them do a great job in in their business I know dad wants this question but I want to get one more thing out PC if people talk about other people don't want to hear about speeds and fees when it comes to machines people on a gear speeds and feeds how many cores is there a graphics accelerator in there is there a GPU I need to get AI what's going on with the inside the specs give us the latest state of the art oh we have like so you can look at core explosion in PCs is great the thing that I really like is all these systems now you see USB connectivity so you can put your just people before we're going hey the display is going away so you walk around see we have 49 inch curved displays we have huge 43 inch displays you can get four display side-by-side you can get to 27 inch displays side-by-side I go to trading floors around the world they're stacking two and three of these displays next to each other you can power that out coming out of the USB port on your system you can power that with the graphics on the system and then we have everything up to go to Alienware which is huge core counts but though the power the watts we literally have two huge power supplies - 300 watt power supplies that you're plugging into the back of our gaming desktops it will almost consume the 15 amps that you have in your house circuit to power that system and we fit that in a you know it's about an eight pound system today that's maybe an inch and inch and a bit thick that if you go back to legacy pcs we're talking about we're almost at 2020 in a new decade if you go back to the start of this decade that was like run in the middle average PC that we're now fitting incredible power into so I think all that and GPUs are up and what's the status on because graphics processors has become a big latest great racing graphics processors that we're now waiting the thing that's exciting to me is on the games think we'll see games now catch up to 2000 series GPUs from laying the race race and I think it's an important innovation because that's going to really come and help the gaming but also it's starting to bleed into some other creative areas we're way to get you stocked up 
with some alien we're here walking out of it I'm waiting for a display the curtains excited I want the curb display no we we see it in games we also see it in advertising so it's amazing the stuff you can go and do it say render a vehicle in a photo shoot that you used to have to go to a remote location and basically ray-tracing allows you to render that scene by putting individual beams of light into that into the interact with all the geometry that you have and it shows what it'll basically draw that picture for you so you get all kinds of nuances of shadows other images flickers and reflections that are just amazing and lifelike realism so we're gonna see that in games you see graphics designer is doing that in TV commercials and in print ads and you do it without ever having to touch the physical product which it's hugely time and processor compute graphics intensive to go and do that but you're now seeing us able to do that on a I brought in a precision workstation it's a little bit bigger than this and it's a horse-collar on the machines can handle that ray tracing that's the whole point yes guys are connecting the edge with your your laptops your your your your your PC's what are you doing a stress test them on the edge torture test you're doing any fun stuff like dropping them from the building and throwing flames at them and yeah what we do we have some fun labs so in Austin Texas we have a lot of fun whether it's dropping systems which is not unrealistic of what happens in the environment we actually find our hardest users are students in education environments so we've commercial really important because like the XPS I see you guys are using people will take a little bit better care of the stuff when it's their own dollars that went to that but you know the the work system gets thrown in a bag it gets thrown in in the back of the car so you look at temperature testing cold hot drops waters coffee in the office environment water in the office environment that gets thrown against it so we do all that kinds of stuff but we've learned a lot from students and we do things like little micro drop tests because you had literally we had systems that got not banged against the floor but the slammd in the bag by a student you know thousands of times across the lifecycle that we had to go and change how we engineer some of the connectors and how the systems are set up just to make them really durable so whatever you talk about your business a little bit John knows I'd love to get into the business that I want to explore the importance of the the client business to Dell it's about half of your revenue just a little under half of the revenue obviously lower margin than some of the enterprise businesses but it's critical and this is what the company was founded on it absorbs a lot of the corporate overhead it's growing what's going on in the business units dollars what can you share with us yes so forty-three billion dollar business grew double digits last year we had for the last five quarters we've led the industry in growth which is a reflection of our real focus on what customers are looking for and delivering great products to them we have 25 quarters of gaining share 25 consecutive quarters so we have a really good run going in the business we look at this year I see the industry continues to consolidate top three players in our industry are around less than 65% share kind of 63 and change and in most industries you see them as they've become more mature you see them more consolidated 
than than where we are today it's been consolidating last six years we've gained six hundred basis points a share we think is Michael and our team have invested in great designs and great experiences to customers there's lots of runway to continue growth here and you know that's what we're the thing that gets me excited in our engineers is turning out products that our customers go and love and as we went private you really began to transform this company we said we want to be the best bar no one in this industry and we've really you see that in the Alienware you see that in XPS you see that what we're doing in the latitude space we continue to set a very high bar for ourselves in the growth so people tend to keep their laptops longer you got to sell these cloud apps and it's great as a user you have to replace your your laptop every you know 15 months yeah I'm sure you'd love us to do that but so where's the growth coming from is that new applications is it obviously share gains and and how will it continue yeah well we see it more the premium space is growing a lot where people have said hey I want to trade up whether that's the the gamer like your son a user on XPS who wants a really mobile system that they can throw in their backpack or throw in their purse and take take with them it's interesting in the commercial space we actually see some of the highest end systems that we sell in our work station business have the fastest turnover and change rate because when you can add more cores more horsepower to that and go my expensive engineer designing airplanes or my graphics design or doing advertisements or videos for the company can now be more productive people go I want to spend the $3,000 because in comparison to the salary and the time I'm saving I'll get the best talent they're happier because it gets done faster and my business gets more done that's where they're actually switching the system's over so it's to us to make that easier and then the other thing that we're doing that's really interesting and that we announced this week is we're working across our businesses so we've gotten out of just the you know look at the hardware but we're going how do I partner with the services business how do I partner with VMware and start to make the whole process that get in technology and users hands easier because if you look at if you look at companies today 75% of their spend in our space is on all the stuff other than the hardware and the devices so it's like planning going and doing deployment where I have technical people literally with box cutters opening boxes putting new images on systems they struggle to keep systems up-to-date how do I manage support them take all the calls that are coming in you start looking at that and you go there's a way we've we've always tried to redo it but it was like shuffle around where the people are and hey I can take your people and do the thing for you cheaper or maybe not because then you start getting charged for all these crazy change things now we're going pay with software and services I can start doing this in an automated intelligent way that makes it a lot easier so I can go when I want you me any of us to have an awesome system go start taking that other cost out make it easy and fast and then you go the system can be updated someone can go I get better technology in my users hands and hey I save money doing it because I'm not spending on this other crazy stuff hopefully invest a little more here but also invest in the infrastructure 
transformation they have going on 5% is seventy five fifty five percent the buckets what a hundred billion is that fair enough in commercial space if we throw phones printers everything in there's about two hundred billion dollars in companies spent on hardware four hundred billion on other stuff if you look at pcs that ratio it's a bunch of the two hundred billion and it's in a billion you can attack with just better services and automation and things like that's and that's what we're doing like with VMware and with our services team with going like how can i integrate take VMware software integrate with our Factory and go when your new system shows up it has your apps and your image on it you plug in you're literally logged in doing final last mile customization so think new employee rather than having to download a bunch of stuff or an IT person comes and sets up your system you get that system with what you need your profile which we figured out we've been figured out hey here's the kind of users aren't you are you're a really mobile person we're going to want to get you this system you're plugged in with that new system going in minutes and it eliminates that sneakernet of a bunch of people doing it and turns it into intelligence and sauce so that's tens of billions in Tam expansion yeah absolutely yes I think it's we look at is hey it's it's a good opportunity for us to expand and then it saves customers it saves them time and money it makes it easier you're innovating on two fronts making a great device more horsepower to get that step-up function on new kinds of productivity that warrant the price increase for the user and then all that integration back-end yes to innovation tracks big time yeah and then we have to keep pushing on the physical hardware and that's where I go if you went back in time ten years ago you know it's like the systems were big and thick we never imagined they would be this slim this powerful I look at the future and go when you think about AR VR you think about more natural interaction with systems with voice and with breaking pen really a first user class with the keyboard I think there's a lot of opportunity going forward we want to do stuff that will cause people to want to buy new systems so it's a good challenge to have well we'll do a deal for you with the cube special sponsorship consideration for the curve monitors and all the crates thanks for coming on and we got ray tracing into the cube conversation here Sam thanks for come on share and congratulations new success PCs getting stronger faster new productivity gains with ray tracing all this other stuff happening this is what cloud and data does it's the cue bringing you all the content here's the content cannon two sets be right back with more coverage here at Dell technology world after the short break [Music]
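Burd's description of ray tracing above, firing individual beams of light into a scene and letting them interact with the geometry to produce the shading, shadows and reflections, can be illustrated with a minimal, generic sketch. The example below is an editorial illustration of the basic technique only, not Dell or GPU-vendor code; the scene (a single sphere, a single light) and every name in it are invented for the example.

```python
# Minimal ray-tracing sketch: one sphere, one light, Lambertian shading.
# Illustrative only; real renderers trace many bounces for reflections and shadows.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def intersect_sphere(origin, direction, center, radius):
    """Return distance along the ray to the sphere, or None if it misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # a == 1 because direction is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=40, height=20):
    camera = (0.0, 0.0, -3.0)
    sphere_center, sphere_radius = (0.0, 0.0, 0.0), 1.0
    light = normalize((1.0, 1.0, -1.0))   # direction toward the light
    shades = " .:-=+*#%@"                 # darkest to brightest
    for j in range(height):
        row = []
        for i in range(width):
            # One ray per pixel, shot through an image plane in front of the camera.
            x = (i / (width - 1)) * 2.0 - 1.0
            y = 1.0 - (j / (height - 1)) * 2.0
            direction = normalize((x, y, 3.0))
            t = intersect_sphere(camera, direction, sphere_center, sphere_radius)
            if t is None:
                row.append(" ")
            else:
                hit = tuple(o + t * d for o, d in zip(camera, direction))
                normal = normalize(tuple(h - c for h, c in zip(hit, sphere_center)))
                brightness = max(0.0, sum(n * l for n, l in zip(normal, light)))
                row.append(shades[int(brightness * (len(shades) - 1))])
        print("".join(row))

if __name__ == "__main__":
    render()
```

Running it prints a small ASCII-shaded sphere. Production ray tracers add shadow rays, reflection bounces and GPU acceleration, which is what the 2000-series GPUs mentioned above provide in hardware.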

Published Date : Apr 30 2019

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

EntityCategoryConfidence
$3,000QUANTITY

0.99+

80%QUANTITY

0.99+

35%QUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

Sam BurdPERSON

0.99+

MichaelPERSON

0.99+

DellORGANIZATION

0.99+

300 wattQUANTITY

0.99+

two hundred billionQUANTITY

0.99+

27 inchQUANTITY

0.99+

four hundred billionQUANTITY

0.99+

15 ampsQUANTITY

0.99+

JohnPERSON

0.99+

DecemberDATE

0.99+

49 inchQUANTITY

0.99+

15 monthsQUANTITY

0.99+

SamPERSON

0.99+

43 inchQUANTITY

0.99+

23 yearsQUANTITY

0.99+

20 minutesQUANTITY

0.99+

25QUANTITY

0.99+

75%QUANTITY

0.99+

Las VegasLOCATION

0.99+

2020DATE

0.99+

last yearDATE

0.99+

Dell TechnologiesORGANIZATION

0.99+

Las VegasLOCATION

0.99+

5%QUANTITY

0.99+

thousands of timesQUANTITY

0.99+

AppleORGANIZATION

0.99+

Austin TexasLOCATION

0.99+

three daysQUANTITY

0.99+

63QUANTITY

0.98+

MacBooksCOMMERCIAL_ITEM

0.98+

StarbucksORGANIZATION

0.98+

25 quartersQUANTITY

0.98+

two jobsQUANTITY

0.98+

todayDATE

0.98+

tens of billionsQUANTITY

0.98+

more than 300 wattsQUANTITY

0.98+

XPSTITLE

0.98+

twoQUANTITY

0.98+

35th birthdayQUANTITY

0.98+

about two hundred billion dollarsQUANTITY

0.98+

forty three billion dollarQUANTITY

0.98+

3 billion dollarQUANTITY

0.98+

threeQUANTITY

0.97+

35 years agoDATE

0.97+

this weekDATE

0.97+

hundred percentQUANTITY

0.97+

two worldsQUANTITY

0.97+

two frontsQUANTITY

0.97+

hundred billionQUANTITY

0.97+

DavePERSON

0.97+

seventy five fifty five percentQUANTITY

0.97+

an hourQUANTITY

0.95+

forty-three billion dollarQUANTITY

0.95+

this yearDATE

0.95+

VMwareORGANIZATION

0.94+

ten years agoDATE

0.94+

oneQUANTITY

0.93+

VMwareTITLE

0.93+

Jen ExpoEVENT

0.92+

less than 65% shareQUANTITY

0.91+

2019DATE

0.9+

three playersQUANTITY

0.9+

two setsQUANTITY

0.89+

last six yearsDATE

0.89+

six hundred basis pointsQUANTITY

0.88+

a billionQUANTITY

0.86+

this yearDATE

0.86+

one moreQUANTITY

0.84+

John Faraday VolantePERSON

0.83+

two setsQUANTITY

0.81+

aroundQUANTITY

0.79+

an inch and inchQUANTITY

0.77+

about half of your revenueQUANTITY

0.77+

Clement Pang, Wavefront by VMware | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2018. Brought to you by Amazon web services, intel, and their ecosystem partners. >> Welcome back everyone to theCUBE's live coverage of AWS re:Invent, here at the Venetian in Las Vegas. I'm your host, Rebecca Knight, along with my co-host John Furrier. We're joined by Clement Pang. He is the co-founder of Wavefront by VMware. Welcome. >> Thank you Thank you so much. >> It's great to have you on the show. So, I want you tell our viewers a little bit about Wavefront. You were just purchased by VMware in May. >> Right. >> What do you do, what is Wavefront all about? >> Sure, we were actually purchased last year in May by VMware, yeah. We are an operational analytics company, so monitoring, I think is you could say what we do. And the way that I always introduce Wavefront is kind of a untold secret of Silicon Valley. The reason I said that is because in the, well, just look at the floor. You know, there's so many monitoring companies doing logs, APM, metrics monitoring. And if you really want to look at what do the companies in the Valley really use, right? I'm talking about companies such as Workday, Watts, Groupon, Intuit, DoorDash, Lyft, they're all companies that are customers of Wavefront today. So they've obviously looked at all the tools that are available on the market, on the show floor, and they've decided to be with Wavefront, and they were with us before the acquisition, and they're still with us today, so. >> And they're the scale-up guys, they have large scale >> That's right, yeah, container, infrastructure, running clouds, hybrid clouds. Some of them are still on-prem data centers and so we just gobble up all that data. We are platform, we're not really opinionated about how you get the data. >> You call them hardcore devops. >> Yes, hardcore devops is the right word, yeah. >> Pushing the envelope, lot of new stuff. >> That's right. >> Doing their own innovation >> So even serverless and all the ML stuff that that's been talked about. They're very pioneering. >> Alright, so VMware, they're very inquisitive on technology, very technology buyers. Take a minute to explain the tech under the covers. What's going on. >> Sure, so Wavefront is a at scale time series database with an analytics engine on top of it. So we have actually since expanded beyond just time series data. It could be distributed histograms, it could be tracing, it includes things like events. So anything that you could gather up from your operation stack and application metrics, business metrics, we'll take that data. Again, I just said that we are unopinionated so any data that you have. Like sometimes it could be from a script , it could be from your serverless functions. We'll take that data, we'll store it, we'll render it and visualize it and of course we don't have people looking at charts all day long. We'll alert you if something bad is going on. So teams just really allow the ability to explore the data and just to figure out trends, correlations and just have a platform that scales and just runs reliably. >> With you is Switzerland. >> Yeah, basically I think that's the reason why VMware is very interested, is cause we work with AWS, work with Azure, work with GCP and soon to be AliCloud and IBM, right. >> Talk about why time series data is now more on board. We've got, we've had this conversation with Smug, we saw the new announcement by Amazon. So 'cause if you 're doing real-time, time matters and super important. 
Why is it important now? Why are people coming to the realization, as the early adopters, the pioneers? >> That's right. I used to work at Google, and I think Google, very early on, realized that time series is a way to understand complex systems, especially if you have ephemeral workloads. And so I think what companies have realized is that logs are just very voluminous, very difficult to wield, and then traditional APM products tend to just show you what they want to show you, like what are the important pain points that you should be monitoring. With Wavefront, it's just a tool that understands time series data, and if you think about it, most of the data that you gather out of your operational environment is time series data. CPU, memory, network, how many people are logging in, how many errors, how many people are signing up. We certainly have customers like Lyft. You know, how many people are getting rides, how many credit cards are authorized. All of that information drives, should we pay someone because in a certain city nobody is getting picked up, and that's kind of the dimension that you want to be monitoring on, not on the individual, like, okay, this box has no network, even though we monitor those of course. >> You know, Clement, I got to talk to you about the supporting point because we've been covering real time, we've been covering IoT, we've been doing a ton of stuff around looking at the importance of data and having data be addressable in real time. And the database is part of the problem and also the overall architecture of the holistic operating environment. So to have an actual understanding of time series is one thing. Then you actually have to operationalize it. Talk about how customers are implementing and getting value out of time series data and how they differentiate that from the data lakes that they might spin up, as well as the Hadoop data in it. Some might not be valuable. All of this is now coming together. How do people do that? >> So I think there were a couple of dimensions to that. Scalability is a big piece. So you have to be able to take in an enormous amount of data, (mumbles) data lakes can do that. It has to be real time, so our latency from ingestion to materialization on a chart is under a second. So if you're a devops team, you're spinning up containers, you can't go blind for even 10 seconds, or else you don't know what's going on with your new service that you just launched. So real time is super important, and then there's analytics. You can see all the data in real time, but if it's like millions of time series coming in, it's like the matrix, you need to have some way to actually gather some insights out of that data. So I think that's what we are good at. >> You know, a couple of years ago we were doing Open Compute, a summit that Facebook puts on, and you eventually worked with Google, so I see you're talking about the cutting-edge tech companies. There's so much data going on at that scale, you need AI, you've got to have machines do some of the processing; you can't have this manual process or even scripts, you've got to have machines that take care of it. Talk about the at-scale component, because as the tsunami of data continues to grow, I mean Amazon's got a satellite, Lockheed Martin, that's going to light up edge computing, autonomous vehicles, petabytes moving to the cloud, time series matters. How do people start thinking about machine learning and AI? What do you guys do?
>> So I think post-acquisition, I would say, we really doubled down on looking at AI and machine learning in our system. Because we don't downsample any of the data that we collect, we actually have the raw data coming in from weather sensors, from machines, from infrastructure, from cloud, and we're just able to learn on that, because we understand incidents, we understand anomalies. So we can take all of that data and punch it through different kinds of algorithms and figure out, maybe we could just have the computer look at the incoming time series data and tell you if it's anomalous, right? The holy grail for VMware, I think, is to have a self-driving data center, and what that means is you have systems that understand. Well, yesterday there was a reinforcement learning announcement by Amazon. How do we actually apply those techniques so that we have the observability piece, and then we have some way to in fact effect change against the environment, and then we figure out, you know, just let the computer do it. >> I love this topic. You should come into our studio, if I'm allowed to; we'll do a deep dive on this, because there are so many implications for the data. Because if you have real-time data, you've got to have the streaming data come in, you've got to make sense of it. In the old networking days, we called it differentiated services. You've got to differentiate the data. Machine learning, if the data's good, works great, but if the data sucks, machine learning doesn't go well. So you want that dynamic of managing the data so you don't have to do all this cleaning. How do people get that data verified, how do they set up the machine learning?
If there are any violations, we show you the actual request including multiple services that are involved in that request and just give you an out of the box turn keyway to understand at scale, microservice deployments, where are the pain points, where is latency coming from, where are the errors coming from. So that's kind of our first offering that we're launching. Same pricing mode, all that. >> So how are companies going to use this? What kind of business problem is this solving. >> So as the world transitions to a deployment architecture that mostly consists of Microservices, it's no longer a monolytic app, it's no longer an end-tier application. There are a lot of different heterogeneous languages, frameworks are involved, or even AWS. Cloud services, SAS services are involved and you just have to have some way to understand what is goin on. The classic example I have is you could even trace things like an actual order and how it goes through the entire pipeline. Someone places the orders, a couple days later there's someone who, the orders actually get shipped and then it gets delivered. You know, that's technically a trace. It could be that too. You could send that trace to us but you want to understand, so what are the different pieces that was involved. It could be code or it could be like a vendor. I could be like even a human process. All of that is a distributed tracing atom and you could actually send it to Wavefront and we just help you stitch that picture together so you could understand what's really going on. >> What's next for you guys. Now you're part of VMware. What's the investment area, what are you guys looking at building, what's the next horizon? >> So I think, obviously the (mumbles) tracing, we still have a lot to work on and just to help teams figure out, what do they want to see kind of instantly from the data that we've gathered. Again, we just have gathered data for so long, for so many years and at the full resolution so why can't we, what insights can develop out of it and then as I said, we're working on AI and ML so that's kind of the second launch offering that we have here where you know, people have been telling us, it's great to have all the analytics but if I don't have any statistical background to anything like that, can you just tell me, like, I have a chart, a whole bunch of lines, tell me just what I should be focusing on. So that's what we call the AI genie and so you just apply, call it a genie I guess, and then you would basically just have the chart show you what is going wrong and the machines that are going wrong, or maybe a particular service that's going wrong, a particular KPI that's in violation and you could just go there and figure out what's-- >> Yeah, the genie in the bottle. >> That's right (crosstalk) >> So final question before we go. What's it like working for VMware start-up culture. You raised a lot of money doing your so crunch based reports. VMware's cutting edge, they're a part with Amazon, bit turn around there, what's it like there? >> It's a very large company obviously, but they're, obviously as with everything, there's always some good points and bad points. I'll focus on the good. So the good things are there's just a lot of people, very smart people at VMware. They've worked on the problem of virtualization which was, as a computer scientist, I just thought, that's just so hard. How do you run it like the matrix, right, it's kind of like and a lot of very smart people there. 
A lot of the stuff that we're actually launching includes components that were built inside VMware based on their expertise over the years and we're just able to pull, it's just as I said, a lot of fun toys and how do we connect all of that together and just do an even better job than what we could have been as we were independent. >> Well congratulations on the acquisition. VMware's got the radio event we've covered. We were there, you got a lot of engineers, a lot of great scientists so congratulations. >> Thank you so much. >> Great, Clement thanks so much for coming on theCUBE. >> Thank you so much Rebecca. >> I'm Rebecca Knight for John Furrier. We will have more from AWS re:Invent coming up in just a little bit. (light electronic music)

Published Date : Nov 29 2018

SUMMARY :

Brought to you by Amazon web services, intel, of AWS re:Invent, here at the Venetian in Las Vegas. Thank you so much. It's great to have you on the show. so monitoring, I think is you could say what we do. and so we just gobble up all that data. So even serverless and all the ML stuff Take a minute to explain the tech under the covers. So anything that you could gather up is cause we work with AWS, work with Azure, So 'cause if you 're doing real-time, time matters most of the data that you gather You know, Clement, I got to talk to you it's like the matrix, you need to have some way and AI, what do you guys do. and what that means is you have systems so you don't have to do all this cleaning. of the data that we're gathering. What's the news, what are you doing here at AWS. and just give you an out of the box turn keyway So how are companies going to use this? and we just help you stitch that picture together what are you guys looking at building, and so you just apply, call it a genie I guess, So final question before we go. and how do we connect all of that together We were there, you got a lot of engineers, for coming on theCUBE. in just a little bit.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Rebecca KnightPERSON

0.99+

AmazonORGANIZATION

0.99+

John FurrierPERSON

0.99+

Clement PangPERSON

0.99+

ClementPERSON

0.99+

VMwareORGANIZATION

0.99+

AWSORGANIZATION

0.99+

RebeccaPERSON

0.99+

IBMORGANIZATION

0.99+

GrouponORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

WattsORGANIZATION

0.99+

WavefrontORGANIZATION

0.99+

DoorDashORGANIZATION

0.99+

IntuitORGANIZATION

0.99+

WorkdayORGANIZATION

0.99+

last yearDATE

0.99+

twoQUANTITY

0.99+

MayDATE

0.99+

LyftORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

yesterdayDATE

0.99+

hundredsQUANTITY

0.99+

JavaTITLE

0.99+

10 secondsQUANTITY

0.99+

secondQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

SwitzerlandLOCATION

0.99+

Lockheed MartinORGANIZATION

0.98+

Las VegasLOCATION

0.98+

two functionalitiesQUANTITY

0.97+

Spring BootTITLE

0.97+

VenetianLOCATION

0.97+

todayDATE

0.96+

a couple days laterDATE

0.96+

single lineQUANTITY

0.95+

first offeringQUANTITY

0.95+

single platformQUANTITY

0.95+

single chartQUANTITY

0.94+

second launchQUANTITY

0.94+

single shopQUANTITY

0.94+

tens of thousandsQUANTITY

0.92+

AliCloudORGANIZATION

0.92+

couple of years agoDATE

0.9+

millions of time seriesQUANTITY

0.89+

Job WizardTITLE

0.87+

GCPORGANIZATION

0.82+

theCUBEORGANIZATION

0.81+

Open ComputeEVENT

0.81+

JerseyTITLE

0.8+

Invent 2018EVENT

0.76+

AzureORGANIZATION

0.72+

re:Invent 2018EVENT

0.7+

WavefrontTITLE

0.66+

re:EVENT

0.64+

re:InventEVENT

0.62+

SmugORGANIZATION

0.52+

RiseTITLE

0.47+

InventEVENT

0.43+

Suneil Mishra, Tensyr | Autotech Council 2018


 

>> Narrator: From Milpitas, California, at the edge of Silicon Valley, it's theCUBE, covering autonomous vehicles. Brought to you by Western Digital. >> Hey. Welcome back, everybody. Jeff Frick here with theCUBE. We're in Milpitas, California at the Autotech Council Autonomous Vehicle Event. Autotech Council is an interesting organization really trying to bring a lot of new Silicon Valley technology companies, and get them involved with what's going on in industries. They've done a Teleco Council. This is the auto one. We were here last year. It was all about mapping. This is really kind of looking at the state of autonomous vehicles. We're excited to be here. It's a small intimate event, about 300 people. A couple of cool, dem hook cars out side. And our first guest is here. He's Suneil Mishra. He is the strategic marketing for Tensyr. Nice to be here. >> Thanks, Jeff. Appreciate you having us. >> Yeah. So, give us the overview on Tensyr. >> Sure. So we're a Silicon Valley startup, venture-backed. We're actually just coming out of stealth. So you're one of the first folks to hear about-- >> Jeff: Congratulations. >> what we're up to. And we're basically doing software platforms to actually accelerate autonomous vehicles into production, doing all the things around safety and efficiency, and ROI that will be important when we actually want to make money on all of this stuff. >> Right. So what does that mean because obviously, you're in Palo Alto. I'm in Palo Alto. We see the Waymo cars driving around all the time. And it seems like every day I see a few more cars running around with LIDAR stacks on top. You know, those are all kind of R and D login miles, doing a lot of tests. What are some of the real challenges to get it from where it is today to actual production? And how are you guys helping that process? >> Sure. So yeah, I mean a lot of what people don't think about is these R and D kind of pilot cars. They actually are doing R and D. It's trial and error. That's the whole point of R and D. When you get to production, you can't have that error part anymore. And so safety suddenly becomes a critical element. And part of the things of getting safety is being much more efficient on the vehicle because you have to do a lot more software in order to be safe across multiple different kinds of examples of streets, and locations, of weather conditions, and so on. So, we basically provide essentially all of the glue, all of the grunt work, at the lower levels, to make things as efficient as possible, as safe as possible, as secure as possible. And also making things adaptable and flexible. There's lots of different hardware coming down the pipeline from all different vendors. And if you're a production vehicle, it's which ones you choose. There may be different configurations for different cost points of vehicles. And then of course when you're looking to the future as a production vehicle manufacturer, how do you know which pieces of hardware to use and whether your software will work or not? We kind of give you a lot of insight into all of those things that allow you to certify that your products are safe. And so we don't build the stacks themselves, but we actually take people self-driving models, and we accelerate them onto the vehicles. >> Jeff: With your software in the ecosystem of the self-driving car hardware. >> Exactly. So we have an actual runtime engine that will set on the end device, in this case a vehicle. 
And it will actually optimize the scheduling, the orchestration of all of your code. That makes it much more efficient. And we can monitor that so you can mitigate for safety. And if something does go wrong, we're essentially like a black box where you can actually see what actually happened to your software. >> So it's interesting. We talked a little bit before we turned the cameras on that a lot of the self-driving vehicles are Fords. We talked to the guys at Phantom and apparently, it's a really nice system to be able to get computer control into the control mechanisms of the car. But you said there's a whole layer of how do you define being able to interact with the control systems of the car, versus is it safe, is it ready for production, and kind of taking it beyond that R and D level. So what are some of the real challenges that people need to be aware of when we're going to make that big leap. >> Yeah, so I mean, a couple of the big things that happen is when you're seeing these pilot vehicles driving around, the amount of software that they actually have on there to control the vehicles is very tuned for the particular cases. That's why you see a lot of these vehicles out in places like Arizona where it's sunny weather. You're not having to deal with snow and all the rest of that stuff. >> Jeff: Right. >> If they actually take a car and move it to Michigan for the snow test, they'll actually deploy different software to do the snow case. But when you're actually in a production vehicle, and nobody can actually come back and change that software, you're going to have to load all of those types of solution, on at the same time. That requires more space, more compute power. And so for solutions like ours, we actually allow the production manufacturers to figure out what the optimal solutions are in those cases because you can't come back and change the software. You don't have an engineer that can go tweak that code. And you don't have a safety driver, of course, to go grab the wheel if something goes wrong. These things essentially have to be able to go out there in the wilderness for years and years, and actually work. So it's a whole different classification of problem that takes a lot more compute power. And people who are seeing those giant sets of sensor rigs don't probably realize there's also a giant trunk for clarisitive, where if there's compute power in the back, running 3,000 watts of power. When you actually get to deployment, you're going to have an embedded system with maybe 500 watts of power. So you have less compute power, and you're trying to do more with it. So it's quite a challenging problem, to actually jump to production. And we're kind of smoothing out a lot of those wrinkles. >> Right. So, I just want to get your kind of perspective on kind of the Apple approach, which everyone kind of sees Tesla as. Right? It's soup to nuts, it's the car's design, it's the software, versus kind of an industry approach where you have all these different players, obviously, 300 people here at this event. There's autonomous vehicle events going on all over the place where you got all these component manufacturers, and component parts, coming together to create the industry autonomous vehicles versus just the Tesla. So what's kind of the vibe in the industry? It feels like early days. Everybody's cooperating. How is this think kind of coalescing? >> Yeah. 
I think what we're seeing, we basically talk to people up and down the stack, because anyone who's doing this stuff is a potential customer for us, so automotive OEMs to tier one suppliers, to the AI startups are building these software stacks, they're all potential customers for us. What we're seeing from everyone is they're saying there's so many difficult problems to solve along this path that no company can really do it themselves. And of course, you're seeing big companies investing billions of dollars. But it's great because everybody's saying, let's find people that specialize, whether it's in sensors, or compute, all the rest of those things. And kind of get them, and partner with them, have everybody solve the right problem that they're specialized and focused on. And we essentially can kind of come in and we solve parts of those problems, but we're also kind of the glue that fills a lot of those things together. So we actually see ourselves as being quite advantageous in that anyone who's doing their specialized piece, contributes into the collective. And we kind of build that collective and make it easy for the actual end vendor that's trying to sell a car or run a service, to actually access all those mechanisms. >> And are kind of the old school primary manufacturers still the focal point of the coalescing around this organization or are they losing kind of that position? >> I wouldn't say their losing it. It's kind of an interesting play. So you've got a bunch of traditional automotive guys who actually don't really, not to diss them, but they don't really understand large-scale software because they haven't had that in their vehicles until now. And at the same time you've got kind of your startup mode software experts that don't really understand a lot about automotive. But eventually, it's got to go on a car. And so what we're finding is the automotive manufacturers are really saying to get to production, we need certain kinds of safety guarantees and ROI and so on. So they're really driving from that point of view. The software guys are kind of saying, well, we're just going to throw the software over to you and sort of, good luck. So, we're actually finding both sides care, but nobody's quite sure who should be taking the lead. So I think we're getting to the point where ultimately, automotive manufacturers will be the one shipping vehicles and that software's going to be on their car. So they're going to be the ones that care about it most. So we're actually seeing them being quite proactive about how do we solve these problems. How do we get from the R and D stage to the actual production stage? So that's where we're seeing a lot of the interest on our side. >> All right, Suneil. We could go on forever, but we have to leave it there. And congratulations on your launch and coming out of stealth. And we're excited to watch the story unfold. >> Great. Thanks, Jeff. I appreciate the time. >> All right. He's Suneil. I'm Jeff Frick. You're watching The Cube from the Autotech Council Autonomous Vehicle Event in Milpitas, California. Thanks for watching. (upbeat music)

Published Date : Apr 14 2018

SUMMARY :

Brought to you by Western Digital. This is the auto one. Appreciate you having us. So, give us the overview on Tensyr. So you're one of the first folks to hear about-- doing all the things around safety and efficiency, What are some of the real challenges to get And part of the things of getting safety is being Jeff: With your software in the ecosystem of the And we can monitor that so you can mitigate for safety. that a lot of the self-driving vehicles are Fords. and all the rest of that stuff. the production manufacturers to figure out all over the place where you got all And of course, you're seeing big companies And at the same time you've got kind of your startup mode And congratulations on your I appreciate the time. Council Autonomous Vehicle Event in Milpitas, California.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JeffPERSON

0.99+

MichiganLOCATION

0.99+

Jeff FrickPERSON

0.99+

3,000 wattsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

ArizonaLOCATION

0.99+

FordsORGANIZATION

0.99+

500 wattsQUANTITY

0.99+

Suneil MishraPERSON

0.99+

TensyrORGANIZATION

0.99+

Teleco CouncilORGANIZATION

0.99+

Western DigitalORGANIZATION

0.99+

SuneilPERSON

0.99+

Silicon ValleyLOCATION

0.99+

Autotech CouncilORGANIZATION

0.99+

last yearDATE

0.99+

TeslaORGANIZATION

0.99+

Milpitas, CaliforniaLOCATION

0.99+

300 peopleQUANTITY

0.99+

first guestQUANTITY

0.99+

WaymoORGANIZATION

0.98+

todayDATE

0.98+

billions of dollarsQUANTITY

0.98+

AppleORGANIZATION

0.98+

about 300 peopleQUANTITY

0.97+

oneQUANTITY

0.97+

first folksQUANTITY

0.96+

PhantomORGANIZATION

0.95+

theCUBEORGANIZATION

0.95+

Autotech Council Autonomous Vehicle EventEVENT

0.95+

both sidesQUANTITY

0.94+

The CubeTITLE

0.91+

TensyrPERSON

0.88+

Autotech Council Autonomous Vehicle EventEVENT

0.78+

yearsQUANTITY

0.74+

more carsQUANTITY

0.63+

2018DATE

0.51+

Nader Shalessi, NGD Systems & Scott Shadley, NGD Systems | CUBEConversation, March 2018


 

>> Hi. I'm Peter Burris and welcome to another Cube Conversation. We're here in our Palo Alto studios and we got some really interesting guests, really interesting topic. We're going to talk about something called Computational Storage. Nader Salessi is the CEO of NGD Systems. >> Hello. >> And Scott Shadley is the VP of Marketing of NGD Systems. >> Pleasure to see ya again, Peter. >> So guys, let me set the stage and let's get in to this 'cause actually this is kind of interesting. If you think about a lot of the innovations happening in the marketplace right now and the tech industry right now, we're talking about greater densities of data, more advanced algorithms being applied against that data, greater parallelism in the compute, more I/O aggregate required but the presumption behind all this is that we're going to be flying data all over the organization and the other presumption is things like energy consumption, unlimited, who cares but we know the reality is something different. There is an intersection amongst all of that that seems to need a dressing. Nader, take us through that. >> And that's exactly what we are addressing. So we are bringing other than the energy efficiency in a large capacity storage. Instead of moving the data to do any computation on the data, we bring the computation inside the storage to do the computation locally in a distributive fashion as you have number of storage devices in a server without the need of moving the data and save energy. The main area is that it's a focus point for a lot of mega-data centers is the energy density being what per terabyte or what per terabyte per square inch and that's exactly with our technology we are addressing to have a more sufficient energy efficient computational storage into the market. >> So let me build on that a little bit. See if I got it. So that your traditional large system, you have an enormous amount of data, you have a bunch of logic dedicated to know where the data is. Find it. Once it finds it, it brings it, presents it to a CPU, a server somewhere, who then takes some degree of responsibility for formatting it and then presenting it to the application. And you're bringing that out and putting it down closer to the storage itself and so instead of having this enormous bus that's humming along at unbelievable speeds and maybe 35, 40 watts off the card, you're doing it for-- >> A fraction of that so we'll be able to do that with eight terabytes in an eight watt envelope. Or 64 terabytes to be done in a 15 watt envelope. That's the part that doesn't exist today and being able to not only do the storage part of it but bringing the application seamlessly without changing the application, bringing it down and acting on the data and just setting the subset of the results to the upper levels of the application is what market is looking for that doesn't exist today. >> So you're using, you're still using industry standard memory. You're still using industry standard form factors. What is the special sauce inside this that makes it faster and cheaper from a power standpoint? >> Very good question. So we are using a standard PCIe and MPE protocol for the drive. So we, our technology, the algorithm and the controller technology can have this large capacity of the NAND and we are flash agnostic so it could be any NAND, in fact, later on it could be any MBM. It doesn't need to be the NAND and having additional resources through the standard of the TCP/IP we can bring application down. 
We're not making any changes to the application. >> So we're taking a new approach to thinking about how I/O gets handled at the storage device. It's got to create some use cases, Scott. Tell us about some of the use cases. >> From a use case perspective, you can think about it in simple terms by thinking about traffic jams. If you have a traffic jam on the freeway when one lane of traffic gets stuck, well, if the cars are able to actually relocate and do the movements on their own, you eliminate the traffic bandwidth problem. What we can do is we allow you to say, okay, I'm going to go look for a picture in a data set. Instead of having the CPU ask for all the different pictures, do the comparison in memory, tie up CPU resources, you just tell the drive, go find this picture. It goes, finds this comparison picture, tells you all about the picture, and sends just that little tidbit back to you. So if you're collecting hundreds of thousands of Facebook photos today, you can analyze those and tell every person that's looking for a different photo what their photo is without having to use massive I/O bandwidth. >> So traditional high-performance computing? >> Yes. >> IoT? >> IoT. All of the AI where you're looking for things, where you're trying to have artificial intelligence be smarter, you have to throw CPUs and GPUs at it. Start throwing more storage at it, 'cause you have to store all the data you're generating. Why not let the storage do some of that work? You can offload some of it from CPUs, GPUs and you can scale more effectively. >> So my colleague, David Floyer, has been talking about how, for example, MapReduce and Hadoop could be accelerated pretty dramatically. But it's got to be more than just MapReduce? How are you supporting a range of applications? >> A use case totally separate, different from these use cases, is content delivery, video delivery on the last mile or last hundred feet. So today, everybody's recording at home on their DVRs. What if, instead of 10,000 DVRs in 10,000 homes, it's sitting in a central place and it has hundreds of thousands of videos, but everybody points to it? The new challenge with that is the security portion. With our technology, we can do the encryption on the way out and authentication right at the storage, so the concurrent users can be protected from each other. And that technology doesn't exist. >> Let me think about the business model implications now for a second. So I might enter, as a private citizen, into a deal with Xfinity for example, in which I agree to be the point of presence for my entire neighborhood. Is it that kind of thing we're talking about? >> Exactly. So that's the new edge delivery, but with a higher security that doesn't exist today, because it's a major challenge for everybody. >> Interesting. >> For the security and authentication. Even within the same household there could be multiple users that need to be protected from each other. >> Very interesting. So Scott, you've got a fair amount of background in the systems universe. How is this technology going to change the way we think about systems? >> Yeah, so the beauty of this is we all thought NVMe was going to be the savior of the world. It comes in with flash storage. It gives you the unlimited PCIe bandwidth bus. The problem is we've already saturated that. We've got devices where a box can hold 24 NVMe drives, but you can only operate three or four of them at a time even with 16 lanes of PCIe Gen 3. We're going to PCIe Gen 4.
We've still got a bottleneck, because all of the I/O still has to go from the drive to the host and back to the drive and be managed, because you can't run anything on traditional storage other than just data placement. Now, the drives are smart. They're relocating the data on it, protecting it, whatever else, but they're still not doing what can really be done with them. Adding this layer of computational storage with devices like ours, all the host has to do is go ask the question and the storage can go do its thing. So if I've got 24 drives, I can go ask 24 questions and I still have bandwidth to actually write data into that system or read other data out of that system at a random access pattern. >> So that brings us back to the question I asked earlier. Namely, to make this more general purpose, there's got to be a pretty robust set of software capabilities or libraries. How is that being handled so it can be made more general purpose and folks aren't building deep into architecture-specific controller elements? How is that happening? How does it work? >> So one of the biggest tricks whenever you bring something kind of new and innovative that actually solves a problem that does exist is how to get people to adopt it, right? 'Cause I want ease. I want to do simple. It took forever to get people to adopt SSDs, and now we're telling them that we're giving them smart SSDs. What we're saying, and what we're able to accomplish with what we're doing on the library front, is a very light touch. We're using the NVMe protocol. We're tunneling through it with a host agent, which is a very small modification at the host, and it now communicates to all the different drives. So simplifying that crossover of information is really what's important, to your exact statement, and we do that through a C library, and it's very modifiable to various different workloads. It's not tied to each workload having to be independently written. >> So enterprises of all sorts are actually trying to drive applications that are more data oriented, more computationally oriented around that data. Get the computation closer. You guys are helping. From the new systems designs, we still think NVMe-oF is going to be very, very important, but this could complement it. >> Exactly. >> Especially where I/O and the energy of that bus become a crucial issue. What's on the horizon? >> Deploying this and driving the energy efficiency. It continues to be the biggest issue no matter what we do. There is not enough energy in the world, with the amount of storage and server and compute that's being deployed, and that's another area that we are focusing on and continue to focus on, to have the most optimal energy efficiency in the smallest footprint. >> So I got one more question. NGD Systems is not a household name. Where are you guys from? >> So we started the company about five years ago. Before that, myself as well as my two co-founders, as well as a team of engineers, we used to be at a company called Western Digital for a couple of years doing enterprise-class SSDs. Before that, I started in this field of SSDs in 2003. I started a product, a business line, for a company called STEC, where we created industrial SSDs that later on became enterprise-class SSDs. We became known for enterprise-class SSDs in the industry.
That's the heritage of the last 15, 17 years, with many years of SSD development, but for this computational storage we've already done an optimized SSD for a category that doesn't exist today and added a computational storage capability on top of it. >> Scott, last word? >> Yeah. Just from that perspective, we really didn't get into a lot of detail on it, but the capability of reducing the amount of compute you need in a server, whether it be a CPU, GPU or otherwise, and actually being able to use intelligent storage to drive the bandwidth growth, the NVMe fabric, or just the per-box density, is just something that nobody's really taken a significant look at in the past. This is a definite solution to move it forward. >> So I'm going to turn that around and say software developers always find a way to fill up the space. So on the one hand you can look at it as maybe you have low-cost CPUs, but even if you have the same-cost CPUs you can do so much more, 'cause you can move so much more work out closer to the data. >> Correct. >> All right. NGD Systems. Very, very interesting conversation. Thanks so much for coming and being on theCUBE. Once again, this is Peter Burris with a Cube Conversation. We've been speaking with NGD Systems. Thanks a lot for watching.
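To make the offload pattern Scott and Nader describe a bit more concrete, here is a minimal, purely illustrative sketch. It is not NGD's host agent or C library (those are only named, not shown, in the conversation); it simply simulates, in Python, the difference between streaming every record across the bus for host-side filtering and pushing the comparison down to the drive so that only matching results come back.

```python
# Purely illustrative simulation: NGD's actual interface is an NVMe host agent
# plus a C library, neither of which is shown here. This sketch only contrasts
# the two I/O patterns from the conversation: stream every record to the host
# and filter there, versus push the predicate down and return only the matches.

class SimulatedComputationalDrive:
    """Stands in for one storage device holding many records."""

    def __init__(self, records):
        self.records = records      # the data "on the drive"
        self.bytes_moved = 0        # rough proxy for PCIe/NVMe bus traffic

    def read_all(self):
        # Conventional model: every record crosses the bus to the host CPU.
        for rec in self.records:
            self.bytes_moved += len(rec["blob"])
            yield rec

    def query(self, predicate):
        # Computational-storage model: the comparison runs on the drive and
        # only the matching subset is returned to the host.
        for rec in self.records:
            if predicate(rec):
                self.bytes_moved += len(rec["blob"])
                yield rec


def demo():
    records = [{"id": i,
                "tag": "cat" if i % 100 == 0 else "other",
                "blob": b"x" * 4096}           # 4 KiB payload per record
               for i in range(10_000)]

    host_side = SimulatedComputationalDrive(records)
    hits = [r for r in host_side.read_all() if r["tag"] == "cat"]
    print("host-side filter:", len(hits), "matches,",
          host_side.bytes_moved, "bytes over the bus")

    in_drive = SimulatedComputationalDrive(records)
    hits = list(in_drive.query(lambda r: r["tag"] == "cat"))
    print("in-drive filter: ", len(hits), "matches,",
          in_drive.bytes_moved, "bytes over the bus")


if __name__ == "__main__":
    demo()
```

On this toy data set the in-drive query moves roughly one percent of the bytes that the host-side filter does, which is the same bandwidth-saving intuition behind asking 24 drives 24 questions rather than pulling everything back to the host.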

Published Date : Mar 17 2018


Matt Watts, NetApp & Kenneth Cukier, The Economist | NetApp Insight Berlin 2017


 

>> Narrator: Live from Berlin, Germany, it's theCUBE. Covering NetApp Insight 2017. Brought to you by NetApp. (techno music) Welcome back to theCUBE's live coverage of NetApp Insight here in Berlin, Germany. I'm your host, Rebecca Knight, along with my cohost Peter Burris. We have two guests for this segment. We have Matt Watts, he is the director and data strategist and director of technology at NetApp, and Kenneth Cukier, a senior editor at The Economist, and author of the best-selling book Big Data, and author of a soon to be best-selling book on AI. Welcome. Thank you. Thank you much for coming on the show. Pleasure to be here. So, this is the, we keep hearing NetApp saying this is the day of the data visionary. I'd love to hear both of you talk about what a data visionary is, and why companies, why this is a necessary role in today's companies. Okay, so I think if you look at the generations that we've been through in the late nineties, early 2000's, it was all about infrastructure with a little bit of application and some data associated to it. And then as we kind of rolled forward to the next decade the infrastructure discussion became less. It became more about the applications and increasingly more about the data. And if we look at the current decade that we're in right now, the infrastructure discussions have become less, and less, and less. We're still talking about applications, but the focus is on data. And what we haven't seen so much of during that time is the roles changing. We still have a lot of infrastructure people doing infrastructure roles, a lot of application people doing application roles. But the real value in this explosion of data that we're seeing is in the data. And it's time now that companies really look to put data visionaries, people like that in place to understand how do we exploit it, how do we use it, what should we gather, what could we do with the information that we do gather. And so I think the timing is just right now for people to be really considering that. Yeah, I would build on what Matt just said. That, functionally in the business and the enterprise we have the user of data, and we have the professional who collected the data. And sometimes we had a statistician who would analyze it. But pass it along to the user who is an executive, who is an MBA, who is the person who thinks with data and is going to present it to the board or to make a decision based on it. But that person isn't a specialist on data. That person probably doesn't, maybe doesn't even know math. And the person is thinking about the broader issues related to the company. The strategic imperatives. Maybe he speaks some languages, maybe he's a very good salesperson. There's no one in the middle, at least up until now, who can actually play that role of taking the data from the level of the bits and the bytes and in the weeds and the level of the infrastructure, and teasing out the value, and then translating it into the business strategy that can actually move the company along. Now, sometimes those people are going to actually move up the hierarchy themselves and become the executive. But they need not. Right now, there's so much data that's untapped you can still have this function of a person who bridges the world of being in the weeds with the infrastructure and with the data itself, and the larger broader executives suite that need to actually use that data. We've never had that function before, but we need to have it now. So, let me test you guys. 
Test something in you guys. So what I like to say is, we're at the middle of a significant break in the history of computing. The first 50 years or so it was known process, unknown technology. And so we threw all our time and attention at understanding the technology. >> Matt: Yeah. We knew accounting, we knew HR, we even knew supply-chain, because case law allowed us to decide where a title was when. [Matt] Yep. But today, we're unknown process, known technology. It's going to look like the cloud. Now, the details are always got to be worked out, but increasingly we are, we don't know the process. And so we're on a road map of discovery that is provided by data. Do you guys agree with that? So I would agree, but I'd make a nuance which is I think that's a very nice way of conceptualizing, and I don't disagree. But I would actually say that at the frontier the technology is still unknown as well. The algorithms are changing, the use cases, which you're pointing out, the processes are still, are now unknown, and I think that's a really important way to think about it, because suddenly a lot of possibility opens up when you admit that the processes are unknown because it's not going to look like the way it looked in the past. But I think for most people the technology's unknown because the frontier is changing so quickly. What we're doing with image recognition and voice recognition today is so different than it was just three years ago. Deep learning and reinforcement learning. Well it's going to require armies of people to understand that. Well, tell me about it. This is the full-- Is it? For the most, yes it's a full employment act for data scientists today, and I don't see that changing for a generation. So, everyone says oh what are we going to teach our kids? Well teach them math, teach them stats, teach them some coding. There's going to be a huge need. All you have to do is look at the society. Look at the world and think about what share of it is actually done well, optimized for outcomes that we all agree with. I would say it's probably between, it's in single percents. Probably between 1% and 5% of the world is optimized. One small example: medical science. We collect a lot of data in medicine. Do we use it? No. It's the biggest scandal going on in the world. If patients and citizens really understood the degree to which medical science is still trial and error based on the gumption of the human mind of a doctor and a nurse rather than the data that they actually already collect but don't reuse. There would be Congressional hearings everyday. People, there would be revolutions in the street because, here it is the duty of care of medical practitioners is simply not being upheld. Yeah, I'd take exception to that. Just, not to spend too much time on this, but at the end of the day, the fundamental role of the doctor is to reduce the uncertainty and the fear and the consequences of the patient. >> Kenneth: By any means necessary and they are not doing that. Hold on. You're absolutely right that the process of diagnosing and the process of treatment from a technical standpoint would be better. But there's still the human aspect of actually taking care of somebody. Yeah, I think that's true, and think there is something of the hand of the healer, but I think we're practicing a form of medicine that looks closer to black magic than it does today to science. Bring me the data scientist. >> Peter: Alright. 
And I think an interesting kind of parallel to that is when you jump on a plane, how often do you think the pilot actually lands that plane? He doesn't. No. Thank you. So, you still need somebody there. Yeah. But still need somebody as the oversight, as that kind of to make a judgment on. So I'm going to unify your story, my father was a cardiologist who was also a flight surgeon in the Air Force in the U.S., and was one of the few people that was empowered by the airline pilots association to determine whether or not someone was fit to fly. >> Matt: Right. And so my dad used to say that he is more worried about the health of a bus driver than he is of an airline pilot. That's great. So, in other words we've been gah-zumped by someone who's father was both a doctor and a pilot. You can't do better than that. So it turns out that we do want Sully on the Hudson, when things go awry. But in most cases I think we need this blend of the data on one side and the human on the other. The idea that the data just because we're going to go in the world of artificial intelligence machine learning is going to mean jobs will be eradicated left and right. I think that's a simplification. I think that the nuance that's much more real is that we're going to live in a hybrid world in which we're going to have human beings using data in much more impressive ways than they've ever done it before. So, talk about that. I mean I think you have made this compelling case that we have this huge need for data and this explosion of data plus the human judgment that is needed to either diagnose an illness or whether or not someone is fit to fly a plane. So then where are we going in terms of this data visionary and in terms of say more of a need for AI? Yeah. Well if you take a look at medicine, what we would have is, the diagnosis would probably be done say for a pathology exam by the algorithm. But then, the health care coach, the doctor will intervene and will have to both interpret this for, first of what it means, translate it to the patient, and then discuss with the patient the trade-offs in terms of their lifestyle choices. For some people, surgery is the right answer. For others, you might not want to do that. And, it's always different with all of the patients in terms of their age, in terms of whether they have children or not, whether they want the potential of complications. It's never so obvious. Just as we do that, or we will do that in medicine, we're going to do that in business as well. Because we're going to take data that we never had about decisions should we go into this market or that market. Should we take a risk and gamble with this product a little bit further, even though we're not having a lot of sales because the profit margins are so good on it. There's no algorithm that can tell you that. And in fact you really want the intellectual ambition and the thirst for risk taking of the human being that defies the data with an instinct that I think it's the right thing to do. And even if we're going to have failures with that, and we will, we'll have out-performance. And that's what we want as well. Because society advances by individual passions, not by whatever the spreadsheet says. Okay. Well there is this issue of agency right? So at the end of the day a human being can get fired, a machine cannot. A machine, in the U.S. anyway, software is covered under the legal strictures of copywriting. Which means it's a speech act. 
So, what do you do in circumstances where you need to point a finger at something for making a stupid mistake. You keep coming back to the human being. So there is going to be an interesting interplay over the next few years of how this is going to play out. So how is this working, or what's the impact on NetApp as you work with your customers on this stuff? So I think you've got the AI, ML, that's kind of one kind of discussion. And that can lead you into all sorts of rat holes or other discussions around well how do we make decisions, how do we trust it to make decisions, there's a whole aspect that you have to discuss around that. I think if you just bring it back to businesses in general, all the businesses that we look at are looking at new ways of creating new opportunities, new business models, and they're all collecting data. I mean we know the story about General Electric. Used to sell jet engines and now it's much more about what can we do with the data that we collect from the jet engines. So that's finding a new business model. And then you vote with a human role in that as well, is well is there a business model there? We can gather all of this information. We can collect it, we can refine it, we can sort it, but is there actually a new business model there? And I think it's those kind of things that are inspiring us as a company to say well we could uncover something incredible here. If we could unlock that data, we could make sure it's where it needs to be when it needs to be there. You have the resources to bring to bed to be able to extract value from it, you might find a new business model. And I think that's the aspect that I think is of real interest to us going forward, and kind of inspires a lot of what we're doing. Great. Kenneth, Matt, thank you so much for coming on the show. It was a really fun conversation. Thank you. Thank you for having us. We will have more from NetApp Insight just after this. (techno music)

Published Date : Nov 14 2017


Marc Altshuller, IBM - IBM Fast Track Your Data 2017


 

>> Announcer: Live from Munich, Germany; it's The Cube! Covering IBM Fast Track Your Data, brought to you by IBM. (techno music) >> Welcome back to Munich, Germany everybody. This is The Cube, the leader in live tech coverage. We're covering Fast Track Your Data, IBM's signature moment here in Munich. Big themes around GDPR, data science, data science being a team sport. I'm Dave Vellante, I'm here with my co-host Jim Kobielus. Marc Altshuller is here, he's the general manager of IBM Business Analytics. Good to see you again Marc. >> Hey, always great to see you. Welcome, it's our first time together. >> Okay, so we heard your keynote, you were talking about the caveats of correlations, you were talking about rear-view-mirror analysis versus sort of looking forward, something that I've been sort of harping on for years. You know, I mean I remember the early days of "decision support" and the promises of 360 degree views of the customer, and predictive analytics, and I've always said it, "DSS really never lived up to that," y'know? "Will big data live up to that?" and we're kind of living that now, but what's your take on where we're at in this whole data thing? >> I mean look, different customers are at different ends of the spectrum, but people are really getting value. They're becoming these data-driven businesses. I like what Rob Thomas talked about on stage, right. Visiting companies a few years ago where they'd say "I'm not a technology company." Now, how can you possibly say you're not a technology company, regardless of the industry? Your competitors will beat you if they are using data and you're not. >> Yeah, and everybody talks about digital transformation. And you hear that a lot at conferences, you guys haven't been pounding that theme, other than, y'know, below the surface. And to us, digital means data, right? And if you're going to transform digitally, then it's all about the data, you mentioned data driven. What are you seeing, I mean most organizations in our view aren't "data driven," they're sort of reactive. Their CEOs maybe want to be data driven, maybe there are board conversations as to how to get there, but they're mostly focused on "Alright, how do we keep the lights on, how do we meet our revenue targets, how do we grow a little bit, and then whatever money we have leftover we'll try to, y'know, transform." What are you seeing? Is that changing? >> I would say, look, I can give you an example right from my own space, the software space. For years we would have product managers, offering managers, maybe interviewing clients, on gut feel deciding what features to put at what priority within the next release. Now we have all these products instrumented behind the scenes with data, so we can literally see the friction points, the exit points, how frequently they come back, how long their sessions are. We can even see them effectively graduating within the system, where they continue to learn, and where they had shorter sessions, they're now going to longer sessions. That's really, really powerful for us in terms of trying to maximize our outcome from a software perspective. So that's where we kind of, like, drink our own champagne. >> I got to ask you, so in around 2003, 2004 HBR had a front-page, y'know, cover article on how "gut feel beats data and analytics." Now this is 2003, 2004; software development, as you know, has a lot of art involved, so my question is how are you doing? Is the data informing you in ways that are nonintuitive?
And is it driving y'know, business outcomes for IBM? >> It is, look you see, I'll see like GM's of sports teams talking about maybe pushing back a little bit on the data. It's not all data driven, there's a little bit of gut, like is the guy going to, is he a checker in hockey or whatever that happens to be, and I would say, when you actually look at what's going on within baseball, and you look at the data, when you watch baseball growing up, the commentator might say something along the lines of "the pitcher has their stuff" right? "Does the pitcher have their stuff or not?". Now they literally know, the release point based on elevation, IOT within the state of the release point, the spin velocity of the ball, where they mathematically know "does the pitcher have their stuff?", are they hitting their locations? So all that stuff has all become data driven, and if you don't want to embrace it, you get beat, right? I mean even in baseball, I remember talking to one of these Moneyball type guys where I said like "Doesn't weather impact baseball?" And they're like "Yeah, we've looked at that, it absolutely impacts it." 'Cause you always hear of football and remember the old Peyton Manning thing? Don't play Peyton Manning in cold weather, don't bet on Peyton Manning in cold weather. So "I'm like isn't the same in baseball?", And he's like, absolutely it's the same in baseball, players preform different based on the climate. Do any mangers change their lineup based on that? Never. >> Speaking of HBR, I mean in the last few years there was also an article or two by Michael Shrage about the whole notion of real world experimentation and e-commerce, driven by data, y'know in line, to an operational process, like tuning the design iteratively of say, a shopping cart within your e-commerce environment, based on the stats on what work and what does not work. So, in many ways I mean AB testing, real world experimentation thrives on data science. Do you see AB testing becoming a standard business practice everywhere, or only in particular industries like you know, like the Wal-marts of the world? >> Yeah, look so, AB testing, multi-variant testing, they're pervasive, pretty much anyone who has a website ought to be doing this if they're not doing it already. Maybe some startups aren't quite into it. They prioritized in different spots, but mainstream fortune 500 companies are doing this, the tools have made it really easy. I would say, maybe the Achilles heel or the next frontier is, that is effectively saying, kind of creating one pattern of user, putting everyone in a single bucket, right? "Does this button perform better "when it's orange or when it's green? "Oh, it performs better orange." Really, does it perform well for every segmentation orange better than green or is it just a certain segmentation? So that next kind of frontier is going to be, how do we segment it, know a little bit more about you when you're coming in so that AB testing starts to build these kind of sub-profiles, sub-segmentation. >> Micro-segmentation, and of course, the end extreme of that dynamic is one-to-one personalization of experiences and engagements based on knowing 360 degrees about you and what makes you tick as well, so yeah. >> Altshuller: And add onto that context, right? 
You have your business, let's even keep it really simple, right, you've got your business life, you've got your social life, and your profile of what you're looking for when you're shopping your social life or something is very different than when you're shopping your business life. We have to personalize it to the idea where, I don't want to say schizophrenic but you do have multiple personalities from an online perspective, right? From a digital perspective it all depends in the moment, what is it that you're actually doing, right? And what are you, who are you acting for? >> Marc, I want to ask you, you're homies, your peeps are the business people. >> Yes. >> That's where you spend your time. I'm interested in the relationship between those business people and the data science teams. They're all, we all hear about how data science and unicorns are hard to find, difficult to get the skills, citizen data science is sort of a nirvana. But, how are you seeing businesses bring the domain expertise of the business and blending that with data science? >> So, they do it, I have some cautionary tales that I've experienced in terms of how they're doing it. They feel like, let's just assign the subject matter expert, they'll work with the data scientist, they'll give them context as they're doing their project, but unfortunately what I've seen time and time again, is that subject matter expert right out of the gate brings a tremendous amount of bias based on the types of analysis they've done in the past. >> Vellante: That's not how we do it here. >> Yeah, exactly, like "did you test this?". "Oh yeah, there's no correlation there, we've tried it." Well, just because there's no correlation, as I talked about onstage, doesn't mean it's not part of the pattern in terms of, like you don't want someone in there right off the bat dismissing things. So I always coach, when the business user subject matter experts become involved early, they have to be tremendously open-minded and not all of them can be. I like bringing them in later, because that data scientist, they are unbiased, like they see this data set, it doesn't mean anything to them, they're just numerically telling you what the data set says. Now the business user can then add some context, maybe they grabbed a field that really is an irrelevant field and they can give them that context afterwards. But we just don't want them shutting down, kind of roots, too early in the process. >> You know, we've been talking for a couple of years now within our community about this digital matrix, this digital fabric that's emerged and you're seeing these horizontal layers of technology, whether it's cloud or, you know, security, you all OAuth in with LinkedIn, Facebook, and Twitter. There's a data fabric that's emerging and you're seeing all these new business models, whether it's Uber or Airbnb or WAZE, et cetera, and then you see this blockbuster announcement last week, Amazon buying Whole Foods. And it's just fascinating to us and it's all about the data that a company like an Amazon can be a content company, could be a retail company, now it's becoming a grocer, you see Apple getting into financial services. So, you're seeing industries being able to traverse or companies being able traverse industries and it's all because of the data, so these conversations absolutely are going on in boardrooms. 
It's all about the digital transformation, the digital disruption, so how do you see, you know, your clients trying to take advantage of that or defend against that? >> Yeah look, I mean, you have to be proactive. You have to be willing to disrupt yourself in all these tech industries, it's just moving too quickly. I read a similar story, I think yesterday, around potentially Blockchain disrupting ridesharing programs, right? Why do you need the intermediary if you have this open ledger and these secure transactions you can do back and forth with this ecosystem. So there's another interesting disruption. Now do the ridesharing guys proactively get into that and promote it, or do they almost in slow motion, get replaced by that at some point. So yeah I think it's a come-on on all of us, like you don't remain a market lead, every market leader gets destructive at some point, the key is, do you disrupt yourself and you remain the market leader, or do you let someone else disrupt you. And if you get disrupted, how quickly can you recover. >> Well you know, you talked to banking executives and they're all talking Blockchain. Blockchain is the future, Bitcoin was designed to disintermediate the bank, so they're many, many banks are embracing it and so it comes back to the data. So my question I have, the discussion I'd like to have is how organizations are valuing data. You can't put data as a value on, y'know an asset on your balance sheet. The accounting industry standards don't exist. They probably won't for decades. So how are companies, y'know crocking data value, is it limiting their ability to move toward a data driven economy, is it a limiting factor that they don't have a good way to value their data, and understand how to monetize it. >> So I have heard of cases where companies have but data on their balance sheet, it's not mainstream at this point, but I mean you've seen it sometimes, and even some bankruptcy proceedings, their industry that's being in a bankruptcy protection where they say "Hey, but this data asset "is really where the value is." >> Vellante: And it's certainly implicit in valuations. >> Correct, I mean you see bios all the time based on the actual data sets, so yeah that data set, they definitely treasure it, and they realize that a lot of their answers are within that data set. And they also I think, understand that they're is a lot of peeling the onion that goes on when you're starting to work through that data, right? You have your initial thoughts, then you correct something based on what the data told you to do, and then the new data comes in based on what your new experience is, and then all of a sudden you have, you see what your next friction point is. You continue to knock down these things, so it is also very iterative working with that data asset. But yeah, these companies are seeing it's very value when they collect the data, but the other thing is the signal of what's driving your business may not be in your data, more and more often it may be in market data that's out there. So you think about social media data, you think about weather data and being able to go and grab that information. I remember watching the show Millions, where they talk about the hedge fund guys running satellites over like Wal-mart parking lots to try to predict the redux for the quarter, right? Like, you're collecting all this data but it's out there. 
>> Or maybe the value is not so much in the data itself, but in what it enables you to develop as a derivative asset, meaning a statistical predictive model or machine learning model that shows the patterns that you can then drive into recommendation engines and your target marketing, y'know, applications. So do you see any clients doing their valuation of data on those derivative assets? >> Altshuller: Yeah. >> In lieu of... >> In these new business models I see within corporations that have been around for decades, it's actual data offers that they make to maybe their ecosystem, their channel. "Here's data we have, here's how you interpret it, we'll continue to collect it, we'll continue to curate it, we'll make it available." And this is really what's driving your business. So yeah, data assets become something that companies are figuring out how to monetize. >> Of course those derived assets will decay if those models, for example machine learning models, are not trained with fresh, y'know, data from the sources. >> And if we're not testing for new variables too, right? Like if the variable was never in the model, you still have to have this discovery process that's always going on to see what new variables might be out there, what new data set, right. Like if a new IoT sensor in the baseball stadium becomes available, maybe that one I talked about with elevation of the pitcher, like until you have that you can't use it, but once you have it you have to figure out how to use it. >> Alright, let's bring it back to your business. What can I buy from you, what do you sell, what are your products? >> Yeah, so the offerings in Business Analytics are Cognos Analytics, Watson Analytics, Watson Analytics for Social Media, and Planning Analytics. Cognos is the "what," what's going on in my business. Watson Analytics is the "why," Planning Analytics is "what do we think is going to happen?" We're starting to do more and more of a smarter "what do we think's going to happen" based on these predictive models instead of just guessing what's going to happen. And then social media really gets into this idea of trying to find the signal, the sentiment. Not just around your own brand, it could be a competitor recall, and what now the intent is of that customer, are they going to now start buying other products, or are they going to stick with the recall company. >> Vellante: Okay, so the starting point of your business being Cognos, one of the largest acquisitions ever in IBM's history, and of course it was all about CFOs and reporting, and Sarbanes-Oxley was a huge boon to that business, but as I was saying before, it never really got us to that predictive era. So you're layering those predictive pieces on top. >> That's what you saw on stage. >> Yes, that's right, that's what we saw on stage. And then, are you selling to the same constituencies? Or how is the constituency that you sell to changing? >> Yeah, no, it's actually the same. Well, Cognos BI historically was selling to IT, and Cognos Analytics is selling to the business. But if we take that leap forward, we're now in the market, we have been for a few years now, with Cognos Analytics. Yeah, that capability we showed onstage where we talked about not only what's going on, why it's going on, what will happen next, and what we ought to do about it. We're selling that capability to them, the business user; the dashboard becomes like a piece of glass to them.
And that glass is able to call services that they don't have to be proficient in, they just want to be able to use them. It calls the weather service, it calls the optimization service, it calls the machine learning data sign service, and it actually gives them information that's forward looking and highly accurate, so they love it, 'cause it's cool they haven't had anything like that before. >> Vellante: Alright Marc Altshuller, thanks very much for coming back on The Cube, it's great to see you. >> Thank you. >> "You can't measure heart" as we say in boston, but you better start measuring. Alright keep right there everybody, Jim and I will right back after this short break. This is The Cube, we're live from Fast Track Your Data in Munich. We'll be right back. (upbeat jingle) (thoughtful music)
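The segmentation caveat Altshuller raises about A/B testing, that a button color which wins overall can still lose for a particular segment, is easy to see in a small sketch. The following Python example is purely illustrative and assumes nothing about IBM's tooling; the events, the segment names, and the significance threshold are all invented for this demonstration.

```python
# Illustrative only: a tiny segmented A/B analysis. A variant that ties or
# "wins" overall can still lose badly for one segment, which is the
# sub-segmentation point made in the interview above.

from collections import defaultdict
from math import sqrt

# (variant, segment, converted) tuples standing in for click/checkout events.
events = [
    ("orange", "mobile", 1), ("orange", "mobile", 0), ("orange", "mobile", 1),
    ("green",  "mobile", 0), ("green",  "mobile", 0), ("green",  "mobile", 1),
    ("orange", "desktop", 0), ("orange", "desktop", 0), ("orange", "desktop", 1),
    ("green",  "desktop", 1), ("green",  "desktop", 1), ("green",  "desktop", 0),
] * 200  # repeated only to get a sample size worth testing


def rates(rows):
    """Conversion rate and sample size per variant."""
    totals, wins = defaultdict(int), defaultdict(int)
    for variant, _segment, converted in rows:
        totals[variant] += 1
        wins[variant] += converted
    return {v: (wins[v] / totals[v], totals[v]) for v in totals}


def z_score(p1, n1, p2, n2):
    """Two-proportion z statistic; |z| > 1.96 is roughly the 5% level."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se else 0.0


print("overall:", rates(events))
for segment in sorted({s for _, s, _ in events}):
    seg = rates([e for e in events if e[1] == segment])
    (p1, n1), (p2, n2) = seg["orange"], seg["green"]
    print(segment, seg, "z =", round(z_score(p1, n1, p2, n2), 2))
```

With data shaped like this, orange and green tie overall, while orange wins on mobile and loses on desktop, which is exactly the kind of sub-profile that the "next frontier" of testing described above is meant to surface.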

Published Date : Jun 24 2017


Steve Robinson, IBM - #IBMInterConnect 2016 - #theCUBE


 

>> Announcer: Live from Las Vegas, extracting the signal from the noise, it's theCUBE, covering InterConnect 2016. Brought to you by IBM. Now your hosts, John Furrier and Dave Vellante. >> Okay, welcome back, everyone. We are here live in Las Vegas for exclusive coverage of IBM InterConnect 2016. This is SiliconANGLE's theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier with my co-host Dave Vellante. Our next guest, Steve Robinson, is the GM of Client Technical Engagement; before that, in the cloud doing all the Bluemix, and now he has the army of technical soldiers out there doing all the action, because there's so much robust demand for horizontal scale and for solutions with vertically targeted, prepackaged application development, horizontal, vertical, you name it, big data. Welcome back. >> Good to see you, John. Thanks. Good to be with you again. >> Always, like, great to have you on, because you've got a great perspective. You understand the executive viewpoint, the 20-mile stare in the industry, but also you've got the nuts and bolts under the hood. >> That's right. >> A lot of action happening under the hood. So let's get at that right away. Bluemix is hot right now. Now it's about the developers. What's going on under the hood right now that customers are caring about? >> I always love the Cube. You guys were like one of the first guys talking to us two years ago when we had just launched Bluemix on stage. We walked off, got in front of the cameras here, and it was great. Over the past year, it's been, it's been outstanding. We're bringing about 20,000 folks to Bluemix right now on public, we came out with dedicated, and then what people had really been wanting was local Bluemix as well. So we finally have a full hybrid chain that goes from behind the firewall, to a single-client dedicated cloud, all the way up to the public as well. So we've been building that out with services as well, so we have over 106 services on top of it. You'll see things like Watson, which is unique, our dashDB analytics, which is unique, Internet of Things coming in as well. So it's been a great year of building it out and getting more clients on top of it. >> It's like really trying to change the airplane engine at 30,000 feet. Or, in your case, you guys were taking off from the runway. How has that been? It's been growing pains, of course, and learning. What's going on? What have you learned? Give us the update. >> Changing the engine while the plane is flying, and we've used that analogy quite a bit in the labs, and we have to show relevance in this market. You know, this market is probably the fastest-paced technical market I think I've ever been in, and it's moving at such a rapid pace. We had to ship a lot of technology out last year as well; we have every new middleware group in IBM putting services on top of Bluemix, so let's get it out there, let's get it out fast. Now, of course, this year we're gonna harden it up a little bit as well. So more architectures, more points of view, a better look at how this stuff works together, hardening up our container strategy, pulling it all the way back to the virtual machine. So both continue to expand it out, but let's make it enterprise grade at the same time. >> And also, some differentiation with Watson has been a big play around cognitive. Yeah, it really is different, because right now, with the market the way it is, core monetization is on everyone's mind, from startups to enterprises.
If you're in business, you want you're top line if you're starting to get monetization. So there's a little bit of IBM in here for people to take in. Well, >> you know, if you look at Watson, you know, when we first started with it, you know, it was this very large big chunk of software that she had to buy. And and we work with Mike Rodents Team toe. Can we chop it up into a set of service is Let's really make this a set of AP eyes, and we started noticing, you know, you saw in Main stage the other day out from Otis. You know, this was a pure startup. He's started picking up the social semantics. Let's pick up the you know, some of the works to text etcetera, conversions, and all of a sudden they're starting to add it in. They said they would have never had access to this technology before way Have that a P I said. Not growing up to 28 we announced a couple cool things this morning. We even showed how would improve your dating life. Probably need some of that with my wife is well to translate between the sexes there, but what people are doing with it now, it's kind of like blowing people. His mind is far beyond what the initial exception waas. >> So your team of your niche is when they get right. It's a large team. It's, but it's a new initiative. New Justice unit, New role for you Talk about that >> way. Kinda had >> a couple pockets of this, but way clearly found that getting clients to the cloud is both a technology challenge as well as a cultural challenge as well. So he brought together some technical experts to kind of help through that entire life chain help up front. You know, many clients are trying to figure out what their overall cloud strategy is, where they truly today and where do they want to get to be? And how can we help him with a road map? That kind of helps them through the transition. Many accounts are very comfortable with the only wanting to be private and only glimpsing forward Thio Public Cloud Helping us bridge across that as well. Then we have the lab service's teams and these air the rial ninjas, the Navy seals. They go as low as you can go and what they're helping. A good way. Yeah, that's good. That's good. That's why they're helping with this very specific technical issue. Technical deployments. A lot of our dedicated local environment. These guys, they're they're really helping it wire in a cz Well, and then we have the garages, you know, we're up Thio. Five of those were going. We announced four new Blockchain garages as well. And this is where firms air coming in to kind of explore do the innovative type project as well. So I think all the way from the initial inception through rolling it out into production, having that team to be able to support him across the >> board. And so this capability existed in IBM previously, But it existed in a sort of bespoke fashion that coordinated >> couple pockets here and there. We always have supports. We had various pockets a lap service's. But we won't really wanna have the capability of seeing that client all the way through their journey, bringing it all under me. We now can easily pass the baton, Handoff says. We need to have that consistent skill there with the clients all the way through their >> journey and is the What's the life cycle of these service is? Is it Is it both pre sales in and post there? Just posted >> many times we'll get involved like our cloud advisers would get involved. Presale. They'll say a specific workload wants to go to the cloud. What are the steps we need to take to make that happen? 
A CZ well, with our Laps Service's teams, you know, we kind of have, you know, anywhere from a 4 to 6 week engagement. Thio do a specific technology. Let's get it in place. Let's get it wired in et cetera, and then in the garage is you know, we could just take a very novel idea and get it up to, ah, minimal viable product in about a six week period. So again, we're not doing dance lessons for life but strategically placing key skills in with accounts toe. Help him get over that next hump of their journey. >> Steve, when you look at the spectrum from from public all the way down to private and everything in between are you, I wonder if you could describe the level of capability that you are able to achieve with the best practice on Prem with regard to cloud ability. It's service is all the wonderful attributes of child that we've come to know and love. Are you able to, you know, somewhat replicate that roughly replicate that largely replicate, exactly. Replicate that. Where are we today? >> Yeah, I think >> it's a great question. I think. You know, I think most of the clients that we're dealing with have been dealing with some virtualized infrastructure, probably more VMC as they as they've been kind of progressing. That story. One of the things we did it IBM is Could we bring a true cloud infrastructure back behind the firewall? Could we bring an open stack? We bring a cloud foundry base past all the way back through because the goal, of course, is if we could have the same infrastructure private, dedicated and public as they continue to grow and got more comfortable with the public cloud that could start taking work clothes that they had built in one location and start to migrate it out with you. That that local cloud the Maur used for EJ cases. So taking that system of record and building a p i's and allowing to do extensions to that allowing you access into data records that you have today dealing with a lot of extension type cases, you know the core application still needs to be federally regulated. It needs to be under compliance domain. It's gotta be under audit. But maybe I wantto connect it in with a Fitbit or connected in with with a lot Soon are connected in with the Internet of things sensor. I gotta go public cloud for that as well. So locally we can bring that same infrastructure in and then they could doom or service. Is that extended out in the hybrid scenario >> code basis? Because this has come up. Oracle claims this is their big claim to fame. That code base is the same on premise hybrid public. Is that an issue with that? Is that just their marketing, or does it matter what's IBM take on this? >> But we've done ah lot of work with the open standard communities to let's get to a true reference implementation. So on open Stack, we've been doing a lot of work with them, and this is one of the reasons we picked up the Blue box acquisition. Could we really provide a standard open stack locally and also replicate that dedicated and, of course, have it match a reference architecture in public as well? We've also done the same thing with clout. Foundry worked with Sam Ram G to be one of the first vendors, have a certified cloud. Foundry instance is the same local dedicated in public. I think that's kind of the Holy Grail. If you could get the same infrastructural base across all, three, magic can happen. >> But management's important and integration piece becomes the new complexity. I mean, I would say it sounds easy, but it's really hard. Okay, developing in the clouds. 
Easy, easier ways always used to be right, right well, but not for large enterprises. The integration becomes that new kind of like criteria, right? That separates kind of the junior from the senior type players. I mean do you see the same thing and what we believe >> we do? I think there's usually two issues. We start to see that this model looks great. Let's have the same code base across all three environments. What things? We noticed that a lot of folks, when you get into Private Cloud, had tried to roll their own. You know, open Stack is an open source Project clout. Foundry is an open source project. Let's pull it down and let's see units roll it out and manage it ourselves. These air a little bit you they're very dynamic environments, and they're also a bit punishing if you don't stay current with them, both of them update on a very regular basis. And we found a lot of firms once they applied tenor well, folks to it, they just could not keep up with the right pace of change. So when the technologies we invented was a notion called relay on, this allowed us to actually to use the public cloud is our master copy and then we could provide updates to get down to the dedicated environment and down to the local. This takes the headache completely away from the firm's on trying to keep that local version current. It's not manage service, but it's kind of a new way that we can provide manage patches down to that environment. >> So one of the problems we hear in our community is and presume IBM has some visibility on this. I'm thinking about last year, John, we're at the IBM Z announcement in January, rose 1,000,000 company talked a lot about bringing transaction analytic capabilities together. But one of the problems that our community has practitioners in our community course the data for analytics. A lot of it's in the cloud and a lot of transaction data sitting, you know, on the mainframe, something. How do they bring those two together? Do I remove the data into the data center? Do I do I move pieces in how you see >> we're seeing a lot of that. A lot of it was. Bring the technology down to where the data is, and and now you know the three amount of integration you can do with public data sources, private data sources, et cetera. We're seeing a lot more of the compute want to go out to the cloud as well. You know, we've done some things like around the dash, CB Service's et cetera, where I can start to extract some of that transactional data, but maybe only need a few pieces to really make the data set. That is important to me as I move it out, so I can actually, you know, extract that record. I can actually mask it into being something brand new, and then I could minute we mix it with public data tohave. It do brand new things as well, so I think you're gonna see a lot of dynamic capability across that with or cloud computing technologies coming back behind the firewall and then more ability to release that data be intermixed with public data as well. >> What's the number one thing that you're seeing from customers that you guys were executing on? There's always the low hanging fruit for the easy winds from bringing a team of street team, if you will out. Technical service is out to clients where they really putting that gather, not their five year plans, but their one year. Of course, there's a lot of that agile going on right now. New technologies. You can't isolate one thing and break everything. Za new model. What a customer is caring about, right? What's that? 
>> What's the number one thing that you're seeing from customers that you guys are executing on? There's always the low-hanging fruit, the easy wins, from bringing a team, a street team if you will, of technical services out to clients, where they're really putting it together, not their five year plans but their one year plans. Of course, there's a lot of agile going on right now, new technologies; you can't isolate one thing and break everything. It's a new model. What are customers caring about? What's the common thing? >> I think in 2015 the discussion changed and went from "are we going to go to the cloud" to "we're going to the cloud now, how are we going to do it?" And the nice thing is, I think a lot of enterprise architecture groups kind of took a step back to say: what do we truly have to do? What is a common platform? What is an integration layer? How do we take some of our old applications and decompose those into a set of APIs? How can we then mix that with public APIs? So probably taking one or two projects to be proof points, so they can say this thing really has the magic associated with it, we can really build stuff fast if we do it the right way. It's going to be a catalyst to have the IT organization take the tough steps: what's going to be the commonality, what common services are we going to use, and how do we start breaking things up? >> You know, we have our own data science and back-end operation, and one of the things we always looked at with Bluemix: we started on Amazon, but now with Bluemix you have a couple of things kind of coming together in real time. You said it's getting hardened, and those hardened areas are important: identity, for instance, where the data is, whether it's unstructured or structured. I want a little Mongo here or something over there, and with Bluemix, Compose.io really has a nice fit. I want you to explain to the folks, we talked before you came on about this new dynamic of Compose.io and some of the things that are gluing together around Bluemix. Could you share that? >> Right. And I think people look to the cloud data services as probably the most critical, the most visible, and the ones we have to harden up the most as well, even though IBM has been well known for DB2. >> You acquired Compose, right? >> We did Cloudant first, and then we followed up with Compose.io about eight months ago. What we liked about it was all of your favorite flavors, you know: your Postgres, your Mongo, your Redis. But really having it behave like what you would want an enterprise database to do. You can back it up, you can have multiple versions of it, it can replicate itself. >> It's a perfect cloud-native service. >> It has all the cloud properties to it and all the enterprise-grade capabilities with it. We've got that now in public, then you're going to start seeing dedicated. >> And if you want to go bare metal, just go to SoftLayer. It's not required, right? These things will work in the cloud, and then if you want the bare metal, push it out to bare metal, no problem. >> Well, I think, you know, it's almost like hybrid is going to get a new definition around it. So it's all going to be around control and automation, more automation. You can go all the way up to a Cloud Foundry, where it's managing all the health checking and keeping your app alive, et cetera. If you want to go all the way down to bare metal so you can tune it, audit it, et cetera, you can do that as well. I think we've got one of the broader spectrums there. >> I'm impressed with Compose, I've got to say. What do I get excited by? I get excited by just about everything. I just love the whole DevOps thing; it's been a game changer. Infrastructure as code has been around for a while, but it's actually going totally mainstream. That's right. The benefits are just off the charts.
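To make the Compose discussion concrete, here is a minimal sketch of what an "enterprise-grade" managed database looks like from the application side, using MongoDB as one of the flavors Steve lists. The connection string, credentials, database, and collection names are hypothetical placeholders, not a real Compose endpoint; a managed service would normally hand you the multi-host, replica-set URI.

```python
# Sketch of consuming a managed, replicated MongoDB service: the app gets a
# replica-set connection string and lets the driver handle failover and read
# routing. All endpoint and credential values below are hypothetical.

from pymongo import MongoClient, ReadPreference

# A managed service like Compose would typically supply a multi-host URI.
URI = ("mongodb://app_user:app_pass@"
       "host-a.example.com:10123,host-b.example.com:10456/"
       "orders?replicaSet=set-demo")

client = MongoClient(URI)

# Writes go to the primary; reads can be spread across secondaries, which is
# part of what "it can replicate itself" buys you operationally.
db = client.get_database("orders",
                         read_preference=ReadPreference.SECONDARY_PREFERRED)

db.line_items.insert_one({"sku": "ABC-1", "qty": 2})
print(db.line_items.find_one({"sku": "ABC-1"}))
```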
With mobile, we had the mobile-first guys on. Earlier, in the Swift segment, we had Tanmay, the 12-year-old kid. I mean, it's just really amazing. Now the apps themselves aren't the discussion, it's what's under the hood. That's right, so you can have an app look and feel like it's targeted for a vertical, say retail or whatever, but the action is under the hood, more than ever now. >> You know, it's funny. This year, at the DevOps session yesterday, the amount of proof points we had around it; last year we were scrambling a little bit, and this year we almost had to thin it out. That's how many folks are having great success. This stuff is coming into its own. >> It totally is. And I'll give you guys props: you're running as fast as you can and you're working hard, and it's not just talk. It's legit. I'm going to ask you a question. What are the big learnings from last year to this year? What's happened? What do you look back and say, wow, we really learned a lot, or something that might have been magnified for you in this journey this past year? >> A lot of it goes back to, you know, this changing culture at IBM. The amount of code we put out in two years was just unbelievable. But also IBM becoming a true cloud company. Some of that we did with our own shop, and some we did by injecting it with acquisitions, you know, like Compose.io, the Cloudant team, the Blue Box guys, et cetera. I think we've got the chops now to play pro ball. We worked very hard on how many folks we can attract to Bluemix; we're getting up to 20,000 a week right now. We're starting to get some great recognition, and the successes are rolling in as well. So a lot of hard work and a lot of busted knuckles. A lot of guys are tired, but we're definitely, definitely in the game now. >> Ready for the big leagues? Speaking of pro ball, Cube Madness starts soon; you know, we match up all the brackets of the Cube alumni and vote on it, and it turns into a hackathon because everyone stuffs the ballots. Let's talk about pro ball for next year as you guys continue. The theme here obviously is the developer; I mean, the show could be dedicated 100% to it. We had LeBlanc up there kind of going fast at the end, up against the clock, no more time, right? >> Right, like the Star Wars trailer we had going up; he needed more time. >> So good props for you this year. What's on the roadmap this year? What are some of the critical goals you see for your group, and then just in general? >> A lot of the activity we're going to be doing again is hardening the stack. I've got a brand new team now called Solution Architecture, where we're looking at it from top to bottom, taking customer scenarios and really testing them out. How do you do backup? How do you do disaster recovery? How do you do multi-geography? You know, things like PCI compliance. The real enterprise problems are now coming to the cloud, and they're global. And with security and compliance, they're changing in a very dynamic fashion. We have to show how you can do those in the cloud. You'd be amazed how many conversations we have with CISOs every single week: is the cloud secure? How do we do enterprise-grade workloads? IBM is bringing that story to the cloud as well.
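One small example of the multi-geography scenarios a solution architecture team has to prove out, sketched under the assumption of two regional API endpoints (both URLs hypothetical): probe the primary region's health endpoint and fail over to the secondary if it does not answer.

```python
# Sketch of a multi-region health check: prefer the primary region, fall back
# to the secondary when it is unreachable or unhealthy. Endpoints are made up.

import requests

REGIONS = [
    ("us-south", "https://api.us-south.example.com/health"),
    ("eu-gb",    "https://api.eu-gb.example.com/health"),
]

def pick_healthy_region(timeout=2.0):
    """Return the name of the first region whose health endpoint answers 200."""
    for name, url in REGIONS:
        try:
            if requests.get(url, timeout=timeout).status_code == 200:
                return name
        except requests.RequestException:
            continue  # region unreachable, try the next one
    raise RuntimeError("no healthy region available")

if __name__ == "__main__":
    print("routing traffic to:", pick_healthy_region())
```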
>> That content curation is unbelievable, right? That's the hardest part. >> And it's not that we have it fixed either, but we're doing more to aggregate it together so we can really pull it all into one place. I call it the diamond mine versus the jewelry store. You know, the great answers are out there somewhere, but you have to start pulling them together into a single place. So one of the things we did this year was launch the Bluemix Garage methodology, where we took all of our best practices, we took test cases, even sample code, and brought it into a single methodology site where people can go out, pull it down, use it, et cetera. Previously we had it scattered all over the place, and we're going to be doing more things like that: bringing the assets to the programmers, things we've tried, things we've tested, being more open about it, putting it in a single location. >> Well, we certainly would like to help promote that, any of those kinds of customer reference architectures. Happy to pump that out on SiliconANGLE. What's the outlook, I'm sorry, the vibe for the show this year? What's the vibe this year? >> You know, I've been very impressed with it, and I think we've been stepping up our game. If you go down to the Bluemix Garage area, there's a motorcycle on stage, you know, kind of getting a little more hip and happening as well. But I think it's the clients here, and this is always about the customer stories: some of the things we're hearing, from the two or three person startups that are doing GPS logistics management, to the big accounts and the big banks that you really see have embraced the cloud and are doing great stories on it as well. I think people come to this show to see what their peers are doing, and they definitely walk away with a sense that the cloud is real, it's happening, and 2016 is really going to drive it home. It has to be part of everybody's strategy. >> Motorcycles, I had to put on the Harley, man. >> We'll take it for a spin, guaranteed. Come on down. >> I'd have to ask my wife; when I got married, it was in the terms and conditions. That's right. Well, thanks, Steve, for taking the time, and great to see you again. Congratulations on the technical engagement team that you have and all the work that you did with Bluemix, noted certainly by theCUBE. Congratulations, and continued success with Bluemix. >> Thank you guys. Always a pleasure. >> Okay, Cube Madness starts March 15th. Cube Gems, go to Twitter; and speaking of jewelry, we have Cube Gems, hashtag CubeGems, the highlights of the videos, up there in real time. And of course, all the action videos are up there right now. I'll be right back with more coverage after this short break here in Las Vegas.

Published Date : Feb 23 2016
