Sandy Carter, AWS | AWS re:Invent 2021


 

(upbeat music) >> Welcome back to theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, host of theCUBE. You're watching theCUBE, the worldwide leader in tech coverage. We're in person on the show floor. It's also a hybrid event, online as well — CUBE coverage online with the Amazon re:Invent site. Great content all around, amazing announcements; transformation in all areas is exploding, and innovation — of course, we have innovation here with Sandy Carter, the worldwide public sector vice president of partners and programs for Amazon Web Services. Sandy, welcome back, CUBE alumni. Great to see you. Thanks for coming on theCUBE. >> Great to see you, and great to see you in person again. It's so exciting. The energy level, oh my God. >> Oh my God. It's so much. Thanks — great keynote. Good to see you again in person. A lot of action. Give us the top announcements. What's going on? What are the top 10 AWS announcements? >> Yeah, so this year, for 2022, as we frame it out, we decided on a 3D strategy, a three-dimensional strategy. So we started with destination, then data, and then delivery. So if I could do them in that order, does that sound good? >> Yeah. Destination. >> So let's start with destination. So I got this from one of the customers, and he said to me, "Look, Sandy, I thought it was all going to be about getting to the cloud. But when I got to the cloud, I realized it wasn't just about being in the cloud, it was about what you do in the cloud." And so we made some announcements this morning, especially around migration, modernization, and optimization. So for migration, we have the mainframe announcement that Adam made, and then we also echoed it, 'cause most of the mainframes today sit in public sector. So this is a managed service; it's working with Micro Focus, one of our partners. And Lockheed Martin, one of our partners, is one of the first into the mainframe migration, which is a service — and services — to help customers transform their business with the mainframe. And then, complementing that, we also have modernization occurring. So, for example, IoT. IDC tells us that IoT, and that data, has increased four times since COVID, because now devices and sensors are tracking a lot of data. So we made an announcement around smart cities, and we now have badging for our partners. We have 18 partner solutions now in smart cities. So working backwards from the partners, they were telling us that, given where COVID is now, smart cities — making those cities work better in public transportation and utilities — is just all where it's at. And then the final announcement in that category is containers. So 60% of our customers said that they're going to be using containers. So we announced a Rapid Adoption Assistance program for our partners, to be able to help our customers move to containers overall. >> So mainframe migration, I saw that on stage with Micro Focus — that was a good job. Get that legacy out of the way, move to the cloud. You've got smart cities, which is basically IoT, which brings cloud to the edge. And then containerization for the cloud native — either development or compatibility, interoperability — kind of sets that table. That's the destination. >> That's right. That's right. Because all of those things, you know — you've got to get the mainframe to the cloud, but then it's about modernizing, right? Getting rid of all that COBOL code, and then, you know, IoT, and then making sure that you are ready to go with containers.
It's the newest- >> So you've got the 3D: destination, data, and delivery. >> That's right. >> Okay. Destination, check. Cloud. Cloud destination. >> Yeah. >> I'm putting dots together in real time. >> Destination cloud. There you go. You've got it. >> I'm still with it after all these interviews. >> Yeah, there you go. >> Data — I saw killer Swami onstage today, whole new data, multiple databases. What's the data focus in this area? >> So for our partners, first it's about getting the data to the cloud, which means that we need a way to really migrate it. So we announced an initiative to help get that data to the cloud. We had a set of partners that came on with us early in this initiative to move that data to the cloud; it's called Rapid Adoption Assistance, which helps you envision where you want to go with your data. Do you want to put it in a data lake? Do you want data stored as it is? What do you want to visualize? What do you want to do with analytics? So envision that, and then get enablement — so all the new announcements, all the new services, get enablement — and then pilot it. And then the second announcement in this area is a set of private offers in the marketplace. Our customers told us that they love to go after data, but that there are too many pieces and moving parts. So they need the assessment bundled with the managed service and everything bundled together, so it's a solution for them. So those were our two announcements in the data area. >> So take me through the private marketplace thing, because this came up when I was talking with Stephen Orban, who's now running the marketplace. What does that mean? So you're saying that this private offer is enabling the suppliers, and in government? >> Yeah. So it's available in the marketplace, and a lot of our government agencies can buy from the marketplace. So if they have a contract, they can come and buy. But instead of having to go and say, okay, here's an assessment to tell me what I should do, now here's the offering, and now here's the managed service — they want it bundled together. So we have a set of offerings that have all that bundled together today, with a set of our great public sector partners. >> So tons of data action. Where does the delivery fit in? >> So, delivery. This one is very interesting, because our customers are telling us that they no longer want just technology skills; they also need industry skills too. So they're looking for that total package. For example, you know, the state of New Jersey, when Hurricane Ida hit — a category four storm — they wanted someone who obviously could leverage all the data, but they wanted someone who understood disaster response. And Maxar fits that bill. They have that industry specialty along with the technology specialty. And so for our announcements here, we announced a new competency, which is an industry competency for energy. So think about renewables and sustainability and low carbon. These are the partners that do that. We have 32 different partners who met the needs of that energy competency, so we were able to GA that here today. The other really exciting announcement that we made was for small businesses to get extra training; it's called Think Big for Small Business Communities. We announced Think Big for Small Business virtually last year. We now have about 200 companies who are part of that program, really getting extra help as diverse companies — women-owned, Black-owned, brown-owned, veteran-owned businesses, right?
But now, what they told us was, in addition to the AWS help, what they loved is how we connected them together — and we almost just stumbled upon it. I was hosting some meetings, and I had Tia from Bellflower, I had Lisa from DLZP together, and they got a lot of value just being connected. And we kept hearing that over and over and over again. So now we've programmatized that, so it's more scalable than me introducing people to each other. We now have a program to introduce those small business leaders to each other. And then the last one that we announced is that our AWS government competency is now the largest competency at AWS — the government competency, which is pretty powerful. So now we're going to do a focused enhancement for federal. So all of our federal partners, with all that opportunity, can now take advantage of a private advisory council, some additional training that will go on there, and additional go-to-market support that they can use to help them. >> Okay. I feel like my brain is going to explode. Those are just the announcements here. There's a lot going on. >> Yeah. There's a lot going on. >> I mean, it's so much you've got to put them into buckets. Okay. What's the rationale around 3D? Delivery, data... I mean, destination, delivery, data. Destination, meaning cloud. Data, meaning data. And delivery, meaning just new ways to get up and running- >> Skills. >> To get this delivery for the services. >> Yep. >> Okay. So is there a pattern emerging? What can you say? 'Cause remember, we talked about this before, a year ago, as well as in person at your Public Sector Summit with your partners. Is there a pattern emerging that you're seeing here? 'Cause lots of the announcements are coming — done with the mainframes, Connect on your watch has been a big explosion. Adam Selipsky told me personally, it's on fire. And public sector, we saw a lot of that. >> Well, in fact, you know, if you look at public sector, three factoids that we shared this morning in the keynote: our public sector partners grew 54% this year — this is after last year we grew 45%. They grew the number of certifications that they had by 40%, and the number of new customers by 32%. I mean, those are unreal numbers. Last year we did 28% new customers, and we thought that was the cat's meow; now we're at 32%. So our partners are just exploding in this public sector space right now. >> It's almost as if they have an advantage because they dragged their feet for so long. >> It's true. It's true. COVID accelerated their movement to the cloud. >> A lot of slow-moving verticals, because of the legacy, and whether it's regulation or government funding or skills- >> Or mainframes. >> All had to basically move fast; they had no excuses. And then the cloud kind of changes everyone's mindset. How about the culture? I want to ask you about the culture in the public sector, because this is coming up a lot. Again, a lot of your customers that I'm interviewing all talk... and I try to get them to talk about horizontal scalability and machine learning, and they're always: no, it's culture. >> Yeah. It's true. >> Culture is the number one thing. >> It is true. You know, culture eats strategy for lunch. So even if you have a great strategy around the cloud, if you don't have the right culture, you won't win in the marketplace. So we are seeing this a lot. In fact, one of our most popular programs is PTP, the Partner Transformation Program. And it lays out a hundred-day program on cloud best practices. And guess what's the number one topic? Culture.
Culture, governance, technology — all of those things are so important right now. And I think it's because, you know, a lot of the agencies and governments and countries had moved to the cloud, and now that they're in the cloud — they went through that pain during COVID — now they're seeing all the impact of artificial intelligence and containers and blockchain and all of that, right? It's just crazy. >> That's a great insight. And I'll add to that, because I think one of the things I've observed, especially with your partners, is that the fear of getting eliminated by technology, or the fear of having a job change, or fear of change in general, went away once they started using it, because they saw the criticality of the cloud and how it impacted their job, but then what it offered them as new opportunities. In fact, it actually opens up more areas to innovate on and do more, whether it's job advancement or cross-training or lateral moves, promotion — that's a huge retention piece. >> It really is. And I will tell you that the movement to the cloud enabled people to see it wasn't as scary as they thought it was going to be, and that they could still leverage a lot of the skills that they had and learn new ones. So I think it is. And this is one of the reasons why — I was just talking with Maureen — we're launching that program to train 29 million people on the cloud. That really touches public sector, because there are so many agencies, countries, governments that need to have that training. >> You're talking about Maureen Lonergan; she does the training. She's been working on that for years. >> Yeah. >> That's only getting better and better. >> Yeah. >> Well, Sandy, I've got to ask you, since you have a few minutes left, I want to ask you about your journey. >> Yeah. >> We've interviewed you going back a long time. Look where we are now. >> I know. It's incredible. >> Look at these two sets going on at theCUBE. >> You've been an incredible voice on theCUBE. We really appreciate having you on, because you're innovative. You're always moving like a shark. You can't sit still. You're always innovating. Still going on — you had the great women's luncheon, from 20 to 200. >> Yeah, we grew. So we started out with 20 people back five years ago, and now we had about 200 women, and it was incredible, because we do different topics. Our topic was around empathy and empathetic leadership, and, you know, how you can really leverage that today, back with the skills and your people — you know, given that Amazon just announced our new leadership principle about wanting to be the Earth's most employee-centric company. It fits right in: empathetic leadership. And we had amazing women at that luncheon who told some great stories about empathy that I think will live in our hearts forever. >> And the other thing I want to point out: we had some of the guests sitting on theCUBE. We had Linda Jojo from United Airlines. >> Oh yeah. >> And a little factoid: yesterday in the keynote, 50% of the speakers were women. >> I know. The first time — I did a blog post on it. Like, we had two amazing women in STEM, and we had, you know, the Black pilot that was highlighted. So it's showing more diversity. So I was just so excited. Thank you, Adam, for doing that, because I think that was an amazing, amazing focus here at the conference. >> I wanted to bring up a point. I had a note here to bring up to you. Public sector: you guys doubled the number of partner large migrations this year. That's a big stat.
You've had 575,000 individuals hold active certifications. Okay. That grew 40% from August 2021 — clearly a pandemic impact. A lot of people jumping back in, getting their certs, migrating. So if they're not... they're in between transitions, where they have a tailwind or a headwind. Whether you're United Airlines or whether you're Zoom, you've got some companies that were benefiting from the pandemic and some that were retooling. That's something that we talked about, actually, at the beginning. >> That's right. Absolutely. And I do think that those certifications also demonstrate that customers have raised the bar on what they expect from a partner. It's no longer just that technology input; it's also that industry side. And so you see the number of certifications going up because customers are demanding a higher skill level. And by the way, for the partners, we conducted a study with ESG, and ESG said that as a more skilled partner, you drive more margin — profit margin — 42% more profit margin for a higher-skilled partner. And we're seeing that really come to fruition with some of this really intense focus on getting more certifications and more training. >> I want to get your thoughts on healthcare and life sciences. I just got a note here that tells me that the vertical is one of the fastest growing verticals, with 105% year-on-year growth. Healthcare and life sciences — another important one... again, a lot of legacy, a lot of old silos, forced to expand and innovate with the pandemic growing. >> Yes. You know, government is our largest segment today, our largest competency. Healthcare is our fastest growing segment. So we have a big focus there. And like you said, it's not just about, you know, seeing things stay the same. It's about digital transformation. It's one of the reasons we're also seeing such an increase in our Authority to Operate program, both on the government side and the healthcare side. So we do, you know, FedRAMP and IL5. We had six companies that got IL5, five of them in 2021, which is an amazing achievement. And then, you know, if you think about the healthcare side, our fastest growing compliance is HIPAA and HITRUST. And that ATO program really brings best practices and templates and stronger go-to-market for those partners too. >> Yeah. I mean, I think it's opportunity recognition, and then capture, during the pandemic, with the cloud. More agility, more speed. >> That's right. >> Sandy, always great to have you on. In the last couple of seconds we have left, summarize the top 10 announcements in a bumper sticker. If you had to kind of put that bumper sticker on the car as it drives away from re:Invent this year, what's on that bumper sticker? What's it say? >> Partners that focus on destination, data, and delivery will grow faster and add more value to their customers. >> There it is. The three dimensions, DDD: destination, data, and delivery. >> There you go. >> Here on theCUBE, bringing you all the data live on the ground here at CUBE studios, two sets, wall-to-wall coverage. You're watching theCUBE, the leader in global tech coverage. I'm John Furrier, your host. Thanks for watching. (soft techno music)

Published Date : Dec 2 2021



Reliance Jio: OpenStack for Mobile Telecom Services


 

>>Hi, everyone. My name is Mayank Kapoor. I work with Jio — Reliance Jio — in India. We call ourselves Jio Platforms now, and we've been recently in the news: we've raised a lot of funding from some of the largest tech companies in the world. And I'm here to talk about Jio's cloud journey and the Mirantis partnership. I've titled it the story of an underdog becoming the largest telecom company in India within four years, which is really special. And we were, of course, helped by the cloud. So, quick disclaimer, right: the content shared here is only for informational purposes. Um, it's only for this event, and if you want to share it outside, especially on social media platforms, we need permission from Jio Platforms Limited. Okay, quick intro about myself. I am a VP of engineering at Jio. I lead the Cloud Services and Platforms team within Jio, and I've been at Jio since the beginning, since it started, and I've seen our cloud footprint grow from a handful of bare metals to now eight large application data centers across three regions in India. And we'll talk about how we got here. All right, let's give you an introduction to Jio, right — on how we became the largest telecom company in India within four years, from 0 to 400 million subscribers. And I think there are a lot of events that defined Jio, and that will give you an understanding of how Jio does things and what we did to overcome massive problems in India. So the slide that I want to talk to is this one, and, uh, the headline I've given is that Jio is the fastest growing tech company in the world, which is not an understatement. It's actually quite literally true, because very few companies in the world have grown from zero to 400 million paying subscribers within four years. And I consider Jio's growth in three phases, which I have shown on top. The first phase we'll talk about is how Jio grew in the smartphone market in India, right, and what we did to, um, really disrupt the telecom space in India in that market. Then we'll talk about the feature phone phase in India and how Jio grew in the feature phone market in India. And then we'll talk about what we're doing now, which we call the Jio Platforms phase. Right. So Jio is by default a 4G LTE network. Right. There are no 2G or 3G networks that Jio has. Um, it's a state-of-the-art 4G LTE, voice-over-LTE network, and because it was designed fresh, right, without any 2G and 3G, um, legacy technologies, there were also a lot of challenges when Jio was starting up. One of the main challenges was that the smartphones being sold in India when Jio was launching, right, in 2016, did not have the voice-over-LTE chipset embedded in them, because the chipset is far costlier to embed in smartphones, and India is a very price-sensitive market. So none of the manufacturers were embedding the 4G VoLTE chipset in their smartphones. But Jio is only a VoLTE network, right — VoLTE for the whole network. So we faced a massive problem where we said, look, there are no smartphones that can support Jio. So how will we grow Jio? So in order to solve that problem, we launched our own brand of smartphones called the LYF, um, smartphones. And those phones were really high-value devices. They were $50, and for $50, at that time, you got 4 GB of storage space, a nice big display — a four-inch display — dual cameras, and, most importantly, they had VoLTE chipsets embedded in them.
Right? And that got us our initial customers, the launch customers, when we launched. But more importantly, what that enabled — what that forced the OEMs to do — is that they also had to launch similar, competing smartphones with VoLTE chipsets embedded, in the same price range. Right. So within a few months, three to four months, um, all the other OEMs, all the other smartphone manufacturers — the Samsungs, the Micromaxes (Micromax is in India) — they all had VoLTE smartphones out in the market, right? And I think that was one key step we took: launching our own brand of smartphone, LYF, that helped us to overcome this problem that no smartphone had VoLTE chipsets in India. And then, when we were launching, there were about 13 telecom companies in India. It was a very crowded space, and in order to gain a foothold in that market, we made a few decisions — a few key product announcements — that really disrupted this entire industry. Right? So, um, Jio is by default a 4G LTE network, an all-IP network, Internet Protocol in everything, all data. It's an all-data network, and everything from voice to data to Internet traffic, everything goes over Internet Protocol, and the cost to carry voice on our smartphone network is very low, right? The bandwidth voice consumes is very low in the entire LTE band. Right? So what we did was, in order to gain a foothold in the market, we made voice completely free, right? We said you will not pay anything for voice, and across India we will not charge any roaming charges. Right? So we made voice free completely, and we offered the lowest data rates in the world. We could do that because we had the largest capacity to carry data in India of all the other telecom operators. And these data rates were unheard of in the world, right? So when we launched, we offered a $2 per month or $3 per month plan with unlimited data. You could consume 10 gigabytes of data all day if you wanted to, and some of our subscribers did. Right? So that's the first phase of our growth, in smartphones, and that really disrupted things. We hit 100 million subscribers in 170 days, which was very, very fast. And then after the smartphone phase, we found that India still has 500 million feature phones, and in order to grow in that market, we launched our own phone, the JioPhone, and we made it free. Right? So if you took a Jio subscription and you stayed with us for three years, we would make this phone free for you: refund the initial deposit that you paid for this phone. And this phone also had quite a few innovations tailored for the Indian market. It had all of our digital services for free, which I will talk about soon. And, for example, you could plug in a cable — an RCA or HDMI cable — into the JioPhone, and you could watch TV on your big-screen TV from the JioPhone. You didn't need a separate cable subscription to watch TV, right? So that really helped us grow, and the JioPhone is now the largest-selling feature phone in India, and there are about 100 million feature phones in India now. So now we're in what I call the Jio Platforms phase. We're growing with JioFiber — fiber to the home, fiber to the office, um, space — and we've also launched our new commerce initiatives, our e-commerce initiatives, and we're steadily building platforms that other companies can leverage, that other companies can use, in the Jio cloud. Right?
So this is how a small startup — not a small startup, but a startup nonetheless — reached 400 million subscribers within four years: the fastest growing tech company in the world. Next, Jio also helped a systemic change in India, and this is massive. A lot of startups are building on this India Stack, as people call it, and I consider this India Stack as made up of three things, and the acronym I use is the JAM trinity, right. So, um, in India, systemic change happened recently because the Indian government made bank accounts free for all one billion Indians. There were no service charges to store money in bank accounts. These are called the Jan Dhan bank accounts — the J of the JAM. Then, India is one of the few countries in the world to have a digital biometric identity, which can be used to verify anyone online, which is huge. So you can simply go online and say, I am Mayank Kapoor, and I verify that this is indeed me who's doing this transaction. This is the A in the JAM — Aadhaar. And the last M stands for mobiles, which were helped by Jio mobile Internet. And on top of it, there is also something called UPI, the Unified Payments Interface. This was launched by the Indian government, where you can carry out digital transactions for free. You can transfer money from one person to another essentially for free, for no fee, right? So I can transfer one rupee, even one Indian rupee, to my friend without paying any charges. That is huge, right? So you have a country now with a billion people who have bank accounts, money in the bank, who you can verify online, right, and who can pay online without any problems through their mobile connections, helped by Jio, right? So suddenly our market, our Internet market, exploded from a few million users to now 500, 600 million mobile Internet users. So that, I think, was a massive systemic change that happened in India. There are some really large, um, numbers for this India Stack, right? In one month — there were 1.6 billion UPI transactions in the last month, which is phenomenal. So next, what is the impact of Jio in India? Before we started, we were 155th in the world in terms of mobile broadband data consumption. Right. But after Jio, India went from 155th to first in the world in terms of broadband data, largely consumed on mobile devices. We're a mobile-first country, right? We have a habit of skipping technology generations, so we skipped fixed-line broadband and are basically consuming Internet on our mobile phones. On average, Jio subscribers consume 12 gigabytes of data per month, which is one of the highest rates in the world. So Jio has a huge role to play in making India the number one country in terms of broadband data consumption, and Jio is responsible for quite a few industry firsts in the telecom space — and, in fact, in the India space, I would say. So before Jio, to get a SIM card, you had to fill in a form, a physical paper form. It used to go to, ah, a local distributor, and that local distributor used to check the form, that you filled it in correctly, for your SIM card, and then that used to go to the head office, and everything took about 48 hours or so, um, to get your SIM card. And sometimes there were problems there also. With Aadhaar biometric authentication, we enabled something — uh, India enabled something — called eKYC: electronic Know Your Customer.
We took a fingerprint scan at our point-of-sale Reliance Digital stores, and within a few minutes — within a few seconds — we could verify that the person is indeed Mayank, right, buying the SIM card, electronically, and we activated the SIM card in 15 minutes. That was a massive deal for our growth initially, right, to onboard 100 million customers within 170 days. We couldn't have done it without eKYC. That was a massive deal for us, and that is huge for any company starting a business, or a startup, in India. We also made voice free, no roaming charges, and the lowest data rates in the world. Plus, we gave a full suite of cloud services for free to all Jio customers. For example, we give JioTV essentially for free — we give it all for free — which people, when we were launching, told us no one would use, because Indians like watching TV in the living room, um, with the family, on a big-screen television. But when we actually launched, we found that JioTV is one of our most-used apps. It's at something like 70, 80 million monthly active users, and now we've basically been changing culture in India, where culture is on demand. You can watch TV on the go, and you can pause it, and you can resume whenever you have some free time. So we really changed culture in India, and we help people live a digital life online. Right, so that was massive. So now I'd like to talk about our cloud journey and our Mirantis partnership. We've been partners since 2014, since the beginning. So Jio has been using OpenStack since 2014, when we started with a 14-node cluster, our one production environment. Right? And that was — I call it — the first wave of our cloud, where we were just understanding OpenStack, understanding the capabilities, understanding what it could do. Now we're in our second wave, where we're at about 4,000 bare-metal servers in our OpenStack cloud, multiple regions, um, and around 100,000 CPU cores, right. So it's one of the bigger clouds in the world, I would say, and almost all teams within Jio are leveraging the cloud, and soon I think we're going to hit about 10,000 bare metals in our cloud, which is massive. And just to give you the scale of our network, our infra, our data center footprint: our network infrastructure is about 30 network data centers that carry just network traffic across India, and we're at about eight application data centers across three regions. A data center is like a five-story building filled with servers. So we're talking really significant scale in India. And we had to do this because, when we were launching, there's government regulation — TRAI, the Telecom Regulatory Authority of India, mandates that any telecom company has to store customer data inside India — and none of the other cloud providers were big enough to host our clouds. Right. So we built all this infrastructure for ourselves, and we're still growing. Next, I'd love to show you how we've grown together with Mirantis. We started in 2014 with the Fuel deployment pipelines, right? And then we went on to newer deployment pipelines as our cloud started growing. We started understanding the clouds, and we picked up MCP, which has really been a game changer for us in automation, right? And now we are on the latest release of MCP, MCP 2019.2, on OpenStack Queens, which we've just upgraded all of our clouds to over the last few months — couple of months, two to three months.
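To give a concrete flavor of what working with a multi-region OpenStack cloud looks like from the API side, here is a minimal sketch using the openstacksdk library. This is not Jio's actual tooling: it assumes the openstacksdk package plus a clouds.yaml profile with admin credentials, and the profile and region names are invented for illustration.

    import openstack

    # Hypothetical region names; a real deployment would read these
    # from its clouds.yaml or cluster model.
    REGIONS = ["region-1", "region-2", "region-3"]

    def inventory(cloud_name: str = "jio-private") -> None:
        for region in REGIONS:
            # One connection per region; openstack.connect() reads clouds.yaml.
            conn = openstack.connect(cloud=cloud_name, region_name=region)
            # Admin-scoped query across all projects (tenants).
            servers = list(conn.compute.servers(all_projects=True))
            print(f"{region}: {len(servers)} servers")

    if __name__ == "__main__":
        inventory()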
So we've done about nine production clouds, and there are about 50 internal, um, teams consuming cloud, which we call our tenants, right. We have OpenStack clouds, and we have Kubernetes clusters running on top of OpenStack. There are several production-grade workloads that run on this cloud. The JioPhone, for example, runs on our private cloud. JioCloud, which is a backup service like Google Drive and a collaboration service, runs out of our cloud. Jio GST, which is a tax-filing system for small and medium enterprises; our retail point-of-sale service — there are all these production services running on our private clouds. We're also empaneled with the government of India to provide cloud services to the government, to any state department that needs cloud services. So we were empaneled by MeitY, right, in their GI Cloud initiative. And our clouds are also ISO 20000 certified — ISO 20000-1 certified for software processes — and ISO 27001 and ISO 27017/18 certified for security processes. Our data centers are also TIA-942 certified. So significant effort and investment have gone into these data centers. Next. So this is where I think we've really valued the partnership with Mirantis. Mirantis has trained us on using the concepts of GitOps and infra-as-code, right, and automated deployments, and the toolchain that comes with the MCP Mirantis product. Right? So, um, one of the key things that has happened from a couple of years ago to today is that the time to deploy a new 100-node production cloud has decreased for us from about 55 days — the time it took to do it in 2015 — to now, where we're down to about five days to deploy a cloud after the bare metals are racked and stacked and the network, the physical network, is also configured, right? So after that, our automated pipelines can deploy a 100-node cloud in five days flat, which is a massive deal for a company that's adding bare metals to their infrastructure this fast, right? It helps us utilize our investment, our assets, really well. The time it takes to deploy a cloud control plane for us is about 19 hours. It takes us two hours to deploy a compute rack, and it takes us three hours to deploy a storage rack. Right? And we really leverage the reclass model of MCP. We've configured the reclass model to suit almost every type of cloud that we have, right, and we've kept it fairly generic. It can be, um, tailored to deploy any type of cloud, any type of storage node, or any type of compute node. And it just helps us automate our deployments by putting every configuration, everything that we have, into Git, into using infra-as-code, right. Plus, MCP also comes with pipelines that help us run automated tests, automated validation pipelines, on our cloud. We also have Tempest pipelines running every few hours — every three hours, if I recall correctly — which run integration tests on our clouds to make sure the clouds are running properly, right; that is also automated. The reclass model and the pipelines help us automate day-two operations and changes as well. There are very few sev-1s now, compared to a few years ago. It's very rare — it's actually the exception — and that may be because of mainly some user error as opposed to a cloud problem. We also have contributed auto-healing: we integrate Prometheus and Alertmanager with our event-driven automation framework.
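As an illustration of that Alertmanager-to-automation pattern, here is a minimal, hypothetical sketch of a webhook receiver that maps firing alerts to remediation actions, in the style of the rule engine he describes. The alert names and remediation functions are invented; a real handler would call out to Salt, StackStorm, or the OpenStack APIs rather than print.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Invented remediations, for illustration only.
    def restart_service(labels):
        print(f"would restart {labels.get('service')} on {labels.get('instance')}")

    def evacuate_host(labels):
        print(f"would evacuate VMs from {labels.get('instance')}")

    # Alert name -> action, playing the role of a rule table.
    RUNBOOK = {
        "ControlPlaneServiceDown": restart_service,
        "HypervisorUnreachable": evacuate_host,
    }

    class AlertHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            # An Alertmanager webhook carries a list of alerts, each
            # with labels that include 'alertname'.
            for alert in payload.get("alerts", []):
                labels = alert.get("labels", {})
                action = RUNBOOK.get(labels.get("alertname"))
                if action and alert.get("status") == "firing":
                    action(labels)
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 9009), AlertHandler).serve_forever()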
Currently, we're using StackStorm, but you could use any event-driven automation framework out there, since it integrates really well. So it helps us step away from constantly monitoring our cloud control planes and clouds. So this has been very fruitful for us, and it has actually upskilled our engineers, also, to use these best-in-class practices like GitOps, like infra-as-code. So just to give you a flavor of what stacks our internal teams are running on these clouds: um, we have a multi-data-center OpenStack cloud, and on top of that, teams use automation tools like Terraform to create the environments. They also create their own Kubernetes clusters, and you'll see — you'll see in the next slide also — that we have our own Kubernetes-as-a-service platform that we built on top of OpenStack to give developers, development teams in Jio, um, easy-to-create and easy-to-destroy Kubernetes environments, and sometimes they leverage the Murano application catalog, using Heat templates, to deploy their own stacks. Jio is largely a microservices-driven, um, company, so all of our applications are microservices, multiple microservices talking to each other, and they leverage DevOps toolsets, like Ansible, Prometheus, StackStorm for auto-healing and event-driven automation. Big data stacks are there already — Kafka, Apache Spark, Cassandra — and other tools as well. We're also now using service meshes. Almost everything now uses a service mesh: sometimes they use Linkerd, sometimes they're experimenting with Istio. So this is where we are, and we have multiple clients with Jio, so our products and services are available on Android, iOS, our own JioPhone, Windows, Macs, web, mobile web, based off them. So on any client you can use our services, and there's no lock-in. It's always open with Jio, so our services have to be really good to compete on the open Internet. And last but not least, I'd love to talk to you about our container journey. So a couple of years ago, almost every team started experimenting with containers and Kubernetes, and there was demand for us as a platform team — they were demanding Kubernetes-as-a-service from us, a managed service. Right? So we built it. For us, it was much more comfortable, much easier, to build on top of OpenStack with cloud APIs, as opposed to doing this on bare metal. So we built a fully managed Kubernetes-as-a-service, which was, ah, a self-service portal where you could click a button and get a Kubernetes cluster deployed in your own tenant. And the things that we did are quite interesting. We also handle some Jio-specific use cases. So, because it was a managed service, we deployed the CI/CD nodes in our own management tenant, right? We didn't give the customer access to the CI/CD nodes. We deployed the master, control-plane, nodes in the tenant's — our customer's — tenant, but we didn't give them access to the masters; we didn't give them the SSH keys. The workers, though, our customers had full access to. And because people in Jio were learning and experimenting, we gave them full admin rights to Kubernetes as well. So that really helped onboard Kubernetes within Jio, and now we have like 15 different teams running multiple Kubernetes clusters on top of our OpenStack clouds. We even handle the fact that there are non-prod IP pools — separate non-prod IP pools and separate production IP pools in Jio. The sketch below shows roughly what that placement logic might look like.
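This is a hypothetical sketch, not the actual portal code, of the tenant-isolation and IP-pool rules just described: CI/CD nodes stay in the provider's management tenant, control-plane nodes land in the customer's tenant without customer SSH access, workers are fully accessible, and the IP pool is chosen by environment. All tenant and pool names are invented.

    from dataclasses import dataclass

    @dataclass
    class NodeSpec:
        role: str           # "cicd" | "master" | "worker"
        tenant: str         # OpenStack project to deploy into
        customer_ssh: bool  # does the customer get SSH access?
        ip_pool: str

    def plan_cluster(customer_tenant: str, env: str) -> list:
        # Separate IP pools for prod and non-prod, as described above.
        pool = "prod-ip-pool" if env == "prod" else "nonprod-ip-pool"
        return [
            NodeSpec("cicd", "management-tenant", False, pool),
            NodeSpec("master", customer_tenant, False, pool),  # no SSH keys handed out
            NodeSpec("worker", customer_tenant, True, pool),   # full customer access
        ]

    if __name__ == "__main__":
        for node in plan_cluster("team-a-tenant", "nonprod"):
            print(node)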
So you could create these clusters in whatever environment: a non-prod environment with more open access, or a prod environment with more limited access. So we had to handle these Jio-specific cases as well in this Kubernetes-as-a-service. So on the whole, I think, because of the isolation OpenStack provides, it made a lot of sense for us to do Kubernetes-as-a-service on top of OpenStack. We even did it on bare metal, but not many people use the Kubernetes-as-a-service environments on bare metal, because it is just so much easier to work with cloud APIs to provision virtual machines and run these clusters. That's it from me. I think I've said a mouthful, and now I'd love to have your questions. If you want to reach out to me, my email is mayank.kapoor@ril.com, and you can also message me on Twitter at @mayankkapoor. So thank you, and it was a pleasure talking to you, Andre. Let me hear your questions.

Published Date : Sep 14 2020



Bill Manning, Woodforest National Bank | ZertoCON 2018


 

>> Narrator: Live from Boston, Massachusetts. It's theCUBE, covering ZertoCON 2018. Brought to you by Zerto. >> This is theCUBE, I'm Paul Gillin, we're on the ground here in Boston for ZertoCON 2018, and joining me is Bill Manning, who's in infrastructure operations at Woodforest National Bank. Now I was not familiar with Woodforest National Bank, but I understand that regular visitors to Walmart in the South probably are. You're the Walmart bank, I understand. >> That's what a lot of people like to call us. >> Your many branches are located in Walmarts, in other words. And based in Houston, which has been no stranger to disasters lately. >> Correct. >> The topic of IT resilience, very much fresh on your mind. What does IT resilience mean in terms of your operations at Woodforest? >> We need to be very resilient in terms of natural disasters, hurricanes mostly. So, in order to prepare ourselves for that, we migrate 70% of our infrastructure between data centers every six months. When hurricane season starts, we migrate away from Houston. When it's done, we migrate back. >> Now, why the migration strategy? Why move between data centers? Why not just settle on one data center that's out of harm's way, if you will? >> Well, there's no one data center that's 100% out of harm's way, so you need to make sure that if one data center goes down, you can always come up at your backup, or your primary, data center. >> Now how did you become a Zerto customer? I understand you were one of their first — their first customer? >> We were their first customer. We had Kashya before them, and then RecoverPoint. Kashya was the precursor to Zerto. And when we were having issues with our replication appliance, we decided to look into Zerto, and we bought, implemented, and turned on Zerto fairly quickly. So we were the first customer, and then we were the first customer that was using it. We actively utilized it to run a migration. And so far everything's going great. We love the product, and it works very well for us. >> Now being the first customer of a product is typically thought of as a risky proposition. What pushed you over the tipping point? >> We had an appliance that kept failing on us, and the last failure was the straw that broke the back. So we already had Zerto in — I believe it was an alpha, possibly a beta — test implementation, and when that straw finally broke, we turned off the appliance and we turned on Zerto. And it was very seamless. And yes, there were headaches. We had issues with it. But a lot of the support tickets, all of the enhancement requests — a lot of those have our name on them, because we utilized it. >> So you're doing the cloud migration every six months. What are some of the operational issues that you have to take into account when you're moving that size of processing load a couple hundred miles away? Or maybe Austin, maybe 100 miles away. >> We do it so often it's kind of second nature to us now. But we know the pain points: if you do it regularly, you know what happened last time. Hopefully you documented it. And you know what can happen this time. And a lot of times it's firewall rules, it's what did we do at our current data center that we forgot to do at our other data center in preparation for migration. So our biggest pain point is making sure we don't forget: oh hey, we did something here, let's make sure to replicate it over and do the same thing at our other data center. >> How has the role of backup changed over the time you've been using Zerto?
It's not, really — you don't have the luxury of point-in-time backups anymore. It's a continuous process, isn't it? >> Well, we don't utilize Zerto for backups. We utilize another product for our primary backup system, and we are a bank. We have seven-year retention policies. So there are certain things that we have to keep on tape or on disk for a certain number of years, and Zerto doesn't immediately offer that to us. However, we do utilize Zerto in a kind of pseudo-backup process. If we need to recover a file that got deleted accidentally, I can either spend an hour using our other process or 10 minutes using Zerto. So we just pop into Zerto, use the journal file-level recovery, and there you go. >> You had — being in Houston, you had a number of major storms in recent years. Are there any stories you can share with us about how you have managed to stay up and running during those storms? >> Our first storm, our first big storm right after Katrina, was Rita. And when Rita came through, we didn't have what we have today. We ended up powering down non-critical items and making sure our critical applications were up and running. And luckily we didn't lose much power. We didn't lose any networking. Whereas during Harvey, we lost some networking for a week or two. The difference was, we had already moved everything to our secondary data center, well away from the hurricane. And sure, one of our redundant paths was down, but our other one was up. We still had connectivity and we were doing great. So in terms of where we progress, hurricane season is what we are mainly concerned with. So we utilize Zerto, we move everything over. So if our primary data center in Houston goes down, we're mildly affected and customers shouldn't even notice. >> How does this make your business more resilient? I mean, are there actually business benefits for your customers >> Of course. >> of the business being this resilient? >> If we're a bank and our ATMs go down, and we can't get them back up for a few days, our customers notice. If we're a bank and our primary systems go down and you can't take money out of your account for — I believe the timeframe is 72 hours — the federal government comes in and they own us now. We are no longer a bank, because we failed at providing services for our customers for an extended period of time. And that's unacceptable. So to mitigate that we use a DR strategy. We use a business continuity plan. And we make sure that if something were to happen — even if it were outside of hurricane season, or if we were in hurricane season and we had an issue at our other data center — Zerto allows us to bring everything back up within minutes. And because we do it regularly, we're not going to have as many headaches as someone that just says, "Oh, well, we've implemented Zerto but we don't utilize it. We run a few test failovers to make sure that we can actually migrate, but we don't bring anything up and run production load." We run production load every six months using Zerto. So that's how we get around making sure that we're highly available and we don't get taken over by the government. >> I hear a lot of talk, Bill, these days about digital transformation. How real is that to what Woodforest is doing? How are you changing the way you do business? >> I think it's old hat for us. I mean, we've already gone digital.
When I first started, we had couriers picking up paperwork from the branches and taking it to centralized processing locations, and running everything manually. Now it's all digital. And that was partially thanks to 9/11: there was proof work banks couldn't run for weeks because airports were down, and because of that, banks had already started going digital. So we already have digital transactions. Now if you write a check at Walmart, instead of taking a few days, or a week or two, to clear, it clears that day or the next day, because it's all digital. Walmart went digital, we went digital. Most banks are already going digital or have already gone digital. So, people ask, and we're mostly already there. We're already digital. >> How about cloud? What's your road map when it comes to using multiple cloud providers? >> We're definitely looking into it; they give us a lot of benefit. They give us a lot of services that we can... >> You've got a lot of flexibility. >> Flexibility, sure. Flexibility in doing things that we can't necessarily do ourselves. Right now we're taking baby steps. We're not throwing full production load into the cloud. We're looking at, let's put our development environment up there and see what it can provide for our developers. And so far they're enjoying what the opportunities, the possibilities, can be. So we're looking forward to hopefully this year getting them up and running in the cloud and enjoying all of the benefits from there. And after that, once we get some development done in there, then we'll probably start seeing some production applications being put into the cloud — some sort of SaaS server offering, probably. >> Well, hurricane season is coming up in just a couple of months. I wish you the best >> Thank you so much. >> this season. Bill Manning, thanks very much for joining us. >> Thank you very much, I appreciate it. >> We'll be right back from ZertoCON, I'm Paul Gillin, this is theCUBE. (upbeat tech beats)

Published Date : May 23 2018



Michelle Boockoff-Bajdek, IBM, & John Bobo, NASCAR | IBM Think 2018


 

>> Voiceover: Live from Las Vegas, it's theCUBE. Covering IBM Think 2018. Brought to you by IBM. >> Welcome back to Las Vegas everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, and this is day three of our wall-to-wall coverage of IBM Think 2018, the inaugural event. IBM's consolidated a number of events here; I've been joking there's too many people to count, I think it's between 30 and 40,000 people. Michelle Boockoff-Bajdek is here, she's the president of >> Michelle: Good job. >> Global Marketing, Michelle B-B for short >> Yes. >> Global Marketing, business solutions at IBM, and John Bobo, who's the managing director of Racing Ops at NASCAR. >> Yes. >> We're going to have a fun conversation. >> I think it's going to be a fun one. >> Michelle B-B, start us off. Why is weather such a hot topic, so important? >> Well, I think as you know, we're both about to fly potentially into a snowstorm tonight — I mean, weather is a daily habit. 90% of all U.S. adults consume weather on a weekly basis, and at The Weather Company, which is part of IBM, right, an IBM business, we're helping millions of consumers anticipate, prepare for, and plan — not just in the severe, but also in the every day: do I carry an umbrella, what do I do? We are powering Apple, Facebook, Yahoo, Twitter. So if you're getting your weather from those applications, you're getting it from us. And on average we're reaching about 225 million consumers. But what's really interesting is that while we've got this tremendous consumer business and we're helping those millions of consumers, we're also helping businesses out there, right? So, there isn't a business on the planet — and we'll talk a little bit about NASCAR — that isn't impacted by weather. I would argue that it is incredibly essential to business. There's something like a half a trillion dollars in economic impact from weather alone, every single year, here in the U.S. And most businesses don't yet have a weather strategy, so what's really important is that we help them understand how to take weather insights and turn them into a business advantage. >> Well, let's talk about that. How does NASCAR take weather insights and turn them into a business advantage? What are you guys doing, John, with weather? >> Oh, it's very important to us. We're 38 weekends a year; we're probably one of the longest seasons in professional sports. We produce over 500 hours of live television just in our top-tier series a year. We're a sport, we're a business, we're an entertainment property, and we're entertaining hundreds of thousands of people live at an event, and then millions of people at home who are watching us over the internet or watching us on television through our broadcast partners. Unlike other racing properties — you know, open-wheeled racing, it's a lot of downforce — they can race in the rain. A 3,500-pound stock car cannot race in the rain; it's highly dangerous. So rain alone is going to postpone the event, delay the event, and that's a multi-million dollar decision. And so what we're doing with the Weather Channel is we're getting real-time information, hyper-localized models designed around our event, within four kilometers of every venue — and remember, we're in a different venue every week across the country. Last week we were in the Los Angeles market; next week we're going to be in Martinsville, Virginia.
It also provides us a level of consistency at the places we go, and knowing we can pick up the phone and get decision support from the weather desk, and they know us, and they care as much about us as we do, and what we need to do, it's been a big help and a big confidence builder. >> So NASCAR fans are some of the most fanatic fans, a fan of course is short for fanatic, they love the sport, they show up, what happens when, give us the before and after, before you kind of used all this weather data, what was it like before, what was the fan impact, and how is that different now? >> Going back when NASCAR first started getting on television, the solution was we would send people out in cars with payphone money, and they would watch for weather in all directions, and then they would call it in, say, "the storm's about ten miles out." Then when it went to the bulky cell phones that were about as big as a bread box, we would give them to them and then they would be in the pullover lane and kind of follow the storm in and call Race Control to let us know. It has three big impacts. First is safety of the fans and safety of our competitors through every event. The second impact is on the competition itself, whether the grip of the tires, the engine temperature, how the wind is going to affect the aerodynamics of the car, and the third is on the industry. We've got a tremendous industry that travels, and what we're going to have to do to move that industry around by a different day, so we couldn't be more grateful that we're able to make smarter decisions. >> So how do you guys work together, maybe talk about that. >> Well, so, you know, I think, I think one of the things that John alluded to that's so important is that they do have the most accurate, precise data out there, right, so when we talk about accuracy, a single model, or the best model in the world isn't going to produce the best forecast, it's actually a blend of 162 models, and we take the output of that and we're providing a forecast for anywhere that you are, and it's specific to you and it's weighted differently based on where you are. And then we talk about that precision, which gets down to that four kilometer space that John alluded to that is so incredibly important, because one of the things that we know is that weather is in fact hyper-local, right, if you are within two kilometers of a weather-reporting station, your weather report is going to be 15% more accurate. Now think about that for a minute from an analytics perspective, right, when you can get 15% more accuracy, >> Dave: Huge. >> You're going to have a much better output, and so that precision point is important, and then there's the scale. John talks about having 38 race weekends and sanctioning 1,200 races, but also we've got millions of consumers that are asking us for weather data on a daily basis, producing 25 billion forecasts for all of those folks, again, 2.2 billion locations around the world at that half a kilometer resolution. And so what this means is that we're able to give John and his Racing Operations Team the best, most accurate forecast on the planet, and not just the raw data, but the insight, so what we've built, in partnership with Flagship, one of our business partners, is the NASCAR Weather Track, and this is a race operations dashboard that is very specific to NASCAR and the elements that are most important to them.
What they need to see right there, visible, and then when they have a question they can call right into a meteorologist who is on-hand 24/7 from the Wednesday leading up to a race all the way till that checkered flag goes down, providing them with any insight, right, so we always have that human intelligence, because while the forecast is great you always want somebody making that important decision that is in fact a multi-million dollar one. >> John, can you take us through the anatomy of how you get from data to insight, I mean it's an amazing application here, you got the edge, you got the cloud, you got your operations center, when do you start, how do you get the data, who analyzes the data, how do you get to decision making? >> Yeah, we're data hogs in every aspect of the sport, whether it's our cars, our events, or even our own operations. We work with Flagship Solutions, and they do a fantastic job with the weather dashboard and the different solutions. We start getting reports on Monday for the week ahead. And so we're tracking it, and in fact it adds some drama to the event, especially as we're looking at the forecast for Martinsville this upcoming weekend. We work closely with our broadcast partners, our track partners, you know, we don't own the venues of where we go, we're the sports league, so we're working with broadcast, we're working with our track venues, and then we're also working with everyone in the industry and all our other official sponsors, and people that come to an event to have a great time. Sometimes we're making those decisions in the event itself, while the race is going on, as things may pop up, pop-up storms, things may change, but whether it's their advice on how to create our policy and be smarter about that, whether it's the real-time data that makes us smarter, or just being able to pick up a phone and discuss the various variables that we see occurring in a situation and what we need to do live, it's important to us. >> So, has it changed the way, sometimes you might have to cancel an event, obviously, so has it changed the way in which you've made that decision and communicate to your customers, your fans? >> Yeah, absolutely, it's made a lot of us smarter, going into a weekend. You know, weather is something everybody has an opinion about, and so we feel grateful that we can get our opinion from the best place in the country. And then what we do with that is we can either move an event up, we can delay an event, and it helps us make those smarter decisions, and we never like to cancel an event cause it's important to the competition, we may postpone it a day, run a race on a Monday or Tuesday, but you know a 10, 11:00 race on a Monday is not the best viewership for our broadcast partners. So, we're doing everything we can to get the race in that day. >> Yeah so it's got to be a pretty radical condition to cancel a race, but then. >> Yes, yeah. >> So what you'll do is you'll predict, you'll pull out the yellow flag, everybody slows down, and you'll be able to anticipate when you're going to have to do that, is that right, versus having people, you know. >> Right. >> Calling on the block phones? >> Or if we say, let's start the race two hours early, and that's good for the track, it's good for our broadcast partners, and we can get the race in before the bad weather occurs, we're going to do that.
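As a side note for readers curious about the mechanics: the blend of 162 models Michelle describes, weighted by location, is at heart a weighted ensemble. A minimal sketch of the idea in Python (the model outputs and skill weights are invented for illustration, not The Weather Company's actual method):

```python
# Toy location-weighted ensemble: blend several models' forecasts,
# giving more weight to models that have been more accurate here.
model_temps = {"model_a": 61.2, "model_b": 63.5, "model_c": 59.8}  # deg F
local_skill = {"model_a": 0.90, "model_b": 0.60, "model_c": 0.75}  # invented

total = sum(local_skill.values())
weights = {m: s / total for m, s in local_skill.items()}  # sums to 1.0

blend = sum(weights[m] * t for m, t in model_temps.items())
print(f"blended forecast: {blend:.1f} F")  # ~61.3 F for these numbers
```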
>> Okay, and then, so, where are you taking this thing, Michelle, I mean, what is John asking you for, how are you responding, maybe talk about the partnership a little bit. >> Well, you know, yes, so I, you know the good news is that we're a year into this partnership and I think it's been fantastic, and our goal is to continue to provide the best weather insights, and I think what we will be looking at are things like scenario plannings, so as we start to look longer-range, what are some of the things that we can do to better anticipate not just the here and now, but how do we plan for scenarios? We've been looking at severe weather playbooks too, so what is our plan for severe weather that we can share across the organization? And then, you know, I think too, it's understanding potentially how can we create a better fan experience, and how can we get some of this weather insight out to the fans themselves so that they can see what's going to happen with the weather and better prepare. It's, you know, NASCAR is such a tremendous partner for us because they're showcasing the power of these weather insights, but there isn't a business on the planet that isn't impacted, I mean, you know we're working with 140 airlines, we're working with utility companies that need to know how much power is going to be consumed on the grid tomorrow, they don't care as much about a temperature, they want to know how much power is going to be consumed, so when you think about the decisions that these companies have to make, yes the forecast is great and it's important, but it really is what are the insights that I can derive from all of that data that are going to make a big difference? >> Investors. >> Oh, absolutely. >> Airlines. >> Airlines, utility companies, retailers. >> Logistics. >> Logistics, you know, if you think about insurance companies, right, there's a billion dollars in damage every single year from hail. Property damage, and so when you think about these organizations where every single, we just did this great weather study, and I have to get you a copy of it, but the Institute of Business Value at IBM did a weather study and we surveyed a thousand C-level executives, every single one of them said that weather had an impact on at least one revenue metric, every single, 100%. And 93% of them said that if they had better weather insights it would have a positive impact on their business. So we know that weather's important, and what we've got to do is really figure out how we can help companies better harness it, but nobody's doing it better than these guys. >> I want to share a stat that we talked about off-camera. >> Sure. >> 'Cause we all travel, I was telling a story, my daughter got her flight canceled, very frustrating, but I like it because at least you now know you can plan at home, but you had a stat that it's actually improved the situation, can you share that? >> Right, yeah, so nobody likes to have their flights canceled, right, and we know that 70% of all airline delays are due to weather, but one of the things we talked about is, you know, is our flight going to go out? Well airlines are now operating with a greater degree of confidence, and so what they're doing is they trust the forecast more. So they're able to cancel flights sooner, and by doing so, and I know nobody really likes to have their flight canceled, but by doing so, when we know sooner, we're now able to return those airlines to normal operations even faster, and reduce cancellations in total by about 11%. 
That's huge. And so I think that when you look at the business impact that these weather insights can have across all of these industries, it's just tremendous. >> So if you're a business traveler, you're going to be better off in the long run. >> That's right, I promise. >> So John, I have to ask you about the data science, when IBM bought the weather company a big part of the announcement was the number of data scientists that you guys brought to the table. There's an IOT aspect as well, which is very important. But from a data science standpoint, how much do you lean on IBM for the data science, do you bring your own data scientists to the table, how do they collaborate? >> No no, we lean totally on them, this is their expertise. Nobody's going to be better at it in the world than they are, but, you know, we know that at certain times past data may be more predictive, we know that at different times different data sets show different things and they show so much, we want to have cars race, we want to concentrate on officiating a race, putting on the best entertainment we can for sports fans, it's a joy to look at their data and pick up the phone and not have to figure this out for myself. >> Yeah, great. Well John, Michelle, thanks so much for coming. >> Thank you. >> I'll give you the last word, Michelle, IBM Think, the weather, make a prediction, whatever you like. >> Well, I just have to say, for all of you who are heading home tonight, I'm keeping my fingers crossed for you, so good luck there. And if you haven't, this is the one thing I have to say, if you haven't had the opportunity to go to a NASCAR race, please do so, it is one of the most exciting experiences around. >> Oh, and I want to mention, I just downloaded this new app. Storm Radar. >> Oh yes, please do. >> Storm radar. So far, I mean I've only checked it out a little bit, but it looks great. Very high ratings, 13,600 people have rated it, it's a five rating, five stars, you should check it out. >> Michelle: I love that. >> Storm Radar. >> John: It is good isn't it. >> And just, just check it out on your app store. >> So, thank you guys, >> Michelle: Love that. Thank you so much. >> Really appreciate it. And thank you for watching, we'll be right back right after this short break, you're watching theCUBE live from Think 2018. (light jingle)
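A small worked example of the decision arithmetic behind the schedule calls John describes (all numbers are invented for illustration): compare the expected cost of holding the schedule against the known cost of acting on the forecast.

```python
# Invented expected-cost comparison for a race-schedule decision.
p_rain = 0.6             # forecast probability of rain in the window
cost_postpone = 4.0e6    # cost if rain forces a postponement ($)
cost_move_early = 1.5e6  # cost of moving the start time up ($)

expected_hold = p_rain * cost_postpone  # 2.4M expected loss if we wait
decision = "move start earlier" if cost_move_early < expected_hold else "hold schedule"
print(decision)  # -> move start earlier
```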

Published Date : Mar 21 2018


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
NASCAR | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Michelle Boockoff-Bajdek | PERSON | 0.99+
Dave | PERSON | 0.99+
Michelle | PERSON | 0.99+
John Bobo | PERSON | 0.99+
15% | QUANTITY | 0.99+
Apple | ORGANIZATION | 0.99+
90% | QUANTITY | 0.99+
1,200 races | QUANTITY | 0.99+
Weather Channel | ORGANIZATION | 0.99+
3,500 pound | QUANTITY | 0.99+
Institute of Business Value | ORGANIZATION | 0.99+
70% | QUANTITY | 0.99+
162 models | QUANTITY | 0.99+
Los Angeles | LOCATION | 0.99+
First | QUANTITY | 0.99+
U.S. | LOCATION | 0.99+
Last week | DATE | 0.99+
Flagship | ORGANIZATION | 0.99+
third | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
five stars | QUANTITY | 0.99+
Yahoo | ORGANIZATION | 0.99+
13,600 people | QUANTITY | 0.99+
next week | DATE | 0.99+
93% | QUANTITY | 0.99+
140 airlines | QUANTITY | 0.99+
Facebook | ORGANIZATION | 0.99+
Monday | DATE | 0.99+
100% | QUANTITY | 0.99+
millions | QUANTITY | 0.99+
Martinsville | LOCATION | 0.99+
millions of people | QUANTITY | 0.99+
over 500 hours | QUANTITY | 0.99+
Wednesday | DATE | 0.99+
38 race weekends | QUANTITY | 0.99+
Tuesday | DATE | 0.99+
four kilometers | QUANTITY | 0.99+
Michelle B-B | PERSON | 0.99+
Martinsville, Virginia | LOCATION | 0.99+
half a kilometer | QUANTITY | 0.99+
one | QUANTITY | 0.98+
40,000 people | QUANTITY | 0.98+
two kilometers | QUANTITY | 0.98+
about 11% | QUANTITY | 0.98+
tonight | DATE | 0.98+
hundreds of thousands of people | QUANTITY | 0.98+
tomorrow | DATE | 0.98+
four kilometer | QUANTITY | 0.98+
Twitter | ORGANIZATION | 0.98+
millions of consumers | QUANTITY | 0.98+
about ten miles | QUANTITY | 0.98+
a day | QUANTITY | 0.97+
second impact | QUANTITY | 0.97+
half a trillion dollars | QUANTITY | 0.97+
both | QUANTITY | 0.96+
multi-million dollar | QUANTITY | 0.96+
single model | QUANTITY | 0.96+

Nenshad Bardoliwalla & Pranav Rastogi | BigData NYC 2017


 

>> Announcer: Live from Midtown Manhattan it's theCUBE. Covering Big Data New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. >> OK, welcome back everyone we're here in New York City it's theCUBE's exclusive coverage of Big Data NYC, in conjunction with Strata Data going on right around the corner. It's our third day talking to all the influencers, CEO's, entrepreneurs, people making it happen in the Big Data world. I'm John Furrier co-host of theCUBE, with my co-host here Jim Kobielus who is the Lead Analyst at Wikibon Big Data. Nenshad Bardoliwalla. >> Bar-do-li-walla. >> Bardo. >> Nenshad Bardoliwalla. >> That guy. >> Okay, done. Of Paxata, Co-Founder & Chief Product Officer, it's a tongue twister, third day, being from Jersey, it's hard with our accent, but thanks for being patient with me. >> Happy to be here. >> Pranav Rastogi, Product Manager, Microsoft Azure. Guys, welcome back to theCUBE, good to see you. I apologize for that, third day blues here. So Paxata, we had your partner on, Prakash. >> Prakash. >> Prakash. Really a success story, you guys have done really well, fun to watch you guys go from launch to success. Obviously your relationship with Microsoft super important. Talk about the relationship because I think this is really where people can start connecting the dots. >> Sure, maybe I'll start and I'll be happy to get Pranav's point of view as well. Obviously Microsoft is one of the leading brands in the world and there are many aspects of the way that Microsoft has thought about their product development journey that have really been critical to the way that we have thought about Paxata as well. If you look at the number one tool that's used by analysts the world over it's Microsoft Excel. Right, there isn't even anything that's a close second. And if you look at the evolution of what Microsoft has done in many layers of the stack, whether it's the end user computing paradigm that Excel provides to the world. Whether it's all of their recent innovation in both hybrid cloud technologies as well as the big data technologies that Pranav is part of managing. We just see a very strong synergy between trying to combine the usage by business consumers of being able to take advantage of these big data technologies in a hybrid cloud environment. So there's a very natural resonance between the 2 companies. We're very privileged to have Microsoft Ventures as an investor in Paxata and so the opportunity for us to work with one of the great brands of all time in our industry was really a privilege for us. >> Yeah, and that's the corporate side, so that wasn't actually part of it. So it's a different part of Microsoft which is great. You also have a business opportunity with them. >> Nenshad: We do. >> Obviously the data science problem that we're seeing is that they need to get the data faster. All that prep work seems to be the big issue. >> It does and maybe we can get Pranav's point of view from the Microsoft angle. >> Yeah so to sort of continue what Nenshad was saying, you know data prep in general is sort of a key core competency which is problematic for lots of users, especially around the knowledge that you need to have in terms of the different tools you can use. Folks who are very proficient will do ETL or data-preparation-like scenarios using one of the computing engines like Hive or Spark.
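As an illustration of that code-proficient path, a minimal PySpark data-prep sketch might look like the following (the file, columns, and cleaning rules are hypothetical, not a Paxata or HDInsight artifact):

```python
# Minimal PySpark data-prep sketch: the kind of hand-written ETL a
# code-proficient user would do, and that GUI-driven tools abstract away.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-prep").getOrCreate()

raw = spark.read.csv("customers.csv", header=True, inferSchema=True)

clean = (
    raw.dropDuplicates(["customer_id"])                # de-duplicate keys
       .filter(F.col("email").contains("@"))           # drop malformed emails
       .withColumn("state", F.upper(F.trim("state")))  # normalize casing
       .fillna({"segment": "unknown"})                 # impute missing labels
)

clean.write.mode("overwrite").parquet("customers_clean")
```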
That's good, but there's this big audience out there who like an Excel-like interface, which is easy to use, a very visually rich graphical interface where you can drag and drop and click through. And the idea behind all of this is how quickly can I get insights from my data faster. Because in a big data space, it's volume, variety and velocity. So data is coming at a very fast rate. It's changing, it's growing. And if you spend a lot of time just doing data prep you're losing the value of data, or the value of data would change over time. So what we're trying to do with Paxata on HDInsight is enable these users to use Paxata and get insights from data faster by solving the key problems of doing data prep. >> So data democracy is a term that we've been kicking around, you guys have been talking about as well. What does it actually mean? Because we've been teasing it out the first two days here at theCUBE and BigData NYC. It's clear the community aspect of data is growing, almost on a similar path as you're seeing with open source software. That genie's out of the bottle. Open source software, tier one, it won, it's only growing exponentially. That same paradigm is moving into the data world where the collaboration is super important, in this data democracy, what does that actually mean and how does that relate to you guys? >> So the perspective we have starts with something that one of our customers said, that is, there is no democracy without certain degrees of governance. We all live in a democracy. And yet we still have rules that we have to abide by. There are still policies that society needs to follow in order for us to be successful citizens. So when a lot of folks hear the term democracy they really think of the wild wild west, you know. And a lot of the analytic work in the enterprise does have that flavor to it, right, people download stuff to their desktop, they do a little bit of massaging of the data. They email that to their friend, their friend then makes some changes and next thing you know we have what some folks affectionately call spreadmart hell. But if you really want to democratize the technology you have to wrap not only the user experience, like Pranav described, into something that's consumable by a very large number of folks in the enterprise. You have to wrap that with the governance and collaboration capabilities so that multiple people can work off the same data set, so that you can apply the permissions for who is allowed to share with each other and under what circumstances they are allowed to share. Under what circumstances are you allowed to promote data from one environment to another? It may be okay for someone like me to work in a sandbox but I cannot push that to a database or HDFS or Azure BLOB storage unless I actually have the right permissions to do so. So I think what you're seeing is that, in general, technology always goes on this trend towards democratization. Whether it's the phone, whether it's the television, whether it's the personal computer and the same thing is happening with data technologies and certainly companies like. >> Well, Pranav, we were talking about this when you were on theCUBE yesterday. And I want to get your thoughts on this. The old way to solve the governance problem was to put data in silos. That was easy, I'll just put it in a silo and take care of it and access control was different.
But now the value of the data is about cross-pollinating and making it freely available, horizontally scalable, so that it can be used. But at the same time you need to have a new governance paradigm. So, you've got to democratize the data by making it available, addressable and usable for apps. At the same time there are also the concerns on how you make sure it doesn't get in the wrong hands and so on and so forth. >> Yeah, and what is also very common regarding open source projects in the cloud is how do you ensure that the user trying to access this open source project or run it has the right credentials and is authorized. So, the benefit that you sort of get in the cloud is there's a centralized authentication system. There's Azure Active Directory, so you know most enterprises would have Active Directory users, who are then authorized to either access maybe this cluster, or maybe this workload, and they can run this job, and that sort of goes further down to the data layer as well, where we have active policies which then describe what user can access what files and what folders. So if you think about the end-to-end scenario there is authentication and authorization happening for the entire system, on what user can access what data. And part of what Paxata brings in the picture is like how do you visualize this governance flow as data is coming from various sources, how do you make sure that the person who has access to data does have access to data, and the one who doesn't cannot access data. >> Is that the problem with data prep, is it just that piece of it? What is the big problem with data prep, I mean, that seems to be, everyone keeps coming back to the same problem. What is causing all this data prep? >> People not buying Paxata, it's very simple. >> That's a good one. Check out Paxata, they're going to solve your problems, go. But seriously, there seems to be the same hole people keep digging themselves into. They gather their stuff, then next thing they're in the same hole, they've got to prepare all this stuff. >> I think the previous paradigms for doing data preparation tie exactly to the data democracy themes that we're talking about here. If you only have a very siloed group of people in the organization with very deep technical skills but don't have the business context for what they're actually trying to accomplish, you have this impedance mismatch in the organization between the people who know what they want and the people who have the tools to do it. So what we've tried to do, and again you know taking a page out of the way that Microsoft has approached solving these problems both in the past and in the present, is to say look, we can actually take the tools that once were only in the hands of the, you know, shamans who know how to utter the right incantations and instead move that into the common folk who actually. >> The users. >> The users themselves who know what they want to do with the data. Who understand what those data elements mean. So if you were to ask the Paxata point of view, why have we had these data prep problems? Because we've separated the people who had the tools from the people who knew what they wanted to do with it. >> So it sounds to me, correct me if this is the wrong term, that what you offer in your partnership is it basically a broad curational environment for knowledge workers.
You know, to sift and sort and annotate shared data with the lineage of the data preserved in essentially a system of record that can follow the data throughout its natural life. Is that a fair characterization? >> Pranav: I would think so yeah. >> You mention, Pranav, the whole issue of how one visualizes or should visualize this entire chain of custody, as it were, for the data, is there any special visualization paradigm that you guys offer? Now Microsoft, you've made a fairly significant investment in graph technology throughout your portfolio. I was at Build back in May and Satya and the others just went to town on all things to do with Microsoft Graph, will that technology somehow, at some point, now or in the future, be reflected in this overall capability that you've established here with your partner here Paxata? >> I am not sure. So far, I think what you've talked about is some Graph capabilities introduced from the Microsoft Graph that's sort of one extreme. The other side of Graph exists today: as a developer you can do some Graph-based queries. So you can go to Cosmos DB which has a Gremlin API for Graph-based queries, so I don't know how. >> I'll get right to the question. What are the Paxata benefits with HDInsight? How does that work, just quickly, explain for the audience. What is that solution, what are the benefits? >> So the solution is you get a one-click install of Paxata on HDInsight, and the benefit is, for a user persona who's not, sort of, used to big data or Hadoop, they can use a very familiar GUI-based experience to get their insights from data faster without having any knowledge of how Spark works or Hadoop works. >> And what does the Microsoft relationship bring to the table for Paxata? >> So I think it's a couple of things. One is Azure is clearly growing at an extremely fast pace. And a lot of the enterprise customers that we work with are moving many of their workloads to Azure and these cloud-based environments. Especially for us, the unique value proposition of a partner who truly understands the hybrid nature of the world. The idea that everything is going to move to the cloud or everything is going to stay on premise is too simplistic. Microsoft understood that from day one. That data would be in any and all of those different places. And they've provided enabling technologies for vendors like us. >> I'll just say it to maybe you're too coy to say it, but the bottom line is you have an Excel-like interface. They have Office 365, their users are going to instantly love that interface because it's an easy-to-use, Excel-like interface, it's not the Excel interface per se. >> Similar. >> Metaphor, graphical user interface. >> Yes it is. >> It's clean and it's targeted at the analyst role or user. >> That's right. >> That's going to resonate in their install base. >> And combined with a lot of these new capabilities that Microsoft is rolling out from a big data perspective. So HDInsight has a very rich portfolio of runtime engines and capabilities. They're introducing new data storage layers whether it's ADLS or Azure BLOB storage, so it's really a nice way of us working together to extract and unlock a lot of the value that Microsoft. >> So, here's the tough question for you, open source projects, I see Microsoft, comments were 'hell froze over' because Linux is now part of their DNA, which was a comment I saw at the event this week in Orlando, but they're really getting behind open source. From Open Compute, it's just clearly new DNA.
They're into it. How are you guys working together in open source and what's the impact to developers because now that's only one cloud, there are other clouds out there so data's going to be an important part of it. So open source, together, you guys working together on that and what's the role for the data? >> From an open source perspective, Microsoft plays a big role in embracing open source technologies and making sure that they run reliably in the cloud. And part of that value prop that we provide in sort of Azure HDInsight is making sure that you can run these open source big data workloads reliably in the cloud. So you can run open source like Apache Spark, Hive, Storm, Kafka, R Server. And the hard part about running open source technology in the cloud is how do you fine tune it, and how do you configure it, how do you run it reliably. And that's sort of what we bring in from a cloud perspective. And we also contribute back to the community based on sort of what we learned by running these workloads in the cloud. And we believe you know in the broader ecosystem customers will sort of have a mixture of these combinations in their solution. They'll be using some of the Microsoft solutions, some open source solutions, some solutions from the ecosystem; that's how we see our customers' solutions sort of being built today. >> What's the big advantage you guys have at Paxata? What's the key differentiator for why someone should work with you guys? Is it the automation? What's the key secret sauce for you guys? >> I think it's a couple of dimensions. One is I think we have come the closest in the industry to getting a user experience that matches the Excel target user. A lot of folks are attempting to do the same but the feedback we consistently get is that when the Excel user uses our solution they just, they get it. >> Was there a design criteria, was that from the beginning how you were going to do this? >> From day one. >> So you engineer everything to make it as simple as like Excel. >> We want people to use our system; they shouldn't be coding, they shouldn't be writing scripts. They just need to be able. >> Good Excel, you just do good macros though. >> That's right. >> So simple things like that right. >> But the second is being able to interact with the data at scale. There are a lot of solutions out there that make the mistake in our opinion of sampling very tiny amounts of data and then asking you to draw inferences and then publish that to batch jobs. Our whole approach is to smash the batch paradigm and actually bring as much into the interactive world as possible. So end users can actually point and click on 100 million rows of data, instead of the million that you would get in Excel, and get an instantaneous response. Versus designing a job in a batch paradigm and then pushing it through the batch. >> So it's interactive data profiling over vast corpuses of data in the cloud. >> Nenshad: Correct. >> Nenshad Bardoliwalla thanks for coming on theCUBE appreciate it, congratulations on Paxata and Microsoft Azure, great to have you. Good job on everything you do with Azure. I want to give you guys props, seeing the growth in the market, and the investment's been going well, congratulations. Thanks for sharing, keep coverage here in BigData NYC, more coming after this short break.

Published Date : Sep 28 2017


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Jim Kobielus | PERSON | 0.99+
Jersey | LOCATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Excel | TITLE | 0.99+
2 companies | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
New York City | LOCATION | 0.99+
Orlando | LOCATION | 0.99+
Nenshad | PERSON | 0.99+
Bardo | PERSON | 0.99+
Nenshad Bardoliwalla | PERSON | 0.99+
third day | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Office 365 | TITLE | 0.99+
yesterday | DATE | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
100 million rows | QUANTITY | 0.99+
BigData | ORGANIZATION | 0.99+
Paxata | ORGANIZATION | 0.99+
Microsoft Ventures | ORGANIZATION | 0.99+
Pranav Rastogi | PERSON | 0.99+
first two days | QUANTITY | 0.99+
one | QUANTITY | 0.98+
One | QUANTITY | 0.98+
million | QUANTITY | 0.98+
second | QUANTITY | 0.98+
Midtown Manhattan | LOCATION | 0.98+
Spark | TITLE | 0.98+
this week | DATE | 0.98+
first | QUANTITY | 0.97+
theCUBE | ORGANIZATION | 0.97+
one click | QUANTITY | 0.97+
Prakash | PERSON | 0.97+
Azure | TITLE | 0.97+
May | DATE | 0.97+
Wikibon Big Data | ORGANIZATION | 0.96+
Hadoop | TITLE | 0.96+
Hive | TITLE | 0.94+
today | DATE | 0.94+
Strata Data | ORGANIZATION | 0.94+
Pranav | PERSON | 0.93+
NYC | LOCATION | 0.93+
one cloud | QUANTITY | 0.93+
2017 | DATE | 0.92+
Apache | ORGANIZATION | 0.9+
Paxata | TITLE | 0.9+
Graph | TITLE | 0.89+
Pranav | ORGANIZATION | 0.88+

Dr. Jisheng Wang, Hewlett Packard Enterprise, Spark Summit 2017 - #SparkSummit - #theCUBE


 

>> Announcer: Live from San Francisco, it's theCUBE covering Sparks Summit 2017 brought to you by Databricks. >> You are watching theCUBE at Sparks Summit 2017. We continue our coverage here talking with developers, partners, customers, all things Spark, and today we're honored now to have our next guest Dr. Jisheng Wang who's the Senior Director of Data Science at the CTO Office at Hewlett Packard Enterprise. Dr. Wang, welcome to the show. >> Yeah, thanks for having me here. >> All right and also to my right we have Mr. Jim Kobielus who's the Lead Analyst for Data Science at Wikibon. Welcome, Jim. >> Great to be here like always. >> Well let's jump into it. First I want to ask about your background a little bit. We were talking about the organization, maybe you could do a better job (laughs) of telling me where you came from and you just recently joined HPE. >> Yes. I actually recently joined HPE earlier this year through the Niara acquisition, and now I'm the Senior Director of Data Science in the CTO Office of Aruba. Actually, Aruba you probably know like two years back, HP acquired Aruba as a wireless networking company, and now Aruba takes charge of the whole enterprise networking business in HP which is over three billion in annual revenue every year now. >> Host: That's not confusing at all. I can follow you (laughs). >> Yes, okay. >> Well all I know is you're doing some exciting stuff with Spark, so maybe tell us about this new solution that you're developing. >> Yes, actually most of my experience with Spark goes back to the Niara time, so Niara was a three and a half year old startup that invented, reinvented the enterprise security using big data and data science. So what is the problem we solved, we tried to solve in Niara is called UEBA, user and entity behavioral analytics. So I'll just try to be very brief here. Most of the traditional security solutions focus on detecting attackers from outside, but what if the origin of the attacker is inside the enterprise, say Snowden, what can you do? So you probably heard of many cases today of employees leaving the company and stealing lots of the company's IP and sensitive data. So UEBA is a new solution that tries to monitor the behavioral change of the enterprise users to detect both this kind of malicious insider and also the compromised user. >> Host: Behavioral analytics. >> Yes, so it sounds like it's a native analytics which we run like a product. >> Yeah and Jim you've done a lot of work in the industry on this, so any questions you might have for him around UEBA? >> Yeah, give us a sense for how you're incorporating streaming analytics and machine learning into that UEBA solution and then where Spark fits into the overall approach that you take? >> Right, okay. So actually when we started three and a half years back, the first version when we developed the first version of the data pipeline, we used a mix of Hadoop, YARN, Spark, even Apache Storm for different kinds of stream and batch analytics work. But soon after with increased maturity and also the momentum from this open source Apache Spark community, we migrated all our stream and batch, you know the ETL and data analytics work into Spark. And it's not just Spark. It's Spark, Spark Streaming, MLlib, the whole ecosystem of that. So there are at least a couple advantages we have experienced through this kind of transition. The first thing which really helped us is the simplification of the infrastructure and also the reduction of the DevOps efforts there.
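For readers who want to picture the consolidated pipeline Dr. Wang describes, here is a minimal Spark Structured Streaming sketch of a per-user behavioral aggregation; the Kafka endpoint, topic, message format, and window sizes are invented for illustration, not Niara's actual pipeline (running it also needs the spark-sql-kafka package on the classpath):

```python
# Minimal sketch: stream auth events, count distinct servers each user
# touches per window -- the raw material for a behavioral baseline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ueba-sketch").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "auth-events")                # placeholder topic
       .load())

# Assume each message value is simply "user,server" for brevity.
events = raw.select(
    F.split(F.col("value").cast("string"), ",")[0].alias("user"),
    F.split(F.col("value").cast("string"), ",")[1].alias("server"),
    "timestamp",
)

profile = (events
           .withWatermark("timestamp", "10 minutes")
           .groupBy(F.window("timestamp", "10 minutes"), "user")
           .agg(F.approx_count_distinct("server").alias("servers_touched")))

query = (profile.writeStream.outputMode("update")
         .format("console").start())
query.awaitTermination()
```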
>> So simplification around Spark, the whole stack of Spark that you mentioned. >> Yes. >> Okay. >> So for the Niara solution originally, we supported, even here today, we supported both the on-premise and the cloud deployment. For the cloud we also supported the public cloud like AWS, Microsoft Azure, and also private cloud. So you can understand with, if we have to maintain a stack of different like open source tools over this kind of many different deployments, the overhead of doing the DevOps work to monitor, alarm, and debug this kind of infrastructure over different deployments is very hard. So Spark provides us a unified platform. We can integrate the streaming, you know batch, real-time, near real-time, or even long-term batch jobs all together. So that heavily reduced both the expertise and also the effort required for the DevOps. This is one of the biggest advantages we experienced, and certainly we also experienced something like the scalability, performance, and also the convenience for developers to develop new applications, all of this, from Spark. >> So are you using the Spark structured streaming runtime inside of your application? Is that true? >> We actually use Spark in the streaming processing. So like in the UEBA solutions, the first thing is collecting a lot of the data from different data sources, network data, cloud application data. So when the data comes in, the first thing is the streaming job for the ETL, to process the data. Then after that, we also develop different-frequency analytics jobs, like one-minute, 10-minute, one-hour, and one-day jobs, on top of that. And even recently we have started some early adoption of deep learning into this, how to use deep learning to monitor the user behavior change over time, especially after a user gives notice, is the user going to access more servers or download some of the sensitive data? So all of this requires very complex analytics infrastructure. >> Now there were some announcements today here at Spark Summit by Databricks of adding deep learning support to their core Spark code base. What are your thoughts about the deep learning pipelines API that they announced this morning? It's new news, I'll understand if you don't, haven't digested it totally, but you probably have some good thoughts on the topic. >> Yes, actually this is also news for me, so I can just speak from my current experience. How to integrate deep learning into Spark actually was a big challenge so far for us because what we used so far, the deep learning piece, we used TensorFlow. And certainly most of our other stream and data massaging or ETL work is done by Spark. So in this case, there are a couple of ways to manage this. One is to set up two separate resource pools, one for Spark, the other one for TensorFlow, but in our deployment there are some very small on-premise deployments which have only like a four-node or five-node cluster. It's not efficient to split resources in that way. So we are actually also looking for some closer integration between deep learning and Spark. So one thing we looked at before is called TensorFlow on Spark which was open sourced a couple months ago by Yahoo. >> Right. >> So maybe this is certainly more exciting news for the Spark team to develop this native integration. >> Jim: Very good. >> Okay and we talked about the UEBA solution, but let's go back to a little broader HPE perspective. You have this concept called the intelligent edge, what's that all about?
>> So that's a very cool name. Actually, coming back a little bit, I come from an enterprise background, and enterprise applications actually lag behind consumer applications in terms of the adoption of the new data science technology. So there are some native challenges for that. For example, collecting and storing large amounts of this sensitive enterprise data is a huge concern, especially in European countries. Also, for a similar reason, when you develop enterprise applications you normally lack good quantity and quality of training data. So these are some native challenges when you develop enterprise applications, but even despite this, HPE and Aruba recently made several acquisitions of analytics companies to accelerate the adoption of analytics into different product lines. Actually that intelligent edge comes from this IOT, which is internet of things, which is expected to be the fastest growing market in the next few years here. >> So are you going to be integrating the UEBA behavioral analytics and Spark capability into your IOT portfolio at HP? Is that a strategy or direction for you? >> Yes. Yes, for the big picture that certainly is. So you can think, I think some of the Gartner Report expected the number of IOT devices is going to grow to over 20 billion by 2020. Since all of these IOT devices are connected to either intranet or internet, either through wire or wireless, so as a networking company, we have the advantage of collecting data and even taking some actions in the first place. So the idea of this intelligent edge is we want to turn each of these IOT devices, the small IOT devices like IP cameras, like those motion detectors, all of these small devices, into distributed sensors for the data collection and also inline actors to make some real-time or even close to real-time decisions. For example, the behavior anomaly detection is a very good example here. If an IOT device is compromised, if the IP camera has been compromised and used to steal your internal data, we should detect and stop that in the first place. >> Can you tell me about the challenges of putting deep learning algorithms natively on resource constrained endpoints in the IOT? That must be really challenging to get them to perform well considering that there may be just a little bit of memory or flash capacity or whatever on the endpoints. Any thoughts about how that can be done effectively and efficiently? >> Very good question >> And at low cost. >> Yes, very good question. So there are two aspects to this. First is the global training of the intelligence, which is not going to be done on each of the devices. In that case, each of the devices is more like a sensor for the data collection. So we are going to collect the data sent to the cloud, and build this giant pool of computing resources to train the classifier, to train the model, but when we train the model, we are going to ship the model, so the inference and the detection of those behavioral anomalies really happen on the endpoint. >> Do the training centrally and then push the trained algorithms down to the edge devices. >> Yes. But even then, there is the second aspect, as you said: for some of the devices, say people try to put those small chips in a spoon in a hospital to make it more intelligent, you cannot put even just the detection piece there. So we are also looking at some new technology.
I know Caffe recently announced and released some lightweight deep learning models. Also there's some, you probably know, there's some improvement from the chip industry. >> Jim: Yes. >> How to optimize the chip design for this kind of more analytics-driven task there. So we are looking at all of these different areas now. >> We have just a couple minutes left, and Jim you get one last question after this, but I got to ask you, what's on your wishlist? What do you wish you could learn or maybe what did you come to Spark Summit hoping to take away? >> I've always treated myself as a technical developer. One thing I am very excited about these days is the emergence of new technology, like Spark, like TensorFlow, like Caffe, even BigDL which was announced this morning. So this is something like the first goal, when I come to these big industry events, I want to learn the new technology. And the second thing is mostly to share our experience adopting this new technology and also learn from other colleagues from different industries, how people change life, disrupt the old industry by taking advantage of the new technologies here. >> The community's growing fast. I'm sure you're going to receive what you're looking for. And Jim, final question? >> Yeah, I heard you mention DevOps and Spark in the same context, and that's a huge theme we're seeing, more DevOps is being wrapped around the lifecycle of development and training and deployment of machine learning models. If you could have your ideal DevOps tool for Spark developers, what would it look like? What would it do in a nutshell? >> Actually it's still, I just share my personal experience. In Niara, we actually developed a lot of the in-house DevOps tools like for example, when you run a lot of different Spark jobs, stream, batch, like one-minute batch versus one-day batch jobs, how do you monitor the status of those workflows? How do you know when the data stops coming? How do you know when the workflow failed? Monitoring is a big thing, and then alarming, when you have some failure or something wrong, how do you alarm on it, and also debugging is another big challenge. So I certainly see the growing effort from both Databricks and the community on different aspects of that. >> Jim: Very good. >> All right, so I'm going to ask you for kind of a soundbite summary. I'm going to put you on the spot here, you're in an elevator and I want you to answer this one question. Spark has enabled me to do blank better than ever before. >> Certainly, certainly. I think as I explained before, it helps a lot, from the developers to even the start-ups trying to disrupt some industry. It helps a lot, and I'm really excited to see this deep learning integration and all the different roadmap items, you know, down the road. I think they're on the right track. >> All right. Dr. Wang, thank you so much for spending some time with us. We appreciate it and go enjoy the rest of your day. >> Yeah, thanks for being here. >> And thank you for watching theCUBE. We're here at Spark Summit 2017. We'll be back after the break with another guest. (easygoing electronic music)
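As a footnote to the train-centrally, infer-at-the-edge pattern Dr. Wang describes, here is a minimal sketch using today's TensorFlow tooling; the data and model are invented, and TensorFlow Lite is an illustrative stand-in for the lightweight-model technology he alludes to, not HPE's or Niara's implementation:

```python
# Sketch: train a small model centrally, then export a compact
# artifact that can be shipped to a resource-constrained endpoint.
import numpy as np
import tensorflow as tf

# Invented behavioral features -> anomaly label, for illustration only.
x = np.random.rand(1000, 8).astype("float32")
y = (x.sum(axis=1) > 4.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=3, verbose=0)  # the "global training" step

# Convert to TensorFlow Lite: a small flat buffer the edge device can
# load to run inference locally, without the training stack.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("edge_model.tflite", "wb") as f:
    f.write(converter.convert())
```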

Published Date : Jun 6 2017


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Jim | PERSON | 0.99+
HPE | ORGANIZATION | 0.99+
HP | ORGANIZATION | 0.99+
10 minute | QUANTITY | 0.99+
one hour | QUANTITY | 0.99+
one minute | QUANTITY | 0.99+
Wang | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
Yahoo | ORGANIZATION | 0.99+
Jisheng Wang | PERSON | 0.99+
Niara | ORGANIZATION | 0.99+
first version | QUANTITY | 0.99+
one day | QUANTITY | 0.99+
two aspects | QUANTITY | 0.99+
Jim Kobielus | PERSON | 0.99+
Hewlett Packard Enterprise | ORGANIZATION | 0.99+
First | QUANTITY | 0.99+
Caffe | ORGANIZATION | 0.99+
Spark | TITLE | 0.99+
Spark | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
each | QUANTITY | 0.99+
three and a half year | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Sparks Summit 2017 | EVENT | 0.99+
first | QUANTITY | 0.99+
DevOps | TITLE | 0.99+
2020 | DATE | 0.99+
second thing | QUANTITY | 0.99+
Aruba | ORGANIZATION | 0.98+
Snowden | PERSON | 0.98+
two years back | DATE | 0.98+
first thing | QUANTITY | 0.98+
one last question | QUANTITY | 0.98+
AWS | ORGANIZATION | 0.98+
over 20 billion | QUANTITY | 0.98+
one question | QUANTITY | 0.98+
UEBA | TITLE | 0.98+
today | DATE | 0.98+
Spark Summit | EVENT | 0.97+
Microsoft | ORGANIZATION | 0.97+
Spark Summit 2017 | EVENT | 0.96+
Apache | ORGANIZATION | 0.96+
three and a half years back | DATE | 0.96+
Databricks | ORGANIZATION | 0.96+
one day batch | QUANTITY | 0.96+
earlier this year | DATE | 0.94+
Aruba | LOCATION | 0.94+
One | QUANTITY | 0.94+
#SparkSummit | EVENT | 0.94+
One thing | QUANTITY | 0.94+
one thing | QUANTITY | 0.94+
European | LOCATION | 0.94+
Gartner | ORGANIZATION | 0.93+

Yuanhao Sun, Transwarp Technology - BigData SV 2017 - #BigDataSV - #theCUBE


 

>> Announcer: Live from San Jose, California, it's theCUBE, covering Big Data Silicon Valley 2017. (upbeat percussion music) >> Okay, welcome back everyone. Live here in Silicon Valley, San Jose, is the Big Data SV, Big Data Silicon Valley in conjunction with Strata Hadoop, this is theCUBE's exclusive coverage. Over the next two days, we've got wall-to-wall interviews with thought leaders, experts breaking down the future of big data, future of analytics, future of the cloud. I'm John Furrier with my co-host George Gilbert with Wikibon. Our next guest is Yuanhao Sun, who's the co-founder and CTO of Transwarp Technologies. Welcome to theCUBE. You were on theCUBE previously, 166 days ago, I noticed. But now you've got some news. So let's get the news out of the way. What are you guys announcing here, this week? >> Yes, so we are announcing 5.0, the latest version of Transwarp Hub. This version we would call probably a revolutionary product, because the first one is we embedded Kubernetes in our product, so we will allow people to isolate different kinds of workloads using Docker containers, and we also provide a scheduler to better support mixed workloads. And the second is, we are building a set of tools that allow people to build their warehouse. And then migrate from an existing or traditional data warehouse to Hadoop. And we are also providing people the capability to build a data mart, actually. It allows you to interactively query data. So we build a column store in memory and on SSD. And we totally rewrote the whole SQL engine. It is a very tiny SQL engine that allows people to query data very quickly. And so today that tiny SQL engine is like about five to ten times faster than Spark 2.0. And we also allow people to build cubes on top of Hadoop. And then, once the cube is built, the SQL performance, like the TPC-H performance, is about 100 times faster than the existing database, or existing Spark 2.0. So it's super-fast. Actually we found a Paralect customer, so they replaced their Teradata software to build a data mart. And we already migrated, say, 100 reports from Teradata to our product. So the promise is very good. And the third is we are providing tools for people to build machine learning pipelines and we are leveraging TensorFlow, MXNet, and also Spark for people to visualize the pipeline and to build the data mining workflows. So these are kind of like data science tools, it's very easy for people to use. >> John: Okay, so take a minute to explain, 'cus that was great, you got the performance there, that's the news out of the way. Take a minute to explain Transwarp, your value proposition, and when people engage you as a customer. >> Yuanhao: Yeah so, people choose our product and the major reason is our compatibility with Oracle, DB2, and Teradata SQL syntax, because you know, they have built a lot of applications onto those databases, so when they migrate to Hadoop, they don't want to rewrite the whole program, so our compatibility, SQL compatibility is a big advantage to them, so this is the first one. And we also support full ACID and distributed transactions on Hadoop. So a lot of applications can be migrated to our product with few modifications or without any changes. So this is our first advantage. The second is because we are providing even the best streaming engine, which is actually derived from Spark. So we apply this technology to IOT applications.
You know, in IOT pretty soon they need a very low latency but they also need very complicated models on top of streams. So that's why we are providing full SQL support and machine learning support on top of streaming events. And we are also using event-driven technology to reduce the latency to five to ten milliseconds. So this is the second reason people choose our product. And then today we are announcing 5.0, and I think people will find more reasons to choose our product. >> So you have the compatibility SQL, you have the tooling, and now you have the performance. So kind of the triple threat there. So what's the customer saying, when you go out and talk with your customers, what's the view of the current landscape for customers? What are they solving right now, what are the key challenges and pain points that customers have today? >> We have customers in more than 12 vertical segments, and in different verticals they have different pain points, actually so. Take one example: in financial services, the main pain point for them is to migrate existing legacy applications to Hadoop, you know they have accumulated a lot of data, and the performance is very bad using a legacy database, so they need high performance Hadoop and Spark to speed up the performance, like reports. But in another vertical, like in logistics and transportation and IOT, the pain point is to find a very low latency streaming engine. At the same time, they need a very complicated programming model to write their applications. Another example, like in the public sector, they actually need a very complicated and large-scale search engine. They need to build analytical capability on top of the search engine. They can search the results and analyze the results at the same time. >> George: Yuanhao, as always, whenever we get to interview you on theCube, you toss out these gems, sort of like you know diamonds, like big rocks that under millions of years, and incredible pressure, have been squeezed down into these incredibly valuable, kind of, you know, valuable, sort of minerals with lots of goodness in them, so I need you to unpack that diamond back into something that we can make sense out of, or I should say, that's more accessible. You've done something that none of the Hadoop Distro guys have managed to do, which is to build databases that are not just decision support, but can handle OLTP, can handle operational applications. You've done the streaming, you've done what even Databricks can't do without even trying any of the other stuff, which is getting the streaming down to event at a time. Let's step back from all these amazing things, and tell us what was the secret sauce that let you build a platform this advanced? >> So actually, we are driven by our customers, and we do see the trends people are looking for, better solutions, you know there is a lot of pain to set up a Hadoop cluster to use the Hadoop technology. So that's why we found it's very meaningful and also very necessary for us to build a SQL database on top of Hadoop. Quite a lot of customers on the FSI side, they ask us to provide ACID so that transactions can be put on top of Hadoop, because they have to guarantee the consistency of their data. Otherwise they cannot use the technology. >> At the risk of interrupting, maybe you can tell us why others have built the analytic databases on top of Hadoop, to give the familiar SQL access, and obviously have a desire also to have transactions next to it, so you can inform a transaction decision with the analytics.
>> At the risk of interrupting, maybe you can tell us: others have built analytic databases on top of Hadoop to give familiar SQL access, and obviously there's a desire to have transactions next to it, so you can inform a transaction decision with the analytics. One of the questions is, how did you combine the two capabilities? I mean, it only took Oracle like 40 years. >> Right. So actually our transaction capability is only for analytics. This OLTP capability is not for short, transactional applications; it's for data warehouse kinds of workloads. >> George: Okay, so when you're ingesting. >> Yes, when you're ingesting, when you modify your data in batch, you have to guarantee consistency. That's the OLTP capability. But we are also building another distributed storage layer and distributed database that will provide full OLTP capability, meaning you can run concurrent transactions on that database; we are still developing that software right now. Today our product provides distributed transaction capability for people to build their warehouse. Quite a lot of people believe a data warehouse does not need transaction capability, but we found a lot of people modify their data in the data warehouse. They are loading data continuously, and tables like CRM tables, customer information, can change over time. Every day people need to update or change the data, and that's why we have to provide transaction capability in the data warehouse. >> George: Okay, and then tell us also, because the streaming problem is, you know, we're told that roughly two thirds of Spark deployments use streaming as a workload, and the biggest knock on Spark is that it can't process one event at a time; you've got to do a little batch. Tell us some of the use cases that can take advantage of doing one event at a time, and how you solved that problem. >> Yuanhao: Yeah, so the first use case we encountered is anti-fraud, or fraud detection, in financial services. Whenever you swipe your credit card, the bank needs to tell you whether the transaction is fraudulent within a few milliseconds. But if you are using Spark Streaming, it will usually take 500 milliseconds, so the latency is too high for that kind of application. That's why we have to provide event-at-a-time, meaning event-driven, processing to detect the fraud, so that we can interrupt the transaction within a few milliseconds. That's one kind of application. Others come from IoT: we have already put our streaming framework into a large manufacturing plant. They have to detect a malfunction of their equipment in a very short time, otherwise it may explode. If you are using Spark Streaming, just submitting your application takes hundreds of milliseconds, and by the time the detection finishes it usually takes a few seconds, which is too long for that kind of application. That's why we need a low-latency streaming engine. Now, you might say it would be okay to use Storm or Flink, right? The problem we found is that they require a very complicated programming model. Our customers want to solve equations on the streaming events, they need to do FFT transformations, and they are also asking to run linear regressions or neural networks on top of events. That's why we provide a SQL interface, and we also embed CEP capability into our streaming engine, so that you can use patterns to match events and send alerts. >> George: So, SQL to get a set of events, and maybe join some in the complex event processing, CEP, to say: does this fit a pattern I'm looking for? >> Yuanhao: Yes.
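To ground the event-at-a-time point: a micro-batch engine buffers events before processing, while an event-driven engine scores each event the moment it arrives. The toy sketch below illustrates that difference with a simple CEP-style rule; the threshold and pattern are invented for the example and are not Transwarp's actual engine.

```python
# Toy event-at-a-time fraud check: each card swipe is scored as it
# arrives, rather than waiting for a micro-batch to fill. The rule
# (more than 3 swipes within 10 seconds) is invented for illustration.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_SWIPES = 3
recent = defaultdict(deque)  # card_id -> timestamps of recent swipes

def on_event(card_id: str, timestamp: float) -> bool:
    """Return True if this swipe matches the fraud pattern."""
    window = recent[card_id]
    window.append(timestamp)
    # Drop swipes that have fallen out of the time window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_SWIPES  # the alert fires within this one call

# Each event is handled the moment it arrives -- latency depends only
# on this function, not on a batch interval.
for swipe in [("card-1", 0.0), ("card-1", 1.0), ("card-1", 2.0), ("card-1", 3.0)]:
    if on_event(*swipe):
        print("possible fraud:", swipe)
```

In a micro-batch system the fourth swipe would sit in a buffer until the batch interval elapsed; here the pattern match, and therefore the chance to interrupt the transaction, happens on the event itself.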
>> Okay, and so with the lightweight OLTP, that and any other new projects you're looking at, tell us perhaps the new use cases they'd be appropriate for. >> Yuanhao: Yeah, so that's our future product, actually. We are going to solve the problem of large-scale OLTP transactions. In China, with such a large population, organizations like the public sector or banks need to build highly scalable transaction systems that can support very high concurrent transaction volumes, and that's why we are building this kind of technology. In the past, people just divided transactions across multiple databases, like multiple Oracle instances or multiple MySQL instances. The problem is: if the application is simple, you can easily divide transactions over multiple database instances. But if the application is very complicated, especially when an ISV has already written the application against Oracle or another traditional database, it already depends on those transaction semantics. That's why we have to build the same kind of transaction system, so that we can support their legacy applications while scaling to hundreds of nodes and millions of transactions per second. >> George: On the transactional stuff? >> Yuanhao: Yes. >> Just correct me if I'm wrong, I know we're running out of time, but I thought Oracle only scales out when you're doing decision support work, not OLTP; that it can maybe stretch to ten nodes or something like that. Am I mistaken? >> Yuanhao: They can scale to 16, or up to 32, nodes. >> George: For transactional work? >> For transactional work, but that's the practical limit. Whereas Google F1 and Google Spanner can scale to hundreds of nodes. But the latency is higher than Oracle, because you have to use a distributed protocol to communicate with multiple nodes, so the latency is higher. >> On Google? >> Yes. >> On Google. The latency is higher on Google? >> 'Cause it has to go, like, all the way to Europe and back. >> Oracle or Google latency, you said? >> Google, because if you are using a two-phase commit protocol, you have to broadcast your request to multiple nodes and then wait for the feedback, so that means you have much higher latency, but it's necessary to maintain consistency. So in a distributed OLTP database the latency is usually higher, but the concurrency is also much higher, and the scalability is much better. >> George: So that's a problem where you've stretched beyond what Oracle's done. >> Yuanhao: Yes. Customers can tolerate the higher latency, but they need to scale to millions of transactions per second, and that's why we have to build a distributed database.
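Yuanhao's latency point follows directly from the protocol: two-phase commit needs a prepare round trip and a commit round trip to every participant before it can answer the client. Here is a toy coordinator, invented purely to show where the extra latency comes from; it is not any real database's implementation.

```python
# Toy two-phase commit coordinator, invented to show why a distributed
# OLTP database pays extra latency: every transaction costs two full
# network round trips (prepare, then commit) to all participants.

class Participant:
    def prepare(self, txn_id: str) -> bool:
        # A real node would write a prepare record and return its vote.
        return True

    def commit(self, txn_id: str) -> None:
        pass  # a real node would make the changes durable here

    def abort(self, txn_id: str) -> None:
        pass  # a real node would discard the prepared changes

def two_phase_commit(txn_id: str, participants: list[Participant]) -> bool:
    # Round trip 1: ask every node to prepare and collect the votes.
    if all(p.prepare(txn_id) for p in participants):
        # Round trip 2: everyone voted yes, so tell them all to commit.
        for p in participants:
            p.commit(txn_id)
        return True
    # Any "no" vote aborts the whole transaction on every node.
    for p in participants:
        p.abort(txn_id)
    return False

# With nodes spread across regions, each round trip is tens of
# milliseconds, so per-transaction latency rises even though many
# transactions can run concurrently across hundreds of nodes.
two_phase_commit("txn-1", [Participant() for _ in range(3)])
```

This is the trade Yuanhao describes: the broadcast-and-wait pattern raises latency per transaction, while the scale-out across participants is what buys the higher concurrency and throughput.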
>> George: Okay, for this reason we're going to have to have you back for, like, maybe five or ten consecutive segments, you know, maybe starting tomorrow. >> We're going to have to get you back for sure. Final question for you: what are you excited about, from a technology standpoint, in the landscape? As you look at open source, you're working with Spark, you mentioned Kubernetes, you have microservices, all the cloud. What are you most excited about right now in terms of new technology that's going to help simplify and scale, with low latency, the databases, the software? Because you've got IoT, you've got autonomous vehicles, you have all this data. What are you excited about? >> So actually, this technology already solves those problems for us, but I think the most exciting thing is that we see two trends. The first trend is that it's very exciting to see more computation frameworks coming out, like the AI frameworks: TensorFlow, MXNet, Torch, and tons of other machine learning frameworks. They are solving different kinds of problems, like facial recognition from video and images, or human-computer interaction using voice and audio. That is very exciting, I think. And the second is that we found it very exciting to combine these technologies together, and that's why we are using containers. We didn't use YARN, because it cannot support TensorFlow or the other frameworks; but if you are using containers and you have a good scheduler, you can schedule any kind of computation framework. So we find it very interesting to have these new frameworks and to combine them to solve different kinds of problems. >> John: Thanks so much for coming on theCUBE. It's an operating-system world we're living in now; it's a great time to be a technologist. Certainly the opportunities are out there, and we're breaking it down here inside theCUBE, live in Silicon Valley, with the best tech executives, thought leaders, and experts. I'm John Furrier with George Gilbert. We'll be right back with more after this short break. (upbeat percussive music)

Published Date : Mar 14 2017
