
Search Results for Framingham:

Warren Jackson, Dell Technologies & Scott Waller, CTO, 5G Open Innovation Lab | MWC Barcelona 2023


 

>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Hey, welcome back to the Fira in Barcelona. My name is Dave Vellante. I'm here with David Nicholson, day four of MWC '23. Show's winding down a little bit, but it's still pretty packed here. Lot of innovation, planes, trains, automobiles, and we're talking 5G all week, private networks, connected breweries. It's super exciting. Really happy to have Warren Jackson here as the Edge Gateway Product Technologist at Dell Technologies, and Scott Waller, the CTO of the 5G Open Innovation Lab. Folks, welcome to theCUBE. >> Good to be here. >> Really interesting stories that we're going to talk about. Let's start, Scott, with you, what is the Open Innovation Lab? >> So it was hatched three years ago. Ideated about a bunch of guys from Microsoft who ran startup ventures program, started the developers program over at Microsoft, if you're familiar with MSDN. And they came three years ago and said, how does CSPs working with someone like T-Mobile who's in our backyard, I'm from Seattle. How do they monetize the edge? You need a developer ecosystem of applications and use cases. That's always been the thing. The carriers are building the networks, but where's the ecosystem of startups? So we built a startup ecosystem that is sponsored by partners, Dell being one sponsor, Intel, Microsoft, VMware, Aspirant, you name it. The enterprise folks who are also in the connectivity business. And with that, we're not like a Y Combinator or a Techstars where it's investment first and it's all about funding. It's all about getting introductions from a startup who might have a VR or AI type of application or observability for 5G slicing, and bring that in front of the Microsoft's of the world, or the Intel's and the Dell's of the world that they might not have the capabilities to do it because they're still a small little startup with an MVP. So we really incubate. We're the connectors and build a network. We've had 101 startups over the last three years. They've raised over a billion dollars. And it's really valuable to our partners like T-Mobile and Dell, et cetera, where we're bringing in folks like Expedo and GenXComm and Firecell. Start up private companies that are around here they were cohorts from our program in the past. >> That's awesome because I've often, I mean, I've seen Dell get into this business and I'm like, wow, they've done a really good job of finding these guys. I wonder what the pipeline is. >> We're trying to create the pipeline for the entire industry, whether it's 5G on the edge for the CSPs, or it's for private enterprise networks. >> Warren, what's this cool little thing you got here? >> Yeah, so this is very unique in the Dell portfolio. So when people think of Dell, they think of servers laptops, et cetera. But what this does is it's designed to be deployed at the edge in harsh environments and it allows customers to do analytics, data collection at the edge. And what's unique about it is it's got an extended temperature range. There's no fan in this and there's lots of ports on it for data ingestion. So this is a smaller box Edge Gateway 3200. This is the product that we're using in the brewery. And then we have a bigger brother of this, the Edge Gateway 5200. So the value of it, you can scale depending on what your edge compute requirements are at the edge. >> So tell us about the brewery story. 
And you covered it, I know you were in the Dell booth, but it's basically an analog brewery. They're taking measurements and temperatures and then writing it down and then entering it in and somebody from your company saw it and said, "We can help you with this problem." Explain the story. >> Yeah, so Scott and I did a walkthrough of the brewery back in November timeframe. >> It's in Framingham, Mass. >> Framingham, Mass, correct. And basically, we talked to him, and we said, what keeps you guys up at night? What's a problem that we can solve? Very simple, a kind of a lower budget, didn't have a lot money to spend on it, but what problem can we solve that will realize great benefit for you? So we looked at their fermentation process, which was completely analog. Somebody was walking around with a clipboard looking at analog gauges. And what we did is we digitized that process. So what this did for them rather than being completely reactive, and by the time they realized there was something going wrong with the fermentation process, it's too late. A batch of scrap. This allowed them to be proactive. So anytime, anywhere on the tablet or a phone, they can see if that fermentation process is going out of range and do something about it before the batch gets scrapped. >> Okay. Amazing. And Scott, you got a picture of this workflow here? >> Yeah, actually this is the final product. >> Explain that. >> As Warren mentioned, the data is actually residing in the industrial side of the network So we wanted to keep the IT/OT separation, which is critical on the factory floor. And so all the data is brought in from the sensors via digital connection once it's converted and into the edge gateway. Then there's a snapshot of it using Telit deviceWISE, their dashboarding application, that is decoding all the digital readings, putting them in a nice dashboard. And then when we gave them, we realized another problem was they're using cheap little Chromebooks that they spill beer on once a week and throw them out. That's why they bought the cheap ones 'cause they go through them so fast. So we got a Dell Latitude Rugged notebook. This is a brand new tablet, but they have the dashboarding software. So no matter if they're out there on the floor, but because the data resides there on the factory they have access to be able to change the parameters. This one's in the maturation cycle. This one's in the crashing cycle where they're bringing the temperature back down, stopping the fermentation process, getting it ready to go to the canning side of the house. >> And they're doing all that from this dashboard. >> They're doing all from the dashboard. They also have a giant screen that we put up there that in the floor instead of walking a hundred yards back behind a whole bunch of machinery equipment from a safety perspective, now they just look up on the screen and go, "Oh, that's red. That's out of range." They're actually doing a bunch of cleaning and a bunch of other things right now, too. So this is real time from Boston. >> Dave: Oh okay. >> Scott: This is actually real time from Boston. >> I'm no hop master, but I'm looking at these things flashing at me and I'm thinking something's wrong with my beer. >> We literally just lit this up last week. So we're still tweaking a few things, but they're also learning around. This is a new capability they never had. Oh, we have the ability to alert and monitor at different processes with different batches, different brews, different yeast types. 
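The monitoring loop described here, per-tank temperature readings checked against the range for the batch's current stage, with an alert pushed to the dashboard or a phone when something drifts, reduces to a few lines of threshold logic once the gauges are digitized. The sketch below is purely illustrative Python, not the actual Telit deviceWISE configuration; the tank names, stage ranges, and the read_temperature/send_alert helpers are hypothetical stand-ins.

```python
import random
import time

# Hypothetical per-stage temperature ranges (degrees F); real recipe limits
# would come from the brewer, not from this sketch.
STAGE_RANGES = {
    "fermentation": (66.0, 72.0),
    "maturation": (50.0, 60.0),
    "crash": (33.0, 38.0),
}

# Hypothetical tank -> current stage assignment.
TANKS = {"FV1": "fermentation", "FV2": "maturation", "FV3": "crash"}


def read_temperature(tank_id: str) -> float:
    """Stand-in for a wired IO-Link sensor read; here we just simulate one."""
    return random.uniform(30.0, 75.0)


def send_alert(tank_id: str, temp: float, low: float, high: float) -> None:
    """Stand-in for pushing a notification to the dashboard or a phone."""
    print(f"ALERT {tank_id}: {temp:.1f}F outside {low}-{high}F")


def check_once() -> None:
    # Compare each tank's reading against the limits for its current stage.
    for tank_id, stage in TANKS.items():
        low, high = STAGE_RANGES[stage]
        temp = read_temperature(tank_id)
        if not low <= temp <= high:
            send_alert(tank_id, temp, low, high)


if __name__ == "__main__":
    # Poll every 30 seconds; a real gateway would run this continuously.
    while True:
        check_once()
        time.sleep(30)
```

The point is simply that once the readings are digital, a small amount of threshold logic replaces the clipboard walk and makes the process proactive rather than reactive.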
Then now they're also training and learning. And we're going to turn that into eventually a product that other breweries might be able to use. >> So back to the kind of nuts and bolts of the system. The device that you have here has essentially wifi antennas on the back. >> Warren: Correct. >> Pull that up again if you would, please. >> Now I've seen this, just so people are clear, there are also paddle 5G antennas that go on the other side. >> Correct. >> That's sort of the connection from the 5G network that then gets transmogrified, technical term guys, into wifi so the devices that are physically connected to the brew vats, don't know what they're called. >> Fermentation tanks. >> Fermentation tanks, thank you. Those are wifi. That's a wifi signal that's going into this. Is that correct? >> Scott: No. >> No, it's not. >> It's a hard wire. >> Okay, okay. >> But, you're right. This particular gateway. >> It could be wifi if it's hard wire. >> It could be, yes. Could be any technology really. >> This particular gateway is not outfitted with 5G, but something that was very important in this application was to isolate the IT network, which is on wifi and physically connected from the OT network, which is the 5G connection. So we're sending the data directly from the gateway up to the cloud. The two partners that we worked with on this project were ifm, big sensor manufacturer that actually did the wired sensors into an industrial network called IO-Link. So they're physically wired into the gateway and then in the gateway we have a solution from our partner Telit that has deviceWISE software that actually takes the data in, runs the analytics on it, the logic, and then visualizes that data locally on those panels and also up to their cloud, which is what we're looking at. So they can look at it locally, they're in the plant and then up in the cloud on a phone or a tablet, whatever, when they're at home. >> We're talking about a small business here. I don't know how many employees they have, but it's not thousands. And I love that you're talking about an IT network and an OT network. And so they wanted, it is very common when we talk about industrial internet of things use cases, but we're talking about a tiny business here. >> Warren: Correct. >> They wanted to separate those networks because of cost, because of contention. Explain why. >> Yeah, just because, I mean, they're running their ERP system, their payroll, all of their kind of the way they run their business on their IT network and you don't want to have the same traffic out on the factory floor on that network, so it was pretty important. And the other thing is we really, one of the things that we didn't want to do in this project is interrupt their production process at all. So we installed this entire system in two days. They didn't have to shut down, they didn't have to stop. We didn't have to interrupt their process at all. It was like we were invisible there and we spun the thing up and within two days, very simple, easy, but tremendous value for their business. >> Talk about new markets here. I mean, it's like any company that's analog that needs to go digital. It's like 99% of the companies on the planet. What are you guys seeing out there in terms of the types of examples beyond breweries? >> Yeah, I could talk to that. So I spent a lot of time over the last couple years running my own little IoT company and a lot of it being in agriculture. 
So like in Washington state, 70% of the world's hops is actually grown in Washington state. It's my hometown. But in the Ag producing regions, there's lack of connectivity. So there's interest in private networks because the carriers aren't necessarily deploying it. But because we have the vast amount of hops there's a lot of IPAs, a lot of hoppy IPAs that come out of Seattle. And with that, there's a ton of craft breweries that are about the same size, some are a little larger. Anheuser-Busch and InBev and Heineken they've got great IoT platforms. They've done it. They're mass scale, they have to digitize. But the smaller shops, they don't, when we talk about IT/OT separation, they're not aware of that. They think it's just, I get local broadband and I get wifi and one hotspot inside my facility and it works. So a little bit of it was the education. I have got years in IT/OT security in my background so that education and we come forward with a solution that actually does that for them. And now they're aware of it. So now when they're asking questions of other vendors that are trying to sell them some type of solution, they're inherently aware of what should be done so they're not vulnerable to ransomware attacks, et cetera. So it's known as the Purdue Model. >> Well, what should they do? >> We came in and keep it completely separated and educated them because in the end too we'll build a design guide and a starter kit out of this that other brewers can use. Because I've toured dozens of breweries in Washington, the exact same scenario, analog gauges, analog process, very manual. And in the end, when you ask the brewer, what do they want out of this? It keeps them up at night because if the temperature goes out of range, because the chiller fails, >> They ruined. >> That's $30,000 lost in beer. That's a lot to a small business. However, it's also once they start digitizing the data and to Warren's point, it's read-only. We're not changing any of the process. We augmented on top of their existing systems. We didn't change their process. But now they have the ability to look at the data and see batch to batch consistency. Quality doesn't always mean best, it means consistency from batch to batch. Every beer from exhibit A from yesterday to two months from now of the same style of beer should be the same taste, flavor, boldness, et cetera. This is giving them the insights on it. >> It's like St. Louis Buds, when we were kids. We would buy the St. Louis Buds 'cause they tasted better than the Merrimack Buds. And then Budweiser made them all the same. >> Must be an East coast thing. >> It's an old guy thing, Dave. You weren't born yet. >> I was in high school. Yeah, I was in high school. >> We like the hops. >> We weren't 21. Do me a favor, clarify OT versus IT. It's something we talk about all the time, but not everyone's familiar with that separation. Define OT for me. >> It's really the factory floor. You got IT systems that are ERP systems, billing, you're getting your emails, stuff like that. Where the ransomware usually gets infected in. The OT side is the industrial control network. >> David: What's the 'O' stand for? >> Operation. >> David: Operation? >> Yeah, the operations side. >> 'Cause some people will think objects 'cause we think internet of things. >> The industrial operations, think of it that way. >> But in a sense those are things that are connected. >> And you think of that as they are the safety systems as well. 
So a machine, if someone doesn't push the stop button, you'd think if there's a lot of traffic on that network, it isn't guaranteed that that stop button actually stops that blade from coming down, someone's going to lose their arm. So it's very tied to safety, reliability, low latency. It is crafted in design that it never touches the internet inherently without having to go through a security gateway which is what we did. >> You mentioned the large companies like InBev, et cetera. You're saying they're already there. Are they not part of your target market? Or are there ways that you can help them? Is this really more of a small to mid-size company? >> For this particular solution, I think so, yeah. Because the cost to entry is low. I mean, you talk about InBev, they have millions of dollars of budgets to spend on OT. So they're completely automated from top to bottom. But these little craft brewers, which they're everywhere in the US. Vermont, Washington state, they're completely manual. A lot of these guys just started in their garage. And they just scaled up and they got a cult kind of following around their beers. One thing that we found here this week, when you talk around edge and 5G and beer, those things get people excited. In our booth we're serving beer, and all these kind of topics, it brings people together. >> And it lets the little guy compete more effectively with the big giants. >> Correct. >> And how do you do more with less as the little guy is kind of the big thing and to Warren's point, we have folks come up and say, "Great, this is for beer, but what about wine? What about the fermentation process of wine?" Same materials in the end. A vessel of some sort, maybe it's stainless steel. The clamps are the same, the sensors are the same. The parameters like temperature are key in any type of fermentation. We had someone talking about olive oil and using that. It's the same sanitary beverage style equipment. We grabbed sensors that were off the shelf and then we integrated them in and used the set of platforms that we could. How do we rapidly enable these guys at the lowest possible cost with stuff that's at the shelf. And there's four different companies in the solution. >> We were having a conversation with T-Mobile a little earlier and she mentioned the idea of this sounding scary. And this is a great example of showing that in fact, at a relatively small scale, this technology makes a lot of sense. So from that perspective, of course you can implement private 5G networks at an industrial scale with tens of millions of dollars of investment. But what about all of the other things below? And that seems to be a perfect example. >> Yeah, correct. And it's one of the things with the gateway and having flexibility the way Dell did a great job of putting really good modems in it. It had a wide spectrum range of what bands they support. So being able to say, at a larger facility, I mean, if Heineken wants to deploy something like this, oh, heck yeah, they probably could do it. And they might have a private 5G network, but let's say T-Mobile offers a private offering on their public via a slice. It's easy to connect that radio to it. You just change the sims. >> Is that how the CSPs fit here? How are they monetized? >> Yeah, correct. So one of our partners is T-Mobile and so we're working with them. We've got other telco partners that are coming on board in our lab. And so we'll do the same thing. 
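To make the IT/OT separation and Purdue Model discussion above concrete: the policy is essentially an allow-list in which sensor traffic flows only into the gateway, the gateway publishes read-only data outward, and nothing from the IT network or the internet reaches the factory floor directly. The snippet below is a minimal illustration of that idea with made-up zone names; real deployments enforce it with VLANs, firewalls, and the gateway itself rather than application code.

```python
# Purdue-style zones, simplified: lower levels sit closer to the machines.
ZONES = {
    "OT_sensors": 0,   # IO-Link sensors, PLCs, safety systems
    "OT_gateway": 1,   # edge gateway collecting and analyzing data
    "IT_office": 3,    # ERP, payroll, email
    "cloud": 4,        # dashboards viewed from a phone or at home
}

# Explicit allow-list of (source, destination) flows; everything else is denied.
ALLOWED_FLOWS = {
    ("OT_sensors", "OT_gateway"),  # wired sensor data into the gateway
    ("OT_gateway", "cloud"),       # gateway publishes read-only telemetry
    ("IT_office", "cloud"),        # office traffic never touches OT
}


def flow_permitted(src: str, dst: str) -> bool:
    """Return True only for explicitly allowed zone-to-zone traffic."""
    return (src, dst) in ALLOWED_FLOWS


assert flow_permitted("OT_sensors", "OT_gateway")
assert not flow_permitted("IT_office", "OT_sensors")  # IT cannot reach the floor
assert not flow_permitted("cloud", "OT_gateway")      # nothing pushes back into OT
```

The read-only, outbound-only posture in the last two assertions is the same one described above: the OT side never touches the internet except through the security gateway.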
We're going to take this back and put it in the lab and offer it up as others because the baseline building blocks or Lego blocks per se can be used in a bunch of different industries. It's really that starter point of giving folks the idea of what's possible. >> So small manufacturing, agriculture you mentioned, any other sort of use cases we should tune into? >> I think it's environmental monitoring, all of that stuff, I see it in IoT deployments all over the world. Just the simple starter kits 'cause a farmer doesn't want to get sold a solution, a platform, where he's got to hire a bunch of coders and partner with the big carriers. He just wants something that works. >> Another use case that we see a lot, a high cost in a lot of these places is the cost of energy. And a lot of companies don't know what they're spending on electricity. So a very simple energy monitoring system like that, it's a really good ROI. I'm going to spend five or $10,000 on a system like this, but I'm going to save $20,000 over a year 'cause I'm able to see, have visibility into that data. That's a lot of what this story's about, just giving visibility into the process. >> It's very cool, and like you said, it gets people excited. Is it a big market? How do you size it? Is it a big TAM? >> Yeah, so one thing that Dell brings to the table in this space is people are buying their laptops, their servers and whatnot from Dell and companies are comfortable in doing business with Dell because of our model direct to customer and whatnot. So our ability to bring a device like this to the OT space and have them have that same user experience they have with laptops and our client products in a ruggedized solution like this and bring a lot of partners to the table makes it easy for our customers to implement this across all kinds of industries. >> So we're talking to billions, tens of billions. Do we know how big this market is? What's the TAM? I mean, come on, you work for Dell. You have to do a TAM analysis. >> Yes, no, yeah. I mean, it really is in the billions. The market is huge for this one. I think we just tapped into it. We're kind of focused in on the brewery piece of it and the liquor piece of it, but the possibilities are endless. >> Yeah, that's tip of the spear. Guys, great story. >> It's scalable. I think the biggest thing, just my final feedback is working and partnering with Dell is we got something as small as this edge gateway that I can run a Packet Core on and run a 5G standalone node and then have one of the small little 5G radios out there. And I've got these deployed in a farm. Give the farmer an idea of what's possible, give him a unit on his tractor, and now he can do something that, we're providing connectivity he had never had before. But as we scale up, we've got the big brother to this. When we scale up from that, we got the telco size units that we can put. So it's very scalable. It's just a great suite of offerings. >> Yeah, outstanding. Guys, thanks for sharing the story. Great to have you on theCUBE. >> Good to be with you today. >> Stop by for beer later. >> You know it. All right, Dave Vellante for Dave Nicholson and the entire CUBE team, we're here live at the Fira in Barcelona MWC '23 day four. Keep it right there. (upbeat music)
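The energy-monitoring ROI mentioned above is simple payback arithmetic. Using the rough figures quoted, a $5,000 to $10,000 system against roughly $20,000 a year in visible savings, the system pays for itself in a few months; the sketch below just makes that explicit.

```python
def payback_months(system_cost: float, annual_savings: float) -> float:
    """Months until cumulative savings cover the up-front system cost."""
    return 12.0 * system_cost / annual_savings


for cost in (5_000, 10_000):
    print(f"${cost:,} system, $20,000/yr savings -> "
          f"payback in {payback_months(cost, 20_000):.0f} months")
# About 3 months at $5,000 and 6 months at $10,000, before counting
# anything saved by catching a bad batch early.
```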

Published Date : Mar 2 2023



COVID-19 Impact on Global IT Spending - March 2020


 

Hello everyone, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we're going to share fresh data from ETR's latest spending survey; in particular, ETR added a drill-down question on the impact of coronavirus. Yesterday I had the pleasure of hosting ETR's director of research, Sagar Kadakia, who took us through the details of that survey, and we're going to bring his comments into this discussion. So today I want to accomplish three things. First, I want to summarize the macro: where are we at this point, on the second day of spring in Massachusetts? Second, I want to assess the impact of COVID-19 on IT spend for 2020. And the third thing I want to do is drill down into the findings from ETR's latest survey. After we do this, I'll summarize and talk about what the outlook looks like.

So where are we today? We've gone from the fear of missing out in the stock market to basically fallout fear. As you well know, the economic impact is not pretty. I've got to say, this is the first time I've ever seen a government-imposed recession, rightly so, to save lives, but I've also never seen such an across-the-board double downward shift in both supply and demand. This creates uncertainty and ambiguity in pricing, which makes forecasting anything really, really difficult. The liquidity shock and the credit risks are of primary concern right now. The price of oil is a huge issue. Why? Because energy companies account for a very sizable portion of the high-yield credit market, over 10%, so as prices fall it's going to be harder for oil companies to repay loans. This creates default risk, so the market's freaked out and functioning very, very poorly. Now, a poorly functioning market signals that we are not at the bottom.

Everybody wants to know where the bottom is. I'm not a stock picker and I'm not a market technician, but I've seen a lot of downturns. I'll share a quick story. When I was at IDC we had an exclusive deal with Goldman Sachs, and two of the Goldman analysts were embedded into our Framingham offices. In 1987, on Black Monday and in the following weeks, I would stand at their real-time terminal. There was no internet back then, kids; nobody had access to real-time trades, but I did. I would watch the market in freefall, I would see it bounce back, and then I would see it freefall again. What I will tell you is this: bottoms are impossible to predict. Everybody says that. Why? Because bottoms are not technical, they're psychological, they're emotional. In 1987, then after the dot-com bust, and again after the financial crisis, each time you saw the S&P rally, sometimes as high as 10%, it would suck people back into the market and then pull back. That's going to happen here; the market's not just going to be fine any day now. If you're looking for some positives, there are some silver linings: the canals in Venice are running clear, which is amazing to see, and nitrous oxide levels over China are way, way down.

Okay, let's shift and take a look at what this all means for IT spending. What are the industries that are being most affected right now? As I show here, there are some obvious sectors like energy, transportation, retail, et cetera, but let's listen to what Sagar from ETR told me yesterday. Pay particular attention to what he says about supply chains. Roll the clip.

>> Sagar Kadakia: Yeah, industrials, materials, manufacturing, retail, consumer, healthcare, pharma: those are the verticals that, from a supply chain perspective, are at elevated levels of broken supply chains. And what's actually interesting is that in this survey we asked not only whether your supply chains are broken today, but whether you anticipate continuing to experience broken supply chains three months from now, and those percentages were up. I think that really tells us that this is not a one- or two-month type of recovery. We're going to see supply chains and demand continuing to be broken, continuing to come down, over the next three to four months. That, I think, is probably one of the biggest takeaways from the drill-down study.

>> Dave Vellante: So you see, the ETR survey really underscores that we are not likely to see a quick snapback; it's not a one- or two-month fix. Now, in my own research, I go out to the field, I talk to people on theCUBE and within our network, and I can add some comments and some color here. What we see is that healthcare right now is so swamped that they're not buying anything; they simply don't have the cycles. Most customers are putting skunkworks projects on hold, narrowing capital spend, and focusing only on mission-critical items. Banks, even though banks are down, have capital and they're still buying. They've got cash, banks are smart, and they're negotiating very hard for big discounts. The other thing is that a lot of customers have no choice but to buy. Many are on ARR, annual recurring revenue, contracts and have compliance edicts, like having to send out monthly statements; if they don't renew, they can't use their software to do that. It's different but somewhat similar with maintenance contracts. So you're seeing that sales teams are clearly bringing down their forecasts, but they're not cutting them in half. Not yet, anyway.

All right, but here's what's somewhat counterintuitive, and you can really only quantify this with data: some companies, believe it or not, are actually spending more. Why? Because they're trying to preserve productivity with their work-from-home solutions, and they need infrastructure to do that, so they're pivoting their budget to work from home. They also have to secure that infrastructure, so cybersecurity is seeing a little bit of momentum. Now let's take a look at the ETR data. This is from more than a thousand CIOs and IT buyers, and it's fresh data right from March. 40% of the survey respondents said they see no current impact on their IT budget. That is surprisingly high. And look at all the green on the right-hand side: most are showing a five to ten percent increase, and more than 20% of the respondents are actually expecting to increase budget in 2020 for things like work-from-home infrastructure. Let's take a listen to Sagar Kadakia, who explains this further. Roll the clip.

>> Sagar Kadakia: Yeah, I think the positive spend, or the no change in spend, is what a lot of the market right now is missing, and I haven't seen a lot of research on it, because no one else has really been able to quantify how budgets are changing. So, as you noted, we're actually seeing people accelerate spend because of COVID-19, and the reason is they're trying to avoid a catastrophe in productivity. They are ramping up all this work-from-home infrastructure: not just collaboration tools, but virtualization infrastructure, increased VPN and networking bandwidth, mobile devices, laptops, security, desktop support. You're a Fortune 500 organization and you have 40, 50, 60 thousand employees working from home all of a sudden; you have to be able to support those employees. As a result you're seeing a large number of organizations accelerating spend, and even the ones that are being hurt by the broken supply chains and the demand coming down, you're seeing some of their spend deceleration being offset by spending a little bit more on what we're calling this work-from-home infrastructure.

>> Dave Vellante: So Sagar went on to explain that consensus expectations for global IT spend were roughly at four percent growth before coronavirus, and the pullback takes us now to flat, or zero percent. But what's not been reported is the offset to the declines, particularly from the work-from-home infrastructure. Obviously this could all change, and it likely will, but this next chart really underscores that uncertainty and the dynamic nature of the risk here. What this chart tracks is the daily retraction in the expected growth rate: earlier this month in the ETR survey it was at about two percent, and by March seventeenth it's down to flat. So as we heard from Sagar, the ETR thesis is currently 0% IT spend growth for 2020 because of some of the offset. If the news continues to worsen, the outlook is going to follow.

All right, I want to wrap up by summarizing and talking about what's next and what you can expect. The current call, as I said, is for flat IT spend in 2020. It would be worse if not for the uptick in work from home and the corollary security infrastructure. It's not just collaboration and video tools; it's virtualization solutions, VPNs, network upgrades, mobile devices, laptops, and the software to secure all this stuff and make it work. Despite the work-from-home offset, we fully expect this picture to worsen over the next three months. You've got to watch the duration of the remote-work-at-home mandates, the travel bans, the no-meeting policies. There's little doubt that productivity is going to be hurt. As we discussed yesterday with Sagar, you can't just flick a switch and scale remote-worker productivity; that's a real challenge. Having said that, the expectation from CIOs is that this spending decline is going to be temporary. What's unclear is the shape of the recovery: is it going to be V-shaped, or a slow slog? You can see the distant rim on the other side of the canyon; it's there, we just don't know how far away it is, and we don't know how deep the canyon really is.

Now, there will be changes, in our opinion, that are going to be permanent. As we said in the last Breaking Analysis, over the next several months organizations are going to learn new things, and that is going to shape their thinking in the future. I personally expect accelerated digital transformations and a sustained viability of the work-from-home options. You're going to see new capabilities for distance learning with all the college shutdowns, and you're also going to see new risk-mitigation paradigms; the list goes on and on in terms of what we're going to see here. As I said earlier, there seem to be some environmental benefits if you're looking for positives, and I think this next generation is much more in tune with that. You have my word, my promise, and our team's promise that theCUBE and ETR are going to be here to keep you up to date. ETR survey data keeps rolling in; you can check that out at etr.plus. They are vigilant on this issue, as are we, from our remote studios. Look, this is the new normal. Our skeleton crews are in studio and we're keeping the content flowing, as many folks on our team are working from home and on the grid. Currently our Palo Alto studio is fully operational four days a week, each week, and we're capturing remote guests on camera, and Boston is open as well. So get in touch if you need anything; we are here to help and to serve you.

Okay, this is Dave Vellante for Wikibon CUBE Insights, powered by ETR. Thanks for watching this Breaking Analysis. Remember, these episodes are available as podcasts wherever you listen. Please connect with me, email me at david.vellante@siliconangle.com, or comment on my LinkedIn posts; I always appreciate that from the community. Thanks for watching everybody, wishing good health and safety for you and your families. We'll see you next time. [Music]
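To make the survey math concrete, the sketch below shows one way a net expected budget change can be computed from a distribution of responses across spending-change buckets. The bucket midpoints and shares are invented for illustration and are not ETR's actual methodology or data; the point is only that a large "no change" group plus a meaningful "increase" group can offset the cuts and pull a roughly four percent consensus down toward flat.

```python
# Hypothetical distribution of survey responses by expected 2020 budget change.
# Keys are bucket midpoints (as fractions); values are response shares.
responses = {
    -0.10: 0.10,  # cutting ~10%
    -0.05: 0.20,  # cutting ~5%
     0.00: 0.40,  # no change (the "surprisingly high" group)
     0.05: 0.20,  # adding ~5%, e.g. work-from-home infrastructure
     0.10: 0.10,  # adding ~10%
}

# Shares should cover the whole respondent base.
assert abs(sum(responses.values()) - 1.0) < 1e-9

net_change = sum(mid * share for mid, share in responses.items())
print(f"Net expected budget change: {net_change:+.1%}")
# With these made-up shares the increases exactly offset the cuts, which is
# the mechanism by which a roughly 4% consensus can slide toward flat.
```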

Published Date : Mar 20 2020



Laura Stevens, American Heart Association | AWS re:Invent


 

>> Narrator: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone, this is theCUBE's exclusive live coverage here in Las Vegas for AWS Amazon web services re:Invent 2017. I'm John Furrier with Keith Townsend. Our next guest is Laura Stevens, data scientist at the American Heart Association, an AWS customer, welcome to theCUBE. >> Hi, it's nice to be here. >> So, the new architecture, we're seeing all this great stuff, but one of the things that they mention is data is the killer app, that's my word, Verna didn't say that, but essentially saying that. You guys are doing some good work with AWS and precision medicine, what's the story? How does this all work, what are you working with them on? >> Yeah, so the American Heart Association was founded in 1924, and it is the oldest and largest voluntary organization dedicated to curing heart disease and stroke, and I think in the past few years what the American Heart Association has realized is that the potential of technology and data can really help us create innovative ways and really launch precision medicine in a fashion that hasn't been capable to do before. >> What are you guys doing with AWS? What's that, what's the solution? >> Yeah so the HA has strategically partnered with Amazon Web Services to basically use technology as a way to power precision medicine, and so when I say precision medicine, I mean identifying individual treatments, based on one's genetics, their environmental factors, their life factors, that then results in preventative and treatment that's catered to you as an individual rather than kind of a one size fits all approach that is currently happening. >> So more tailored? >> Yeah, specifically tailored to you as an individual. >> What do I do, get a genome sequence? I walk in, they throw a high force computing, sequence my genomes, maybe edit some genes while they're at it, I mean what's going on. There's some cutting edge conversations out there we see in some of the academic areas, course per that was me just throwing that in for fun, but data has to be there. What kind of data do you guys look at? Is it personal data, is it like how big is the data? Give us a sense of some of the data science work that you're doing? >> Yeah so the American Heart Association has launched the Institute for Precision Cardiovascular Medicine, and as a result, with Amazon, they created the precision medicine platform, which is a data marketplace that houses and provides analytic tools that enable high performance computing and data sharing for all sorts of different types of data, whether it be personal data, clinical trial data, pharmaceutical data, other data that's collected in different industries, hospital data, so a variety of data. >> So Laura, there's a lot of think fud out there around the ability to store data in a cloud, but there's also some valid concerns. A lot of individual researchers, I would imagine, don't have the skillset to properly protect data. What is the Heart Association doing with the framework to help your customers protect data? 
>> Yeah so the I guess security of data, the security of the individual, and the privacy of the individual is at the heart of the AHA, and it's their number one concern, and making anything that they provide that a number one priority, and the way that we do that in partnering with AWS is with this cloud environment we've been able to create even if you have data that you'd like to use sort of a walled garden behind your data so that it's not accessible to people who don't have access to the data, and it's also HIPAA compliant, it meets the standards that the utmost secure standards of health care today. >> So I want to make sure we're clear on this, the Heart Association doesn't collect data themselves. Are you guys creating a platform for your members to leverage this technology? >> So there's, I would so maybe both actually. The American Heart Association does have data that it is associated with, with its volunteers and the hospitals that it's associated with, and then on top of that, we've actually just launched My Research Legacy, which allows individuals of the community to, who want to share their data, whether you're healthy or just sick, either one, they want to share their data and help in aiding to cure heart disease and stroke, and so they can share their own data, and then on top of that, anybody, we are committed to strategically partnering with anybody who's involved and wants to share their data and make their data accessible. >> So I can share my data? >> Yes, you can share your data. >> Wow, so what type of tools do you guys use against that data set and what are some of the outcomes? >> Yeah so I think the foundation is the cloud, and that's where the data is stored and housed, and then from there, we have a variety of different tools that enable researchers to kind of custom build data sets that they want to answer the specific research questions they have, and so some of those tools, they range from common tools that are already in use today on your personal computer, such as Python or R Bioconductor, and then they have more high performance computing tools, such as Hal or any kind of s3 environment, or Amazon services, and then on top of that I think what is so awesome about the platform is that it's very dynamic, so a tool that's needed to use for high performance computing or a tool that's needed even just as a on a smaller data set, that can easily be installed and may be available to researchers, and so that they can use it for their research. >> So kind of data as a service. I would love to know about the community itself. How are you guys sharing the results of kind of oh this process worked great for this type of analysis amongst your members? >> Yeah so I think that there's kind of two different targets in that sense that you can think of is that there's the researchers and the researchers that come to the platform and then there's actually the patient itself, and ultimately the HA's goal is to make, to use the data and use the researcher for patient centered care, so with the researchers specifically, we have a variety of tutorials available so that researchers can one, learn how to perform high performance computing analysis, see what other people have done. 
We have a forum where researchers can log on and enable, I guess access other researchers and talk to them about different analysis, and then additionally we have My Research Legacy, which is patient centered, so it's this is what's been found and this is what we can give back to you as the patient about your specific individualized treatment. >> What do you do on a daily basis? Take us through your job, are you writing code, are you slinging API's around? What are some of the things that you're doing? >> I think I might say all of the above. I think right now my main effort is focused on one, conducting research using the platform, so I do use the platform to answer my own research questions, and those we have presented at different conferences, for example the American Heart Association, we had a talk here about the precision medicine platform, and then two, I'm focused on strategically making the precision medicine platform better by getting more data, adding data to the platform, improving the way that data is harmonized in the platform, and improving the amount of data that we have, and the diversity, and the variety. >> Alright, we'll help you with that, so let's help you get some people recruited, so what do they got to do to volunteer, volunteer their data, because I think this is one of those things where you know people do want to help. So, how do they, how you onboard? You use the website, is it easy, one click? Do they have to wear an iWatch, I mean what I mean? >> Yeah. >> What's the deal? What do I got to do? >> So I think I would encourage researchers and scientists and anybody who is data centric to go to precision.heart.org, and they can just sign up for an account, they can contact us through that, there's plenty of different ways to get in touch with us and plenty of ways to help. >> Precision.heart.org. >> Yup, precision.heart.org. >> Stu: Register now. >> Register now click, >> Powered by AWS. >> Yup. >> Alright so I gotta ask you as an AWS customer, okay take your customer hat off, put your citizen's hat on, what is Amazon mean to you, I mean is it, how do you describe it to people who don't use it? >> Okay yeah, so I think... the HA's ultimate mission right, is to provide individualized treatment and cures for cardiovascular disease and stroke. Amazon is a way to enable that and make that actually happen so that we can mine extremely large data sets, identify those individualized patterns. It allows us to store data in a fashion where we can provide a market place where there's extremely large amounts of data, extremely diverse amounts of data, and data that can be processed effectively, so that it can be directly used for research. >> What's your favorite tool or product or service within Amazon? >> That's a good question. I think, I mean the cloud and s3 buckets are definitely in a sense they're my favorites because there's so much that can be stored right there, Athena I think is also pretty awesome, and then the EMR clusters with Spark. >> The list is too long. >> My jam. >> It is. (laughs) >> So, one of the interesting things that I love is a lot of my friends are in non-profits, fundraising is a big, big challenge, grants are again, a big challenge, have you guys seen any new opportunities as a result of the results of the research coming out of HA and AWS in the cloud? 
>> Yeah so I think one of the coolest things about the HA is that they have this Institute for Precision Cardiovascular Medicine, and the strategic partnership between the HA and AWS, even just this year we've launched 13 new grants, where the HA kind of backs the research behind, and the AWS provides credit so that people can come to the cloud and use the cloud and use the tools available on a grant funded basis. >> So tell me a little bit more about that program. Anybody specifically that you, kind of like saying, seeing that's used these credits from AWS to do some cool research? >> Yeah definitely, so I think specifically we have one grantee right now that is really focused on identifying outcomes across multiple clinical trials, so currently clinical trials take 20 years, and there's a large variety of them. I don't know if any of you are familiar with the Framingham heart study, the Dallas heart study, the Jackson heart study, and trying to determine how those trials compare, and what outcomes we can generate, and research insights we can generate across multiple data sets is something that's been challenging due to the ability to not being able to necessarily access that data, all of those different data sets together, and then two, trying to find ways to actually compare them, and so with the precision medicine platform, we have a grantee at the University of Colorado-Denver, who has been able to find those synchronicities across data sets and has actually created kind of a framework that then can be implemented in the precision medicine platform. >> Well I just registered, it takes really two seconds to register, that's cool. Thanks so much for pointing out precision.heart.org. Final question, you said EMR's your jam. (laughing) >> Why, why is it? Why do you like it so much, is it fast, is it easy to use? >> I think the speed is one of the things. When it comes to using genetic data and multiple biological levels of data, whether it be your genetics, your lifestyle, your environment factors, there's... it just ends up being extremely large amounts of data, and to be able to implement things like server-less AI, and artificial intelligence, and machine learning on that data set is time consuming, and having the power of an EMR cluster that is scalable makes that so much faster so that we can then answer our research questions faster and identify those insights and get them to out in the world. >> Gotta love the new services they're launching, too. It just builds on top of it. Doesn't it? >> Yes. >> Yeah, soon everyone's gonna be jamming on AWS in our opinion. Thanks so much for coming on, appreciate the stories and commentary. >> Yeah. >> Precision.heart.org, you want to volunteer if you're a researcher or a user, want to share your data, they've got a lot of data science mojo going on over there, so check it out. It's theCUBE bringing a lot of data here, tons of data from the show, three days of wall to wall coverage, we'll be back with more live coverage after this short break. (upbeat music)
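The workflow Laura describes, a harmonized cohort dataset sitting in S3 and queried with Spark on a scalable EMR cluster, comes down to a fairly small amount of code. The snippet below is only a sketch of that shape; the bucket path, column names, and the idea of grouping a cardiovascular marker by a genetic variant are invented for illustration and are not the precision medicine platform's actual schema or API.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cohort-sketch").getOrCreate()

# Hypothetical harmonized cohort table: one row per participant per visit.
cohort = spark.read.parquet("s3://example-precision-bucket/cohort/")

# Deliberately minimal example of the "ask, see, refine" loop:
# compare cardiovascular markers across a (hypothetical) genotype column.
summary = (
    cohort
    .filter(F.col("visit") == 1)
    .groupBy("apoe_variant")
    .agg(
        F.count("*").alias("participants"),
        F.avg("ldl_cholesterol").alias("mean_ldl"),
        F.avg("systolic_bp").alias("mean_sbp"),
    )
    .orderBy("apoe_variant")
)

summary.show()
spark.stop()
```

Because the cluster scales out, the same few lines run whether the input is one cohort or the kind of multi-study comparison (Framingham, Dallas, Jackson) the grantees are attempting.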

Published Date : Nov 30 2017



Sharad Singhal, The Machine & Matthias Becker, University of Bonn | HPE Discover Madrid 2017


 

>> Announcer: Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is theCUBE, the leader in live tech coverage and my name is Dave Vellante, and I'm here with Peter Burris, this is day two of HPE Hewlett Packard Enterprise Discover in Madrid, this is their European version of a show that we also cover in Las Vegas, kind of six month cadence of innovation and organizational evolution of HPE that we've been tracking now for several years. Sharad Singal is here, he covers software architecture for the machine at Hewlett Packard Enterprise, and Matthias Becker, who's a postdoctoral researcher at the University of Bonn. Gentlemen, thanks so much for coming in theCUBE. >> Thank you. >> No problem. >> You know, we talk a lot on theCUBE about how technology helps people make money or save money, but now we're talking about, you know, something just more important, right? We're talking about lives and the human condition and >> Peter: Hard problems to solve. >> Specifically, yeah, hard problems like Alzheimer's. So Sharad, why don't we start with you, maybe talk a little bit about what this initiative is all about, what the partnership is all about, what you guys are doing. >> So we started on a project called the Machine Project about three, three and a half years ago and frankly at that time, the response we got from a lot of my colleagues in the IT industry was "You guys are crazy", (Dave laughs) right. We said we are looking at an enormous amount of data coming at us, we are looking at real time requirements on larger and larger processing coming up in front of us, and there is no way that the current architectures of the computing environments we create today are going to keep up with this huge flood of data, and we have to rethink how we do computing, and the real question for those of us who are in research in Hewlett Packard Labs, was if we were to design a computer today, knowing what we do today, as opposed to what we knew 50 years ago, how would we design the computer? And this computer should not be something which solves problems for the past, this should be a computer which deals with problems in the future. So we are looking for something which would take us for the next 50 years, in terms of computing architectures and what we will do there. In the last three years we have gone from ideas and paper study, paper designs, and things which were made out of plastic, to a real working system. We have around Las Vegas time, we'd basically announced that we had the entire system working with actual applications running on it, 160 terabytes of memory all addressable from any processing core in 40 computing nodes around it. And the reason is, although we call it memory-driven computing, it's really thinking in terms of data-driven computing. The reason is that the data is now at the center of this computing architecture, as opposed to the processor, and any processor can return to any part of the data directly as if it was doing, addressing in local memory. This provides us with a degree of flexibility and freedom in compute that we never had before, and as a software person, I work in software, as a software person, when we started looking at this architecture, our answer was, well, we didn't know we could do this. 
Now if, given now that I can do this and I assume that I can do this, all of us in the programmers started thinking differently, writing code differently, and we suddenly had essentially a toy to play with, if you will, as programmers, where we said, you know, this algorithm I had written off decades ago because it didn't work, but now I have enough memory that if I were to think about this algorithm today, I would do it differently. And all of a sudden, a new set of algorithms, a new set of programming possibilities opened up. We worked with a number of applications, ranging from just Spark on this kind of an environment, to how do you do large scale simulations, Monte Carlo simulations. And people talk about improvements in performance from something in the order of, oh I can get you a 30% improvement. We are saying in the example applications we saw anywhere from five, 10, 15 times better to something which where we are looking at financial analysis, risk management problems, which we can do 10,000 times faster. >> So many orders of magnitude. >> Many, many orders >> When you don't have to wait for the horrible storage stack. (laughs) >> That's right, right. And these kinds of results gave us the hope that as we look forward, all of us in these new computing architectures that we are thinking through right now, will take us through this data mountain, data tsunami that we are all facing, in terms of bringing all of the data back and essentially doing real-time work on those. >> Matthias, maybe you could describe the work that you're doing at the University of Bonn, specifically as it relates to Alzheimer's and how this technology gives you possible hope to solve some problems. >> So at the University of Bonn, we work very closely with the German Center for Neurodegenerative Diseases, and in their mission they are facing all diseases like Alzheimer's, Parkinson's, Multiple Sclerosis, and so on. And in particular Alzheimer's is a really serious disease and for many diseases like cancer, for example, the mortality rates improve, but for Alzheimer's, there's no improvement in sight. So there's a large population that is affected by it. There is really not much we currently can do, so the DZNE is focusing on their research efforts together with the German government in this direction, and one thing about Alzheimer's is that if you show the first symptoms, the disease has already been present for at least a decade. So if you really want to identify sources or biomarkers that will point you in this direction, once you see the first symptoms, it's already too late. So at the DZNE they have started on a cohort study. In the area around Bonn, they are now collecting the data from 30,000 volunteers. They are planning to follow them for 30 years, and in this process we generate a lot of data, so of course we do the usual surveys to learn a bit about them, we learn about their environments. 
But we also do much more detailed analysis, so we take blood samples and we analyze the complete genome, and also we acquire imaging data from the brain, so we do an MRI at an extremely high resolution with some very advanced machines we have, and all this data is accumulated because we do not only have to do this once, but we try to do that repeatedly for every one of the participants in the study, so that we can later analyze the time series; when in 10 years someone develops Alzheimer's, we can go back through the data and see, maybe there's something interesting in there, maybe there was one biomarker that we are looking for, so that we can predict the disease better in advance. And with this pile of data that we are collecting, basically we need something new to analyze this data and to deal with this, and when we heard about the Machine, we thought immediately this is a system that we would need.
>> Let me see if I can put this in a little bit of context. So Dave lives in Massachusetts, I used to live there, in Framingham, Massachusetts,
>> Dave: I was actually born in Framingham.
>> You were born in Framingham. And one of the more famous studies is the Framingham Heart Study, which tracked people over many years and discovered things about heart disease and the relationship between smoking and cancer, and other really interesting problems. But they used a paper-based study with an interview base, so for each of those kinds of people, they might have collected, you know, maybe a megabyte, maybe a megabyte and a half of data. You just described a couple of gigabytes of data per person, 30,000 people, multiple years. So we're talking about being able to find patterns in data about individuals that would number in the petabytes over a period of time. Very rich detail that's possible, but if you don't have something that can help you do it, you've just collected a bunch of data that's just sitting there. So is that basically what you're trying to do with the Machine, the ability to capture all this data and then do something with it, so you can generate those important inferences?
>> Exactly, so with all these large amounts of data we do not only compare the data sets for a single person, but once we find something interesting, we have also to compare the whole population that we have captured with each other. So there's really a lot of things we have to parse and compare.
>> This brings together the idea that it's not just the volume of data. I also have to do analytics across all of that data together, right, so every time a scientist, one of the people who is doing biology studies or informatics studies, asks a question, and they say, I have a hypothesis that this might be a reason for this particular evolution of the disease or occurrence of the disease, they then want to go through all of that data, and analyze it as they are asking the question. Now if the amount of compute it takes to actually answer their questions takes me three days, I have lost my train of thought. But if I can get that answer in real time, then I get into this flow where I'm asking a question, seeing the answer, making a different hypothesis, seeing a different answer, and this is what my colleagues here were looking for.
>> But if I think about, again, going back to the Framingham Heart Study, you know, I might do a query on a couple of related questions, and use a small amount of data.
The technology to do that has been around, but when we start looking for patterns across brain scans with time series, we're not talking about a small problem, we're talking about an enormous amount of data that can be looked at in a lot of different ways. I got one other question for you related to this, because I gotta presume that there's a quid pro quo for getting people into the study, is that, you know, 30,000 people, is that you'll be able to help them and provide prescriptive advice about how to improve their health as you discover more about what's going on, have I got that right?
>> So, we're trying to do that, but also there are limits to this, of course.
>> Of course.
>> For us it's basically collecting the data, and people are really willing to donate everything they can from their health data to allow these large studies.
>> To help future generations.
>> So that's not necessarily quid pro quo.
>> Okay, there isn't, okay. But still, the knowledge is enough for them.
>> Yeah, their incentive is they're gonna help people who have this disease down the road.
>> I mean if it is not me, if it helps society in general, people are willing to do a lot.
>> Yeah of course.
>> Oh sure.
>> Now the Machine is not a product yet that's shipping, right, so how do you get access to it, or is this sort of futures, or...
>> When we started talking to one another about this, we actually did not have the prototype with us. But remember that when we started down this journey for the Machine three years ago, we knew back then that we would have hardware somewhere in the future, but as part of my responsibility, I had to deal with the fact that software has to be ready for this hardware. It does me no good to build hardware when there is no software to run on it. So we have actually been working on the software stack, how to think about applications on that software stack, using emulation and simulation environments, where we have some simulators, essentially an instruction level simulator for what the Machine does, or what that prototype would have done, and we were running code on top of those simulators. We also had performance simulators, where we'd say, if we write the application this way, this is how much we think we would gain in terms of performance, and all of those applications, all of that code we were writing, was actually run on our large memory machines, Superdome X to be precise. So by the time we started talking to them, we had these emulation environments available, we had experience using these emulation environments on our Superdome X platform. So when they came to us and started working with us, we took their software that they brought to us, and started working within those emulation environments to see how fast we could make those problems run, even within those emulation environments. So that's how we started down this track, and most of the results we have shown in the study are all measured results that we are quoting inside this forum on the Superdome X platform. So even in that emulated environment, which is emulating the Machine now, of course in the emulation on Superdome X, for example, I can only hold 24 terabytes of data in memory. I say only 24 terabytes...
>> Only!
>> ...because I'm looking at much larger systems, but an enormously large number of workloads fit very comfortably inside the 24 terabytes.
And for those particular workloads, the programming techniques we are developing work at that scale, right, they won't scale beyond the 24 terabytes, but they'll certainly work at that scale. So between us we then started looking for problems, and I'll let Matthias comment on the problems that they brought to us, and then we can talk about how we actually solved those problems.
>> So we work a lot with genomics data, and usually what we do is we have a pipeline, so we connect multiple tools, and we thought, okay, this architecture sounds really interesting to us, but if we want to get started with this, we should pose them a challenge. So if they can convince us... We went through the literature, we took a tool that was advertised as the new optimal solution. So prior work was taking up to six days for processing, they were able to cut it to 22 minutes, and we thought, okay, this is a perfect challenge for our collaboration, and we went ahead and we took this tool, we put it on the Superdome X that was already running, and it took five minutes instead of just 22, and then we started modifying the code, and in the end we were able to shrink the time down to just 30 seconds, so that's two magnitudes faster.
>> We took something which was... They were able to run it in 22 minutes, and that had already been optimized by people in the field to say "I want this answer fast", and then when we moved it to our Superdome X platform, the platform is extremely capable. Hardware-wise it compares really well to other platforms which are out there. That time came down to five minutes, but that was just the beginning. And then as we modified the software based on the emulation results we were seeing underneath, we brought that time down to 13 seconds, which is a hundred times faster. We started this work with them in December of last year. It takes time to set up all of this environment, so the serious coding started in around March. By June we had a 9X improvement, which is already about a factor of 10, and since June up to now, we have gotten another factor of 10 on that application. So I'm now at 100X faster than what the application was able to do before.
>> Dave: Two orders of magnitude in a year?
>> Sharad: In a year.
>> Okay, we're out of time, but where do you see this going? What is the ultimate outcome that you're hoping for?
>> For us, we're really aiming to analyze our data in real time. Oftentimes when we have biological questions that we address, we analyze our data set, and then in a discussion a new question comes up, and we have to say, "Sorry, we have to process the data, come back in a week", and our idea is to be able to generate these answers instantaneously from our data.
>> And those answers will lead to what? Just better care for individuals with Alzheimer's, or potentially, as you said, making Alzheimer's a memory.
>> So the idea is to identify Alzheimer's long before the first symptoms are shown, because then you can start an effective treatment and you can have the biggest impact. Once the first symptoms are present, it's not getting any better.
>> Well thank you for your great work, gentlemen, and best of luck on behalf of society,
>> Thank you very much.
>> really appreciate you coming on theCUBE and sharing your story. You're welcome. All right, keep it right there, buddy. Peter and I will be back with our next guest right after this short break. This is theCUBE, you're watching live from Madrid, HPE Discover 2017. We'll be right back.
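(An editorial aside, not part of the conversation: a minimal sketch of the point Sharad makes above, that once a whole dataset is directly addressable in memory, you reach for different algorithms. The snippet below is plain Python and NumPy, not The Machine's programming model or any HPE code, and the portfolio figures are invented for illustration; it shows the kind of random-access Monte Carlo resampling that is trivial against an in-memory history and punishing when every scenario means another trip through the storage stack.)

```python
# Illustrative sketch only -- generic NumPy, not The Machine's APIs.
# Keep the full return history in memory, then estimate portfolio risk by
# resampling arbitrary rows at random; that random access is exactly what a
# memory-driven design makes cheap.
import numpy as np

rng = np.random.default_rng(42)
history = rng.normal(0.0005, 0.02, size=(2_000_000, 8))  # made-up daily returns for 8 assets
weights = np.full(8, 1 / 8)                              # equal-weight portfolio (assumption)

def monte_carlo_var(history, weights, n_scenarios=100_000, alpha=0.99):
    """Estimate one-day Value-at-Risk by resampling whole days from the in-memory history."""
    idx = rng.integers(0, history.shape[0], size=n_scenarios)  # random rows of the full dataset
    losses = -(history[idx] @ weights)                         # simulated portfolio losses
    return np.quantile(losses, alpha)

print(f"99% one-day VaR: {monte_carlo_var(history, weights):.4%}")
```

For the genomics numbers quoted above, the arithmetic works out the way Dave summarizes it: 22 minutes is 1,320 seconds, so finishing in 13 seconds is roughly a 100x gain over the already-optimized tool, about 4x of it from simply moving the code to Superdome X and the rest from rewriting it for the memory-driven model.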

Published Date : Nov 29 2017


Richard Cramer, Informatica - Informatica World 2017 - #INFA17 - #theCUBE


 

>> Announcer: Live from San Francisco, it's The Cube. Covering Informatica World 2017, brought to you by Informatica.
>> Hello everyone, welcome back to The Cube coverage, exclusive coverage of Informatica World 2017, we are live in San Francisco breaking down all the action of Informatica's big conference, Informatica World 2017. I'm John Furrier with Silicon Angle, The Cube, my cohost Peter Burris, head of research and also general manager of wikibon.com, check it out, great research there. Next guest is Richard Cramer, Chief Healthcare Strategist for Informatica, welcome to The Cube.
>> Thank you John.
>> Great to see you, we were just talking before we went live about how you love data, you love customers, and healthcare is booming, certainly healthcare is one of those use cases, it's a vertical that everyone can relate to, one. Two, it's the most dynamic with data right now, and internet of things, connected sensors, you know, what a room looks like, a zillion things connected, now you got wearables, still you got the data problem, it's never going away, certainly it exists there, but now it's changing. So break it down for us, what are the challenges and drivers right now in the healthcare industry relative to getting great software and great solutions to help patients?
>> Well you're 100% right, one of the things that's exciting about healthcare is it matters to all of us. Every one of us is a patient, every one of us has a horror story of interacting with a healthcare system, and so when we look at the opportunity for data, healthcare has historically not used data very well. We had the HITECH Act in 2009 that got electronic healthcare records in place, we're coming out of the backside of that, so arguably for the first time we finally have the deep rich clinical data that we've needed to do analytics with for the first time. We now have the technology that's coming around with what we call data 3.0 and big data processing power, and then as you mentioned internet of things and all of the rich sources of new data that we can do discovery on and learn new things about how to treat patients better, and then really the final component is that the financial incentives are finally aligned. We used to in healthcare pay for piecework. The more you did, the more you got paid. And shockingly we were inefficient, we did too much. (laughs) And now we're changing to paying for value. And we can pay for value because we can finally measure quality and outcomes, because we have the data. And so that's really the analytics opportunity that's so exciting in healthcare right now.
>> What's interesting is that in this digital transformation, and business transformation, and all the conversations we've had over the years on The Cube, and look at all the top shows in the enterprise and emerging tech, you're seeing one pattern. We had the Chicago Cubs on yesterday talking baseball, but whether it's sports, business, or healthcare or whatever vertical, there's kind of three things, and we'll take baseball, right? Fan experience, how to run the players and the team, and how to run the organization. Healthcare is the same thing, how to run an organization, how to take care of the players, the doctors and the practitioners, and then also the end user, the fan experience, the patient experience.
So now you have, it used to be hey, are we running our organization, and the practitioners were part of that, maybe subordinate to it, maybe they interacted with it, but now like a baseball team you have how do I run my organization, how do I make the players, the doctors and practitioners, successful, and now the patients, the end users, are part of it as well. This is opening up massive innovation opportunities. What's your reaction to that and how should people think about the data in that context?
>> So I think the first piece of what you said is very true, which is really for the first time, healthcare organizations are behaving like real businesses. When you start to get paid for results, you now care about a lot of things that you didn't care about before. Patient experience matters 'cause consumers have choice, those types of things, so all of those digital transformation examples from other industries are now relevant and front and center for healthcare organizations. Which is radically different, and so that opportunity to use data and use it for a specific purpose is very valuable. I think the other thing that's important with digital transformation is historically healthcare is very local. It's regional, you go to the hospital that's closest to you. And digital disruption is all about removing geographic barriers. The goal in healthcare today is we're reducing cost, you want to push healthcare out of that high cost hospital into the most cost effective, highest quality organization you can. That may be a retail clinic in a shopping mall. And how do you do that? You do that with digital technology. Telehealth in the home. All of those types of things are traditional digital transformation types of capabilities that healthcare has not traditionally cared about.
>> So optimizing a network effect if you will, we always hear in network, out of network as a term (laughs)
>> Yep.
>> My wife and I go oh it's in network, oh good, so out of network always kind of means spendy, but now you're talking about a reconfiguration of making things much more efficient as piece parts.
>> Well exactly right, and the idea of the network, the network used to be drive everybody to the hospital, 'cause that's where we made our money. Well when you're getting paid for results, the hospital's a cost center, not a revenue center. You actually want to keep people out of the hospital. And as a consumer and as somebody who's paying for healthcare, that's actually a good thing. If I can avoid going to the hospital and get healthcare in a more convenient setting that I want, at home or someplace closer to home, and not be admitted to a hospital, hospitals are dangerous places.
>> Peter, you've been doing, I've seen you and I comment on Facebook all the time, certainly the healthcare topic sparks the conversation, but big data can solve a lot of this stuff, I know you're doing a lot of thinking around this.
>> Well, so, fascinating conversation, I'd say a couple things really quickly and then get your take on it. First off, a lot of the evidence based management techniques we heard about yesterday originated in healthcare. Because of the
>> You mean like data management and all that stuff?
>> Peer review, how we handle clinical trials, the amount of data that's out there, so a lot of the principles about how data could be used in a management framework began in healthcare, and they have kind of diffused through the marketplace, but the data hasn't been there. Now there's some very powerfully aligned interests.
Hospitals like their data, manufacturers of products like their data, doctors like their data, consumers don't know what to do with their data. They don't know what the value of the data is. So if we take a look at those interests, it's going to be hard, and there's a lot of standards, there's a lot of conventions, each of those groups has their own. So now the data's available, but the integration is going to be a major challenge. People are using HIPAA as an excuse not to do it, manufacturers and other folks are using other kinds of excuses not to facilitate the data, because everybody wants control of the final money. So we've heard a lot at the conference about how, liberate the data, free it up, make it available to do more work, but the second step is integration. You have got the integration problem of all integration problems in data.
>> Yes.
>> Talk about how some of the healthcare leaders are starting to think about how they're going to break down some of these barriers and begin the process of integrating some of their data so they can in fact enact different types of behaviors.
>> Yeah, and great context for what's happening in healthcare with data. So if you think of five, six, seven years ago at Informatica, my role was to go and look at what other industries had done for traditional enterprise data warehousing and bring that knowledge back into healthcare and say, healthcare, you're ten years behind the rest of industry (laughs), here's how you should think about your data analytics. Well that's completely different now. The data challenge, as you've outlined it, is that we've always had data complexity, we now have internet of things data like nobody's business, and we also have this obligation to use the data far more effectively than we ever have before. Well one of the key parts of this is that the idea of centralizing and controlling data as a path to value is no longer viable. We can argue whether it was ever successful, but it really is not even an option anymore when you look at the proliferation of data sources, the proliferation of data types, the complexity; we simply can't govern data to perfection before we get to use it (laughs), which is traditionally the healthcare approach. What we're really looking at now is this whole idea of big data analytics applied to all data, and being able to do discovery that says we can make good decisions with data that may not be perfect, and this is the big data idea, put it into a data lake, do some self service discovery, some self service data preparation, reduce the distance between the people who know what the data means and being able to get hands on and work with it, so that you can iterate and you can discover. You cannot do that in an old fashioned EDW context where we have to extract, transform, load, govern to perfection all the data before anybody ever gets to use it.
>> John: That's why I'm excited about data in motion.
>> Well even, yeah, data, we'll get to that in a second because that's important, but even before we get there, John, I mean again, think about how powerful some of these industries are. Drug companies keep drug prices high in the U.S. because they have visibility into the data, the nature of the treatments, et cetera. One of the most interesting things, this is one I want to test with you.
Is that doctors, where a lot of this evidence based management has started because of peer review, because of their science orientation, even though they get grooved into their own treatments, generally speaking our interest is in exploring new pathways to health and wellness. So is, do you have a very powerful user group that will adopt this ability to integrate data very quickly because they can get greater visibility into new tactics, new techniques, new healthcare regimes as well as new information about patients? Are doctors going to be crucial to this process in your opinion? >> Doctors are going to be crucial to the discussion, we had a healthcare breakfast with a speaker from Deloitte the other day who talked about using data with clinicians to have a data discussion. Not use data to tell them you're wrong or whatnot but actually to engage them in the discovery process of here's what the data shows about your practice. And you talk about the idea of data control, that's absolutely one of the biggest barriers. The technology does not solve data control. >> Right. >> In the old days, everybody admits we have silo data, we have HIPAA, it was so hard to break down those barriers and actually share data that nobody really addressed the fact that people didn't want to. Because they couldn't. Well now with the technology that's available it's >> What's possible, the art of possible. >> Yeah, now it's possible to actually get data from everywhere and do things with it quickly. We run into the fact that people have to explicitly say I don't want to share. >> But here's where that data movement issue becomes so important John and I think that this is a play for Informatica. Because metadata is going to be crucial to this process. Being, giving people who do have some understanding of data, clinicians, physicians, because of their background, because of the way that medicine is supposed to be run at that level, giving them visibility into the data that's available, that could inform their practices and their decisions is really crucial. >> Absolutely. One of, a good friend who's a clinician has been asking for years, he says if all you did was give me access to data about my patients so I could explore my own clinical practice, says I'm guaranteed I take care of diabetics the way I learned in medical school 25 years ago. There has been a lot of innovation in that and just having the perspective on my own practice patterns from my own data would change my behavior. And we, typically I haven't been able to do that. We can now. >> So I've got to ask you, so let's get down and dirty on Informatica, 'cause first of all I think instrumentation of everything now is a reality, I think people now are warming up to certainly in levels, super hot to like I realize it's a transformation area. What are you guys saying to customers? Because they're kind of drowning in the data, one. Two, they are maybe held back, 'cause of HIPAA and other things, now it's time to act, so the art of the possible things are now possible, damn I got to get a plan, so they're hustling around to put a plan together, architecture, plan, what do you guys pitch to customers? What is the value proposition that you go in, and take us through an example, a use case of a day in the life of your role with customers. >> So I have the best job in Informatica. I get to go out and meet with senior customer executive teams and talk about data, how they're going to use data, and how we can help them do it. 
So it's the best job in the company. But if you look at the typical pitch, we start out, first we get them to agree with the principle: centralizing control is dead, and being able to manage data as an enterprise asset in a decentralized fashion with customer self service is the future reality. And everybody universally says yep, we get it, we agree.
>> John: Next. (laughs) Check.
>> But then we talk about what does that actually mean? And it's amazing how at every step in my presentation, the 20 questions always are the same, it comes down to, well, how do we control that? How do we control that?
>> Peter: How do we manage it?
>> So you start with, you think of this idea that says hey, decentralized data, customer self service, you got to have a data catalog. Well, Enterprise Information Catalog is a perfect solution. If you don't know where your data assets are and who's using them, you cannot manage data as an asset.
>> And they're comfortable with that because that's the old mindset of the warehouse, like that big fenced in organization, but now they say okay, I can free it up
>> Yes.
>> And manage it with a catalog and get the control I need.
>> That's right, and so the first piece is the catalog. Well then, the minute you say to people the catalog is the way to get value from your data, there's somebody in every room that says ooh, that value represents risk. You're letting people see data and make data easy to find, that can't possibly be good, it's risky. Well then we have Secure@Source, the opposite product from Enterprise Information Catalog, that says here's the risk profile of all those data sources for HIPAA and protected health information, so we got a great answer to that question. And then you look and you say, well how do I fundamentally work with data differently, and that's the idea of a data lake. Rather than making data hard to get in so it's easy to query, which is the traditional enterprise data warehouse, and even people who do enterprise data warehousing well, little secret is, takes too long, costs too much, and it's not agile.
>> Yeah.
>> We're not suggesting for a second that a centralized repository, trustworthy data governed within an inch of its life so that it can be used broadly throughout the organization without people hurting themselves, is not good; it just can't be the only place to work with data. Takes too long, costs too much, and it's not agile. What you want is the data lake that says put all the data that you care about in one place, big data, IoT data, data that you don't know what you're going to use, and apply effort at query time only to the data that you care about.
>> And we're always talking about cleanliness and hygiene yesterday versus heart surgeon, different roles in an organization, the big fear that we hear from customers we talk to on The Cube, I want to get your thoughts and then reaction on this, is that my data lakes turn into a data swamp. Because it's just, I'm not using it, it's just sitting there, it gets stale, I'm not managing it properly, I'm not vectoring it into the right apps in real time, moving it around, your reaction to that objection.
>> Early days of the data lake, absolutely data swamp, because we didn't have the tools, people weren't using them correctly, so just because you put it in a data lake doesn't mean that it's ungoverned. It doesn't mean you don't want to put the catalog on it so you know what's there and how to use it.
It doesn't mean you don't want to have end to end transparency and visibility from the data consumer to the data source, because transparency is actually the first level of governance. That's what provides confidence. It's not agreeing on a single version of the truth and making sure the data's right. It's just simply allowing the transparency, and so when you have a data lake with a catalog, with Intelligent Data Lake for self service data preparation, with the ability to see end to end what's happening with that data, I don't care that it's not been governed if I can inspect it easily and quickly to validate that your assumptions are reasonable, 'cause this is the biggest thing in healthcare. We can't handle the new data, the IoT data, and the scope of things we want to do, the old way.
>> Yeah, we have limited time.
>> One last question. The Framingham Heart Study has shown us that healthcare data ages differently than most other data. How do we anticipate what data's going to be important today and what data's going to be important in the future? Given that we're talking about people and how they age over time.
>> So the key thing with that, and we talked about it earlier, is you can't analyze data that you threw away. And so a big part of this is, if the data might potentially be of interest, stage it, and don't put it in an archive, don't put it someplace in the database backup, it's got to be staged and accessible, which is the data lake.
>> And ready.
>> And ready, you've got to, and you can't have distance between it. Somebody can't have to go and request it. They need to be able to work on it. And that's the revolution that really is represented by data 3.0, we finally can afford to save data, huge amounts of data, that we don't know we care about. Because somebody may care about it in the future.
>> Peter: That's right.
>> Great, Richard, great commentary, great insight, and appreciate you coming on The Cube and sharing what's new in healthcare, obviously super important, again they're running like a business, a lot of optimization, a lot of changes going on, you guys are doing some good work there, congratulations on the data 3.0 strategy. Hopefully that'll permeate down to the healthcare organizations, and hopefully the user experience, me, the patient, when I go in, I want to be in and out
>> Peter: Wellness.
>> Of the hospital, and also preventative, which I'm trying to do a good job on, but too many Cube interviews keeping me busy, I'm going to have a heart attack on The Cube, no I'm only kidding (laughs) Great coverage here at Informatica World in San Francisco, I'm John Furrier, Peter Burris, more live coverage of day two at Informatica World, Cube, we'll be right back, stay with us.
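(An editorial aside, not part of the interview: the pattern Richard describes, raw data landing in a lake with a catalog over it and a schema applied only at query time, can be sketched in a few lines. The snippet below is generic Python and pandas, not Informatica's Enterprise Information Catalog, Secure@Source, or Intelligent Data Lake; the paths, fields, and PHI flag are invented for illustration.)

```python
# Editor's sketch with hypothetical paths and fields: files land in the lake as-is,
# a lightweight catalog records what exists, who owns it, and whether it carries
# protected health information, and a schema is applied only at query time,
# for just the columns the question needs.
import pandas as pd

catalog = {
    "claims_2016": {"path": "lake/raw/claims_2016.csv", "owner": "rev_cycle", "phi": True},
    "telemetry": {"path": "lake/raw/device_telemetry.csv", "owner": "biomed", "phi": False},
}

def query(dataset, columns, dtypes=None):
    """Schema-on-read: pull only the requested columns, casting as we read."""
    entry = catalog[dataset]  # discovery goes through the catalog, not tribal knowledge
    if entry["phi"]:
        print(f"note: {dataset} contains protected health information")
    return pd.read_csv(entry["path"], usecols=columns, dtype=dtypes)

# Ask a question without first modeling the whole warehouse
# (readmitted is assumed to be coded 0/1 in this made-up extract):
readmits = query("claims_2016", ["patient_id", "admit_date", "readmitted"],
                 dtypes={"patient_id": "string"})
print(readmits["readmitted"].mean())
```

The governance hooks discussed above would hang off the same catalog entries: end-to-end lineage and a per-source risk profile that are inspected when the data is used, rather than every dataset being perfected before anyone can touch it.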

Published Date : May 17 2017

