Nader Salessi and Scott Shadley, NGD Systems | CUBEConversation, August 2018
(energetic music) >> Hi, I'm Peter Burris and welcome to another CUBEConversation from our wonderful studios in Palo Alto, California. Today we're talking storage, not just any kind of storage, but fast, intelligent storage. We've got NGD Systems with us, and specifically welcome back to theCUBE, Nader Salessi, CEO and founder, and Scott Shadley, VP of Marketing. >> Good to see you again, Peter. >> So, the last time we were here we had a great conversation about the role that storage is going to play in overall system performance. And Nader, when I think of NGD Systems, I think of really smart people doing great engineering to create really fast, high-performance products. Where are we in the state of the art of fast storage, fast systems? >> So, what we are learning from the customers is that demand for storage continues to grow exponentially. They want larger capacity per drive. The challenge they have is that physical space is always limited, and so is power consumption. It is not just the power consumption of the device; they also have limited resources for building out their hyperscale data centers, from the physical space to buying servers, network, and storage. The challenge they face is that the power available from the utility companies is limited. They cannot overcome that. So, if they need to double the size of their storage in a year's timeframe, they cannot get access to the utilities and the power. So, they need to focus on energy efficiency. >> So, when I think of NGD Systems, what I should think about is smart, fast, and efficient from a power standpoint. >> That's correct. So, that's one of the areas where we are focusing a lot, to be energy efficient. We are improving the watts per terabyte by a factor of 10 compared to the best-in-class SSD drives that exist in the industry. >> Oh, let me make sure I got that. So, you've improved watts per terabyte by a factor of 10 for the same capacity.
>> Correct. Meaning we are improving watts per terabyte in the same physical space. And that's the challenge that the industry is facing. >> Got it. >> The next challenge that all the hyperscalers are facing, and we are learning from them, is that moving the data is a challenge. It just takes time and it's not efficient. So, the more they can do inside of the drive, manipulating the data without moving the data, that's what they are looking for. And that's exactly where we are focusing with the intelligent product that we're introducing. In the fourth quarter of this year, we are introducing a mass-producible solution that can be taken to mass production. >> So, give us an example of that, 'cause I know you were one of the first suppliers of technology that did things like bring MapReduce down closer to the data. Is that the basic notion that we're talking here, and what are the use cases we're looking at? >> So, there are by far a lot more use cases, and I'll let Scott go into some of the use cases that we have implemented as an example with some large partners, which we are also announcing this coming week, or next week, during FMS. So, Scott, do you want to explore? >> Yeah, absolutely. So Peter, just to speak to your point, there are a lot of different ways you can look at making storage intelligent. When we looked at it, we took a different direction. We're not trying to just do simple things like the minor database applications; we're going for what's new and innovative in the way of things like AI and machine learning. So, we talked last time a little bit about this image similarity search concept. As Nader mentioned, we're going to be live with a guest speaker at FMS implementing a version of that. >> Now, FMS is Flash Memory Summit. >> Yes, for those that don't know, Flash Memory Summit happens every year.
Other things that we've worked on, again with partners, relate to things like relational databases and being able to do things like implement Google TensorFlow live in the drive. We've also been able to port Docker containers directly into the drives, so there's now the ability for a customer to take any application they're running, whatever format it's in, and literally drop it in a container format into the drive and execute the commands in place on the data. And we're seeing improvements of 10 to 50x on execution time of those applications because they're not physically moving data around. >> So, to kind of summarize this, if a customer, a user, has a choice of moving 50 terabytes of raw data around, as opposed to moving maybe a couple of hundred kilobytes or megabytes of application down to the drive, then obviously you want to move the smaller one down. But it requires a fair amount of processing power and control to be located very close to the data. So, how's that happening? >> So, by architecting the fundamentals of the storage from scratch, we are able to provide the right solution. Within each SSD there is a controller for managing the flash and interfacing with the host. As part of that, we have embedded additional resources. Part of it is a quad-core, 64-bit application processor running at at least a gigahertz, so that an application can come down and run on it, or an operating system can run on it. In addition to that, we have embedded hardware accelerators to accelerate certain functions where that makes sense. Plus there is access to the data itself, which is readily available at a much higher bandwidth than the host interface. So, that's how we do it at that end. Then of course, we provide the complete software stack to make it easy for customers to bring their applications, rather than starting from scratch or having very specialized and custom solutions.
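The trade-off Peter summarizes, shipping kilobytes of application down to the drive instead of shipping terabytes of raw data up to the host, can be sketched with a small simulation. Everything below is illustrative (made-up record sizes and a toy predicate), not NGD's actual interface:

```python
def host_side_filter(records, predicate):
    """Conventional model: every record crosses the host interface,
    and the CPU does the filtering."""
    bytes_moved = sum(len(r) for r in records)
    matches = [r for r in records if predicate(r)]
    return matches, bytes_moved

def in_situ_filter(records, predicate):
    """Computational-storage model: the drive runs the predicate in place;
    only matching records cross the host interface."""
    matches = [r for r in records if predicate(r)]
    bytes_moved = sum(len(r) for r in matches)
    return matches, bytes_moved

# Hypothetical workload: 1,000 log records, of which only a few are relevant.
records = [b"ok " * 100 for _ in range(990)] + [b"ERROR disk fault"] * 10
pred = lambda r: r.startswith(b"ERROR")

hits_host, moved_host = host_side_filter(records, pred)
hits_drive, moved_drive = in_situ_filter(records, pred)
assert hits_host == hits_drive          # same answer either way
print(f"host-side filtering moved {moved_host} bytes over the bus")
print(f"in-situ filtering moved  {moved_drive} bytes over the bus")
```

The answer is identical; only the bytes crossing the bus change, which is where the 10 to 50x execution-time claims come from when the data set is large and the hit rate is low.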
>> So, when I think about it, if I'm a CIO or a senior person in infrastructure, I'm thinking, what workloads naturally lend themselves to high degrees of parallelism? Then I'm thinking, how can I move more of that parallelism closer to the data? Have I got that right? >> Exactly. >> Absolutely. >> So, how's this turning into product? >> So, from that perspective, we've now released two platforms we've called the Catalina Family of Solutions. They've been POCs, prototypes, and some limited production volume. As Nader mentioned, our third platform, which we're calling the Newport Platform, is going to be an ASIC-based solution that's going to be able to drive that mass-market option that he referenced. There's a whole bunch of unique things about it. We have the application coprocessors. It's the first SSD controller ever to be done on a 14-nanometer process node, so that's where the energy efficiency piece of this comes into play, and the fact that we can do the densities the customers are looking for. 'Cause right now, there's a challenge in the market to be able to do a large enough drive at the right performance characteristics and power consumption to solve the need. >> So, you're following locations in Southern California, from Catalina to Newport. In the next couple of years you'll be in the San Bernardino Mountains. >> Sure. >> So, as we think about where the technology is, give us an example of the performance improvements you're seeing from an overall benchmark standpoint. >> So, one of the other use cases, and it may not be intuitive to think about this, is video content, video content everywhere. The new generation of content is large, it's massive, and it requires massive amounts of storage. The old-fashioned way of doing it, they have multiple drives in a server; everything converges and goes through another server for the encryption and authentication.
Well, we are moving that function inside of the storage. Now, all of a sudden, in the same server, instead of everything converging and going through one narrow pipe, all the drives concurrently can serve multiple subscribers in parallel, by more than a factor of 10. And that's substantial from the performance point of view. So, it is not necessarily the old-fashioned way of measuring it, what's the IOPS; those are the old ways of measuring. The new way is how many end users can access the data without hitting a bottleneck. And that's, again, another use case of it. The other use case, as Scott mentioned, is doing image similarity search. In the old-fashioned way, when they were accessing a billion images, it worked fine with the current off-the-shelf SSDs, the current servers, and GPUs. The challenge they are facing is that as they increase this database to a trillion images, it just cannot be done the old way. So, it's more than just how many gigabytes they push through or how many IOPS. It's being able to look at it from the system-level point of view: how many subscribers or how many customers can access it concurrently. >> So, you're describing a number of relatively specialized types of applications, but nonetheless, applications of significant value to their businesses. But let's talk just for a second about how a customer would employ the technology. Customers don't mind specialized or more specialized devices as long as they fit within the conventions for how they get used. So, what's the complexity of introducing your product? >> Very good point that you're raising. Fundamentally, we are a solid state drive, block storage based on PCIe NVMe, without any special drivers. They plug it in, it's plug and play. It works.
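Moving per-subscriber encryption and authentication into the storage, as described above, amounts to deriving an independent key per subscriber and encrypting each stream at the drive. The sketch below is a toy illustration of that isolation property only (a hash-based keystream is NOT production cryptography, and the key names are our assumptions, not NGD's design):

```python
import hashlib
import hmac

def subscriber_key(master_key: bytes, subscriber_id: str) -> bytes:
    """Derive a per-subscriber key at the drive (HKDF-like sketch)."""
    return hmac.new(master_key, subscriber_id.encode(), hashlib.sha256).digest()

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher for illustration only: XOR data with a
    hash-derived keystream. Symmetric, so it also decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

master = b"drive-master-secret"   # assumed provisioning secret
chunk = b"video frame bytes..."

alice_ct = keystream_xor(subscriber_key(master, "alice"), chunk)
bob_ct = keystream_xor(subscriber_key(master, "bob"), chunk)

assert alice_ct != bob_ct   # the same content leaves the drive differently per user
assert keystream_xor(subscriber_key(master, "alice"), alice_ct) == chunk
print("each subscriber's stream is encrypted independently at the drive")
```

The point of the sketch is that the plaintext never has to leave the drive for a separate encryption server: each concurrent reader gets a stream only their key can open.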
On top of that, for this scenario of block storage, we have the highest capacity and the lowest power consumption, or lowest watts per terabyte, servicing the majority of the market, which nowadays is focused more on reads and consistent reads, rather than, again, IOPS or how fast the write is. Our architecture and algorithms are set up so that we provide a very narrow band of consistent latency no matter what workload they put on it, and provide the right solution for them. Then on top of that, if they have a specialized workload or use case, they can still enable or disable it with a simple software switch. >> So, Scott, when you think about partners, the ecosystem, I know we talked about this a bit last time, getting started, expanding it, where are we in terms of NGD Systems getting to market? >> Absolutely. So, from that perspective, we've gone beyond the proof-of-concept-only phase. We've actually got production orders that have shipped to customers. We're starting to see that roll out in the back half of this quarter. As Nader mentioned, we roll into Q4 with the new product, then upgrade those customers and start getting into even larger rollouts. But it's not just a couple of mom-and-pop shops. It's some big names. It's some high-level partners. And we're starting to now build out the ecosystem and how to deliver it through server ODMs or other partners that can play off of the system, whether it be storage array providers or even some of the big-box players. >> So, we're now here with Newport. >> Yes. >> You've no doubt got plans. We don't have to go too deep into them, but as your company starts to scale, what's the cadence going to look like? Are you going to be able to continue to push the state of the art from a performance, smarts, and energy efficiency standpoint?
>> Absolutely. There are already things in the pipeline for the next generation: how to bring more intelligence inside of the drive, with more resources for a lot more workloads, able to adapt itself to many, many use cases, rather than only maybe a dozen use cases today, going toward an unlimited number. It truly is a platform, rather than something unique to one application. And then we're going to expand on that toward the next generation. So obviously, as we ramp up the first generation of product in mass production, the R&D is working on the next generation of the intelligence we've got to pour into it, and we continue that cadence. And of course, we scale the company accordingly. >> Great news from NGD Systems. Nader. >> It is wonderful. At this FMS coming up, we are announcing our new generation of product, as well as announcing a close partnership with one of the hyperscalers, with whom we are introducing the next generation of product. >> Fantastic. Scott Shadley, VP of Marketing. Nader Salessi, CEO and founder. NGD Systems, thanks very much again for being on theCUBE. And to you, once again, thanks for watching this CUBEConversation. Until we meet again, thanks for watching. (energetic music)
Nader Shalessi, NGD Systems & Scott Shadley, NGD Systems | CUBEConversation, March 2018
>> Hi. I'm Peter Burris and welcome to another CUBEConversation. We're here in our Palo Alto studios and we've got some really interesting guests and a really interesting topic. We're going to talk about something called computational storage. Nader Salessi is the CEO of NGD Systems. >> Hello. >> And Scott Shadley is the VP of Marketing of NGD Systems. >> Pleasure to see you again, Peter. >> So guys, let me set the stage and let's get into this, 'cause this is actually kind of interesting. If you think about a lot of the innovations happening in the marketplace right now, in the tech industry right now, we're talking about greater densities of data, more advanced algorithms being applied against that data, greater parallelism in the compute, and more aggregate I/O required. The presumption behind all this is that we're going to be flying data all over the organization, and the other presumption is that things like energy consumption are unlimited, who cares. But we know the reality is something different. There is an intersection amongst all of that that seems to need addressing. Nader, take us through that. >> And that's exactly what we are addressing. So, beyond the energy efficiency of large-capacity storage: instead of moving the data to do any computation on the data, we bring the computation inside the storage, to do the computation locally, in a distributed fashion across the number of storage devices in a server, without the need to move the data, and saving energy. The main focus point for a lot of mega data centers is the energy density: watts per terabyte, or watts per terabyte per square inch. And that's exactly what our technology is addressing, bringing a more energy-efficient computational storage to the market. >> So let me build on that a little bit. See if I got it. So in your traditional large system, you have an enormous amount of data, and you have a bunch of logic dedicated to knowing where the data is. Find it.
Once it finds it, it brings it and presents it to a CPU, a server somewhere, which then takes some degree of responsibility for formatting it and presenting it to the application. And you're bringing that out and putting it down closer to the storage itself. So instead of having this enormous bus that's humming along at unbelievable speeds and maybe 35, 40 watts off the card, you're doing it for-- >> A fraction of that. We'll be able to do that with eight terabytes in an eight-watt envelope, or 64 terabytes in a 15-watt envelope. That's the part that doesn't exist today: being able to not only do the storage part of it, but bring the application down seamlessly, without changing the application, act on the data, and send just the subset of the results to the upper levels of the application. That's what the market is looking for, and it doesn't exist today. >> So you're still using industry standard memory. You're still using industry standard form factors. What is the special sauce inside this that makes it faster and cheaper from a power standpoint? >> Very good question. So we are using standard PCIe and the NVMe protocol for the drive. Our technology, the algorithms and the controller technology, can handle this large capacity of NAND, and we are flash agnostic, so it could be any NAND; in fact, later on it could be any NVM, it doesn't need to be NAND. And with the additional resources, through standard TCP/IP we can bring the application down. We're not making any changes to the application. >> So we're taking a new approach to thinking about how I/O gets handled at the storage device. It's got to create some use cases, Scott. Tell us about some of the use cases. >> From a use case perspective, you can think about it in simple terms, like traffic jams.
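The power envelopes quoted above are easy to turn into the watts-per-terabyte metric Nader keeps returning to. A back-of-the-envelope sketch using only the figures from the conversation:

```python
def watts_per_terabyte(watts: float, terabytes: float) -> float:
    """The energy-density metric hyperscalers budget against."""
    return watts / terabytes

# Envelope figures quoted in the conversation:
small_drive = watts_per_terabyte(8.0, 8.0)    # 8 TB in an 8 W envelope
large_drive = watts_per_terabyte(15.0, 64.0)  # 64 TB in a 15 W envelope

print(f"8 TB drive:  {small_drive:.2f} W/TB")
print(f"64 TB drive: {large_drive:.3f} W/TB")
print(f"going 8x denser improves W/TB by {small_drive / large_drive:.1f}x")
```

Note how the metric rewards density: the 64 TB envelope draws less than twice the power of the 8 TB one while holding eight times the data, which is why capacity per drive and energy efficiency are treated as one problem rather than two.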
If you have a traffic jam on the freeway when one lane of traffic gets stuck, well, if the cars are able to relocate and do the movements on their own, you eliminate the traffic bandwidth problem. What we do is allow you to say, okay, I'm going to go look for a picture in a data set. Instead of having the CPU ask for all the different pictures, do the comparison in memory, and tie up CPU resources, you just tell the drive: go find this picture. It goes and finds this comparison picture, tells you all about the picture, and sends just that little tidbit back to you. So if you're collecting hundreds of thousands of Facebook photos today, you can analyze those and tell every person that's looking for a different photo what their photo is, without having to use massive I/O bandwidth. >> So traditional high-performance computing? >> Yes. >> IoT? >> IoT. All of the AI where you're looking for things, where you're trying to have artificial intelligence be smarter, you have to throw CPUs and GPUs at it. Start throwing more storage at it, 'cause you have to store all the data you're generating. Why not let the storage do some of that work? You can offload some of it from CPUs and GPUs, and you can scale more effectively. >> So my colleague, David Floyer, has been talking about how, for example, MapReduce and Hadoop could be accelerated pretty dramatically. But it's got to be more than just MapReduce? How are you supporting a range of applications? >> A totally separate set of use cases, different from these, is content delivery, video delivery on the last mile or the last hundred feet. So today, everybody's recording at home on their DVRs. What if, instead of having 10,000 DVRs in 10,000 homes, it sits in a central place holding hundreds of thousands of videos, and everybody points to it? The new challenge with that is the security portion.
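The "go find this picture" flow Scott describes can be sketched as a nearest-neighbor search where each drive compares a query feature vector against its own catalog and returns only its best match, a few bytes, for the host to merge. The vectors, photo IDs, and three-drive layout below are toy assumptions for illustration:

```python
def l2sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def drive_best_match(catalog, query_vec):
    """Runs on the drive: scan the local catalog, return only the
    best (photo_id, distance) pair instead of shipping every photo."""
    return min(((pid, l2sq(vec, query_vec)) for pid, vec in catalog.items()),
               key=lambda t: t[1])

def global_best(drives, query_vec):
    """Host side: merge one tiny answer per drive."""
    return min((drive_best_match(d, query_vec) for d in drives),
               key=lambda t: t[1])

# Hypothetical 3-drive photo index with toy 3-D feature vectors.
drives = [
    {"cat_001": (0.9, 0.1, 0.0), "dog_007": (0.1, 0.8, 0.1)},
    {"cat_042": (0.95, 0.05, 0.0)},
    {"car_013": (0.0, 0.1, 0.9)},
]
query = (1.0, 0.0, 0.0)
print(global_best(drives, query))  # the globally nearest photo id and distance
```

Only the comparison results ever cross the host interface; the photos and their feature vectors stay on the drives, which is what keeps the I/O bandwidth out of the critical path as the catalog grows.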
With our technology, we can do the encryption on the way out, and the authentication, right at the storage, so the concurrent users can be protected from each other. And that technology doesn't exist. >> Let me think about the business model implications for a second. So I might, as a private citizen, enter into a deal with Xfinity, for example, in which I agree to be the point of presence for my entire neighborhood. Is it that kind of thing we're talking about? >> Exactly. So that's the new edge delivery, but with a higher security that doesn't exist today, because it's a major challenge for everybody. >> Interesting. >> For the security and authentication. Even within the same household there could be multiple users that need to be protected from each other. >> Very interesting. So Scott, you've got a fair amount of background in the systems universe. How is this technology going to change the way we think about systems? >> Yeah, so the beauty of this is we all thought NVMe was going to be the savior of the world. It comes in as flash storage. It gives you the unlimited PCIe bandwidth bus. The problem is we've already saturated that bus. We've got devices where a box can hold 24 NVMe drives, but you can only operate three or four of them at a time, even with 16 lanes of PCIe 3. We're going to PCIe 4. We've still got a bottleneck, because all of the I/O still has to go from the drive to the host and back to the drive and be managed, because on traditional storage you can't run anything other than just data placement. Now, the drives are smart. They're relocating the data on them, protecting it, whatever else, but they're still not doing what can really be done with them. Adding this layer of computational storage with devices like ours, all the host has to do is go ask the question, and the storage can go do its thing.
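Scott's point that only three or four of a box's 24 drives can run flat out can be checked with rough arithmetic. The per-lane and per-drive throughput numbers below are assumed round figures for PCIe 3.0-era hardware, not measurements:

```python
def drives_to_saturate(host_lanes: int, gbps_per_lane: float,
                       drive_gbps: float) -> float:
    """How many drives, reading at full speed, fill the host's PCIe uplink."""
    return (host_lanes * gbps_per_lane) / drive_gbps

# Assumed round numbers: ~1 GB/s usable per PCIe 3.0 lane;
# each NVMe drive is x4, so up to ~4 GB/s per drive.
n = drives_to_saturate(16, 1.0, 4.0)
print(f"~{n:.0f} drives saturate a 16-lane host uplink")
print(f"in a 24-drive box, ~{24 - n:.0f} drives' worth of bandwidth is stranded")
```

The stranded bandwidth is exactly what in-drive computation reclaims: a question and its answer are tiny, so all 24 drives can be busy even though the uplink could never carry all 24 drives' raw data at once.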
So if I've got 24 drives, I can go ask 24 questions, and I still have bandwidth to actually write data into that system, or read other data out of that system in a random access pattern. >> So that brings us back to the question I asked earlier. Namely, to make this more general purpose, there have got to be pretty robust software capabilities or libraries. How is that being handled so it can be made more general purpose, and folks aren't building deep into architecture-specific controller elements? How is that happening? How does it work? >> So one of the biggest tricks, whenever you bring something kind of new and innovative that actually solves a problem that does exist, is how to get people to adopt it, right? 'Cause I want ease. I want to keep it simple. It took forever to get people to adopt SSDs, and now we're telling them that we're giving them smart SSDs. What we're saying, and what we're able to accomplish with what we're doing on the library front, is a very light touch. We're using the NVMe protocol. We're tunneling through it with a host agent, which is a very small modification at the host, and it communicates with all the different drives. So, simplifying that crossover of information is really what's important, to your exact point, and we do that through a C library, and it's very adaptable to various different workloads. It's not tied to each workload having to be independently written. >> So enterprises of all sorts are actually trying to drive applications that are more data-oriented, more computationally oriented around that data. Get the computations closer. You guys are helping. From a new systems design standpoint, we still think NVMe over Fabrics is going to be very, very important, but this could complement it. >> Exactly. >> Especially where I/O and the energy of that bus become a crucial issue. What's on the horizon? >> Deploying this and driving the energy efficiency. It continues to be the biggest point no matter what we do.
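The "24 drives, 24 questions" pattern, where the host agent fans a query out and each drive answers over its own shard in parallel, can be sketched as follows. The drives here are simulated as in-memory shards and the thread pool is a stand-in for the host agent; none of this is NGD's actual C library:

```python
from concurrent.futures import ThreadPoolExecutor

def drive_query(shard, needle):
    """Stand-in for a query one smart drive executes over its own shard."""
    return [item for item in shard if needle in item]

def fan_out(shards, needle):
    """Host agent: ask every drive the same question concurrently and
    merge only the hits; the raw shards never cross the bus."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        partials = pool.map(lambda s: drive_query(s, needle), shards)
    return [hit for partial in partials for hit in partial]

# Hypothetical 4-drive system with records sharded across drives.
shards = [
    ["alpha.log", "beta.log"],
    ["gamma.log", "needle.log"],
    ["delta.log"],
    ["needle2.log", "epsilon.log"],
]
print(fan_out(shards, "needle"))
```

Because every query returns only its matches, the host's remaining bandwidth stays free for ordinary reads and writes, which is the property Scott highlights.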
There is not enough energy in the world, with the amount of storage and servers and compute that's being deployed. That's another area that we are focusing on and will continue to focus on: having the most optimal energy efficiency in the smallest footprint. >> So I got one more question. NGD Systems is not a household name. Where are you guys from? >> So we started the company about five years ago. Before that, myself, as well as my two co-founders and a team of engineers, used to be at a company called Western Digital for a couple of years, doing enterprise-class SSDs like this. Before that, starting in 2003 in the SSD field, I started a product and business line for a company called STEC, where we created industrial SSDs that later became an enterprise-class SSD. We became known for enterprise-class SSDs in the industry. That's the heritage of the last 15, 17 years, with many years of SSD development. But this computational storage takes an optimized SSD, for a category that doesn't exist today, and adds a computational storage capability on top of it. >> Scott, last word? >> Yeah. Just from that perspective, we really didn't get into a lot of detail on it, but the capability of reducing the amount of compute you need in a server, whether it be a CPU, GPU, or otherwise, and actually being able to use intelligent storage to drive bandwidth growth, the NVMe fabric, or just the per-box density, is something that nobody's really taken a significant look at in the past. This is a definite solution to move it forward. >> So I'm going to turn that around and say software developers always find a way to fill up the space. So you can, on the one hand, look at it as maybe you have low-cost CPUs; but even if you have the same-cost CPUs, you can do so much more, 'cause you can move so much more work out closer to the data. >> Correct. >> All right. NGD Systems. Very, very interesting conversation.
Thanks so much for coming and being on theCUBE. Once again, this is Peter Burris with a CUBEConversation. We've been speaking with NGD Systems. Thanks a lot for watching.