
Search Results for "John Rogers":

Miniaturized System for Cell Handling and Analysis


 

>> So nice to meet you. I'm Tetsuhiko Teshima from the German branch of MEI Laboratories, and I work at the Technische Universitat Munchen, where I conduct wet experiments using chemical and biological samples. It's a great honor and pleasure to have the chance to share with you some topics about the miniaturized biointerfaces I have been working on over the last six or seven years. Before starting, please let me introduce myself and my background. I joined this company this March, but until last year I was working at NTT Basic Research Laboratories, located in Kanagawa, Japan, on basic nanoscience research. Going back further, I was originally a student of biology, especially infectious microbiology. I then learned about miniaturized fluidic systems for manipulating single cells, and about MEMS technologies, which are the fabrication processes used for semiconductor devices. That background motivated me to start interdisciplinary work in biomedical engineering at NTT Corporation.

In recent years, wearable electrodes have been developed to continuously monitor vital data, including heart rate and ECG or EMG waveforms, for rapid diagnosis and early-stage treatment of disease. Conventionally, rigid metals or metal-plated fibers have been widely used as the electrodes, but they lack flexibility and biocompatibility, which results in noisy data and allergic reactions in patients during long-term use. At NTT, we have been working on the research and development of conductive composite materials. Thanks to their high flexibility, hydrophilicity, and biocompatibility, these electrodes can record ECG without causing any rashes or itching on the skin. These wearable electrodes are now commercially available, and they are being applied not only to medical care and rehabilitation for patients but also, for example, to remote monitoring of workers, integration with sportswear, and entertainment shows. This product originated from basic scientific findings on the conductive polymer PEDOT:PSS and silk fibers. The work was led mainly by two key scientists: the clinician Dr. Tsukada and the chemist Dr. Nakashima. To realize the product, they built many prototypes and made a great effort to obtain pharmaceutical approvals for medical use. Through this experience, we are going back to the original materials science research and making non-toxic interfaces with cells and tissues, in order to seek new kinds of development.

As a next challenge, I have focused on electrodes that work inside the body. We have tissues and organs that carry electrical signals, such as the heart and the brain. If implanted electrodes can work on these tissues, that helps us increase the variety of vital data we can collect, such as EEG, and it also allows us to treat the targeted tissues directly, as a surgical tool, as in CRT pacing. In this case, the biointerfaces have to operate in a very humid environment and in a non-toxic manner. They also have to be transformed into soft, three-dimensional structures in order to fit the shape of cells and tissues, because these have very complicated 3D structures. So I decided to develop a basic electrode component that meets all of these requirements and is biocompatible: 3D film electrodes.
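As a purely illustrative aside (the talk itself does not describe NTT's signal-processing chain), the heart-rate figure that such a wearable ECG electrode reports is typically derived from the waveform by locating the R-peaks and averaging the intervals between them. The sketch below shows only that general idea; the sampling rate, threshold, and synthetic test signal are assumptions made up for the example.

```python
import numpy as np

def heart_rate_from_ecg(samples, fs, threshold_ratio=0.6, refractory_s=0.25):
    """Estimate heart rate (beats per minute) from a raw ECG trace.

    samples         : 1-D array of ECG amplitudes
    fs              : sampling rate in Hz
    threshold_ratio : fraction of the maximum amplitude used as the R-peak threshold
    refractory_s    : minimum spacing allowed between detected beats, in seconds
    """
    x = np.asarray(samples, dtype=float)
    x = x - np.mean(x)                       # remove the DC offset / baseline
    threshold = threshold_ratio * np.max(x)  # crude amplitude threshold
    refractory = int(refractory_s * fs)      # samples to skip after each detected beat

    peaks = []
    i = 0
    while i < len(x):
        if x[i] > threshold:
            # take the local maximum inside the refractory window as the R-peak
            window_end = min(i + refractory, len(x))
            peak = i + int(np.argmax(x[i:window_end]))
            peaks.append(peak)
            i = peak + refractory
        else:
            i += 1

    if len(peaks) < 2:
        return None                          # not enough beats to estimate a rate
    rr_intervals = np.diff(peaks) / fs       # R-R intervals in seconds
    return 60.0 / np.mean(rr_intervals)      # beats per minute

# Illustrative use with a synthetic spike train: one sharp "beat" per second (60 BPM).
fs = 250
ecg = 0.05 * np.random.randn(10 * fs)
ecg[::fs] = 1.0
print(round(heart_rate_from_ecg(ecg, fs)))  # prints 60
```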
What I tried first was to create non-toxic, very soft and flexible film electrodes using the same materials as the wearable electrodes: silk bundles and PEDOT:PSS. First, I dissolved the silk bundles to extract a specific protein, silk fibroin, and processed it using MEMS technologies, one of my main skills. As the conductive polymer PEDOT:PSS is added little by little, the material gradually turns blue but maintains high optical transparency. Through this experiment, I discovered a very unique materials-science aspect of silk fibroin: when PEDOT:PSS is added, the molecular structure and conformation of the silk protein change dramatically from alpha helix to beta sheet, and I found that this structural change leads to an increase in conductivity compared with pristine PEDOT:PSS films. Using a lithographic fabrication process, the films can be processed into very tiny shapes, down to dimensions comparable to a single cell. Because this electrode is made of silk fibroin, a very cell-friendly protein, suspended cells prefer to adhere to its surface. After attaching cells to the surface, I can manipulate the cells while maintaining that adhesion, electrically stimulate them, and record the very weak electrical signals they produce. At this step we had created non-toxic, transparent, and very flexible films and film-based electrodes, but please note that they are 2D, not 3D.

So in the next step, I investigated how to transform these 2D films into 3D shapes. Of the two polymers I had used, I replaced PEDOT:PSS with a different type of polymer, parylene. When the parylene adheres to the silk fibroin layer, a gradient of mechanical stiffness forms across the bilayer, and this gradient provides the driving force for the film to fold by itself. This is a movie of the self-folding bilayer films: you can see the rectangular patterns spontaneously transform into cylindrical shapes. Just before folding, I seeded cells derived from heart muscle on top of the films. The folding films gently wrap the cells inside the tubes, and you can incubate them safely for more than two weeks in order to reconstitute self-beating, fiber-shaped muscle tissues, as shown here. These reconstituted tissues can also be manipulated like building blocks, picked up and released using glass capillaries. I believe this technique has the potential to facilitate higher-order self-assembly, such as artificial neural networks, and tissue engineering. Having managed to transform 2D films into 3D shapes, I then used this method to build 3D electrodes.

In the final step, instead of silk fibroin, I focused on an extremely thin electrode material called graphene. As the words extremely thin suggest, it consists of only a single layer of carbon atoms. Since it is just one atom thick, it has very high optical transparency and flexibility. When the graphene was transferred to the parylene surface, I found that the bilayer bonded tightly due to strong molecular interactions, the graphene itself was strained on the parylene surface, and this self-folding film became a three-dimensional electrode with a tube-like structure, as you can see in this movie.
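The folding mechanism described here, a stiffness and strain mismatch between the parylene and the silk-fibroin layer, is the same one captured by the classic Timoshenko bilayer-bending formula, so the resulting tube radius can be sketched numerically. The moduli, thicknesses, and mismatch strain below are illustrative assumptions, not values from the talk; the point is only that a thin, stiff layer on a soft, swelling layer rolls up at a cell-scale radius.

```python
def bilayer_curvature(e_top, t_top, e_bottom, t_bottom, mismatch_strain):
    """Curvature (1/m) of a two-layer strip from the Timoshenko bimorph formula.

    e_*             : Young's moduli of the two layers (Pa)
    t_*             : layer thicknesses (m)
    mismatch_strain : differential strain between the layers (dimensionless),
                      e.g. from swelling or deposition stress
    """
    m = t_top / t_bottom          # thickness ratio
    n = e_top / e_bottom          # stiffness ratio
    h = t_top + t_bottom          # total thickness
    numerator = 6.0 * mismatch_strain * (1.0 + m) ** 2
    denominator = h * (3.0 * (1.0 + m) ** 2
                       + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return numerator / denominator

# Illustrative numbers only (not measured values from the talk):
# a ~100 nm parylene film (~4 GPa) on a ~1 um hydrated silk-fibroin film (~10 MPa)
# with a 5% differential strain.
kappa = bilayer_curvature(e_top=4e9, t_top=100e-9,
                          e_bottom=10e6, t_bottom=1e-6,
                          mismatch_strain=0.05)
print(f"radius of curvature ~ {1e6 / kappa:.1f} micrometres")  # roughly 15 um
```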
Just after the films are released from the substrate, they instantly undergo a transition and fold up. Since the hexagonal molecular structure of the graphene is distorted by the folding process, its electrical characteristics change dramatically, from metallic at first to a semiconductor-like, non-linear response, as shown here. Interestingly, the curvature and the direction of the self-folding can be controlled well through the number of graphene layers and their crystalline directions. When multiple layers of graphene were transferred, the radius of curvature became smaller and smaller, and when single-crystalline graphene was loaded on the surface of the parylene, the bilayer folded in one fixed direction, along the armchair direction. So by simply transferring a single layer of carbon atoms to the parylene surface, we achieved the self-assembly of 3D transparent electrodes.

In order to demonstrate the biocompatibility of these graphene electrodes, we applied them to an interface with neurons. As with the self-folding silk fibroin, the suspended neurons were encapsulated in the self-folded graphene tubes, like this. I made very tiny holes in the films so that the encapsulated neurons can take up nutrients and oxygen through these pores. I cultured the neurons without any damage to the cells; they exhibited cell-cell contacts, formed tissue-like structures, and extended their neurites and axons to the outside through the pores. The embedded neurons therefore properly exhibit cell-cell interactions and retain their intrinsic morphologies and functions, which demonstrates the biocompatibility of the graphene electrodes.

In summary, we have been working on producing tiny 3D electrodes step by step, using only four materials. By mixing the conductive polymer PEDOT:PSS with silk fibroin, I made transparent and flexible 2D electrodes. By making a bilayer of silk fibroin and parylene, I demonstrated self-assembly from a 2D film into a 3D shape. Finally, by transferring graphene onto parylene, we could assemble tiny 3D electrodes. In the future, we will continue to work on making bioelectrodes from both the materials science and the biological viewpoints. However, these two approaches alone are not sufficient for research on bioelectronics: we especially need the technology for electrochemical assessment of the fabricated electrodes, methods to read out the vital data, and ways to manipulate and analyze the obtained data. That is why I belong to both TUM and NTT Research, in order to realize the full system. When I look at R&D on bioelectronics around the world, implantable electronics in particular is a very active area, in universities and industry alike. John Rogers' group at the University of Illinois in the United States started to advocate implantable, flexible bioelectronics more than 10 years ago, and now research on it is growing rapidly all over the world, not only in the US but also in Asia and Europe, and the industrial community is also joining this field. I really hope to contribute to the scientific achievements and the creation of an industry from the German base, by making the most of my experience and of cooperation with the Japanese and American sides. Finally, I would like to introduce my colleagues at TUM. These are the lab members, and this is my supervisor, Professor Bernhard Wolfrum, a specialist in electrochemistry and electrochemical engineering processes for biomedical applications.
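The metallic-to-semiconducting change mentioned earlier in this step would appear in an I-V sweep as a departure from a straight ohmic line. The sketch below is not the group's analysis code; it is just one simple, assumed way to quantify that departure, using made-up sweep data.

```python
import numpy as np

def nonlinearity_index(voltage, current):
    """Fraction of the I-V variance that a straight-line (ohmic) fit fails to explain.

    0  -> perfectly ohmic (metallic-like) response
    >0 -> increasingly non-linear (semiconductor-like) response
    """
    v = np.asarray(voltage, dtype=float)
    i = np.asarray(current, dtype=float)
    slope, offset = np.polyfit(v, i, 1)      # best ohmic fit: I = V / R + I0
    residual = i - (slope * v + offset)
    return float(np.sum(residual ** 2) / np.sum((i - i.mean()) ** 2))

# Illustrative sweeps (arbitrary units, not measured data):
v = np.linspace(-0.5, 0.5, 101)
flat_film = v / 1e3                                         # ohmic: a plain 1 kOhm resistor
folded_film = 1e-6 * (np.exp(v / 0.1) - np.exp(-v / 0.1))   # symmetric, diode-like curve

print(f"flat film   : {nonlinearity_index(v, flat_film):.3f}")   # ~0.000
print(f"folded film : {nonlinearity_index(v, folded_film):.3f}") # clearly > 0
```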
I'm very happy to work with this wonderful team, and I also appreciate the daily support of the members at NTT Research in the United States. Finally, let me conclude by acknowledging my supervisor and mentors: Professor Wolfrum, Director Tomoike, and Dr. Alexander. And also the members from NTT who always support me, especially Mr. Kikuchi, Dr. Nakashima, Fellow Tsukada, Director Goto, Dr. Yamamoto, and Director Sogawa. Finally, let me thank Professor Offenhausser from Julich for his kind assistance and for the introduction to this wonderful collaboration scheme. That's all, and I hope this presentation was useful to you. Thank you very much.

Published Date : Sep 21 2020

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Kikuchi | PERSON | 0.99+
Nakashima | PERSON | 0.99+
Tetsuhiko Teshima | PERSON | 0.99+
Tomoike | PERSON | 0.99+
Tsukada | PERSON | 0.99+
Yamamoto | PERSON | 0.99+
Alexander | PERSON | 0.99+
Goto | PERSON | 0.99+
Sogawa | PERSON | 0.99+
NTT | ORGANIZATION | 0.99+
United States | LOCATION | 0.99+
NTT Basic Research Laboratories | ORGANIZATION | 0.99+
MEI Laboratories | ORGANIZATION | 0.99+
TUM | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Bernhard Wolfrum | PERSON | 0.99+
two key scientists | QUANTITY | 0.99+
Wolfrum | PERSON | 0.99+
Kanagawa, Japan | LOCATION | 0.99+
Europe | LOCATION | 0.99+
two polymers | QUANTITY | 0.99+
US | LOCATION | 0.99+
Asia | LOCATION | 0.99+
both | QUANTITY | 0.99+
John Rogers' | PERSON | 0.99+
University of Illinois | ORGANIZATION | 0.98+
NTT Corporation | ORGANIZATION | 0.97+
Offenhausser | PERSON | 0.97+
firstly | QUANTITY | 0.96+
two approaches | QUANTITY | 0.96+
Lego | ORGANIZATION | 0.95+
first | QUANTITY | 0.95+
Technische Universitat Munchen | ORGANIZATION | 0.94+
single cell | QUANTITY | 0.94+
single atom | QUANTITY | 0.94+
more than 10 years ago | DATE | 0.93+
Julich | PERSON | 0.92+
two different film | QUANTITY | 0.9+
single layer of carbon atom | QUANTITY | 0.9+
single carbon atom | QUANTITY | 0.88+
two weeks | QUANTITY | 0.87+
parylene | OTHER | 0.87+
four materials | QUANTITY | 0.85+
2D | QUANTITY | 0.84+
one | QUANTITY | 0.84+
single cells | QUANTITY | 0.81+
four system | QUANTITY | 0.8+
3D | QUANTITY | 0.77+
seven years | QUANTITY | 0.77+
ibroin | OTHER | 0.77+
single | QUANTITY | 0.73+
Japan | LOCATION | 0.72+
last six | DATE | 0.64+
Professor | PERSON | 0.59+
three dimensional | QUANTITY | 0.58+
this March | DATE | 0.57+
German | OTHER | 0.56+
year | DATE | 0.51+
ECG | OTHER | 0.5+
American | LOCATION | 0.48+

Josh Rogers, Syncsort | theCUBE NYC 2018


 

>> Live from New York, it's theCUBE, covering theCUBE New York City 2018. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Okay, welcome back, everyone. We're here live in New York City for CUBE NYC. This is our ninth year covering the big data ecosystem, which used to be Hadoop and is now AI and machine learning, and it keeps growing, our ninth year covering it with theCUBE here in New York City. I'm John Furrier, with Dave Vellante. Our next guest is Josh Rogers, CEO of Syncsort. Going back, there's a long history with theCUBE. You guys have been on every year. Really appreciate chatting with you. It's been fun to watch the evolution of Syncsort and also get the insight. Thanks for coming on, appreciate it. >> Thanks for having me. It's great to see you. >> So you guys have constantly been on this wave, and it's been fun to watch. You guys had a lot of IP in your company, and then, just watching you kind of surf the big data wave, you also made some good decisions, made some good calls. You're always out front. You guys are on the right parts of the wave. I mean, now it's cloud, and you guys are doing some things there. Give us a quick update. You've got a brand refresh, so you've got the new logo goin' on there. Give us a quick update on Syncsort. You've got some news, you've got the brand refresh. Give us a quick update. >> Sure. I'll start with the brand refresh. We refreshed the brand, and you see that in the web properties and in the messaging that we use in all of our communications. We did that because the value proposition of the portfolio had expanded so much, and we had gained so much more insight into some of the key use cases that we're helping customers solve, that we really felt we had to do a better job of telling our story and, probably most importantly, engaging with the more senior level within these organizations. What we've seen is that, when you think about the largest enterprises in the world, we offer a series of solutions around two fundamental value propositions that tend to be top of mind for these executives. The first is: how do I take the 20, 30, 40 years of investment in infrastructure and run that as efficiently as possible? You know, I can't make any compromises on the availability of that. I certainly have to improve the governance and securability of that environment. But, fundamentally, I need to make sure I can run those mission-critical workloads, and I also need to save some money along the way, because what I really want to do is be a data-driven enterprise. What I really want to do is take advantage of the data that gets produced in these transactional applications that run on my AS400 or IBM i environment, my mainframe environment, even in my traditional data warehouse, and make sure that I'm getting the most out of that data by analyzing it in a next-generation set of-- >> I mean, one of the trends I want to get your thoughts on, Josh, 'cause you're kind of talking through the big megatrend, which is being infrastructure agnostic from an application standpoint. So that's the trend with dev ops, and you guys have certainly had diverse solutions across your portfolio, but, at the end of the day, this is the abstraction layer customers want. They want to run workloads on environments that they know are in production and that work well with applications, so they almost want to view the infrastructure, or cloud, if you will, same thing, as just agnostic, and let the programmability take care of itself, under the hood, if you will.
>> Right, and what we see is that people are absolutely into extending and modernizing existing applications. This is in the large enterprise, and those applications and core components will still run on mainframe environments. And so, what we see in terms of use cases is: how do we help customers understand how to monitor the performance of those applications? If I have a tier that's sitting in the cloud but transacting with the mainframe behind the firewall, how do I get an end-to-end view of application performance? How do I take the data that ultimately gets logged in a DB2 database on the mainframe and make it available in a next-generation repository, like Hadoop, so that I can do advanced analytics? When you think about solving both the optimization and the integration challenge there, you need a lot of expertise on both sides, the old and the new, and I think that's what we uniquely offer. >> You guys have done a good job with integration. I want to ask a quick question on the integration piece. Is this becoming more and more table stakes, but also challenging at the same time? Integration and connecting systems together, if they're stateless, is no problem, you use APIs, right, and do that. But as you start to get data that needs state information, you start to think about some of the challenges around different, disparate systems being distributed but networked, in some cases even decentralized. So distributed networking is being radically changed by the data decisions on the architecture, but also integration, call it API 2.0 or this new way to connect and integrate. >> Yeah, so what we've tried to focus on is solving that piece between these older applications that run on these legacy platforms and making them available to whatever the consumer is. Today, we see Kafka, and in Amazon we see Kinesis, as the key buses delivering data as a service, and so the role that we see ourselves playing, and what we announced this week, is an ability to track changed data in these older systems and deliver it in realtime to these new targets: Kafka, Kinesis, and whatever comes next. Because really that's the fundamental partner we're trying to be for our customers: we will help you solve the integration challenge between this infrastructure you've been building for 30 years and this next-generation technology that lets you get the next leg of value out of your data. >> So Josh, when you think about the evolution of this whole big data space, the early narrative in the trade press was, well, NoSQL is going to replace Oracle and DB2, the data lake is going to replace the EDW, unstructured data is all that matters, and so forth. And now, when you look at what's really happened, the EDW is a fundamental component of making decisions and insights, and SQL is the killer app for Hadoop. And I take the example of, say, fraud detection, and this is where you guys sit in the middle, from the standpoint of data quality and data integration. Look at what we've done in the past 10 years: fraud detection has gone from, well, I look at my statement a month or two later and then call the credit card company, to a text that's instantaneous. Still some false positives, and I'm sure you're working on that even now. So maybe you could describe that use case, or any other favorite use case, and what your role is there in terms of taking those different data sources, integrating them, and improving the data quality.
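To make the changed-data-to-Kafka capability described above concrete: Syncsort's CDC tooling is a commercial product whose interfaces are not shown in this interview, so the sketch below only illustrates the general pattern of publishing a changed-record event onto a Kafka topic, using the open-source kafka-python client. The broker address, topic name, and field layout are all invented for the example.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # open-source kafka-python client

# All names below (broker address, topic, field layout) are invented for illustration.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

def publish_change(table, operation, row):
    """Publish one changed-data event (e.g. captured from a DB2 table) to Kafka."""
    event = {
        "table": table,                      # source table the change came from
        "op": operation,                     # INSERT / UPDATE / DELETE
        "row": row,                          # the changed column values
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    producer.send("mainframe.cdc.transactions", value=event)

# Example: a card transaction row just written on the mainframe side.
publish_change(
    table="CARD_TXN",
    operation="INSERT",
    row={"account_id": "12345", "amount": 982.50, "merchant": "EXAMPLE-STORE"},
)
producer.flush()  # block until the event is actually delivered
```

The "whatever comes next" point in the interview is why the serialization step is kept separate from the transport here: swapping Kafka for Kinesis would change only the client object, not the event layout.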
So, I think when you think about a use case where I'm trying to improve the SLA or the responsiveness of how I manage against or detect fraud, rather than trying to detect it on a daily basis, I'm trying to detect it at transaction time. The reality is you want to leverage the existing infrastructure you have. So if you have a data warehouse that has detailed information about transaction history, maybe that's a good source. If you have an application running on the mainframe that's doing those transactions in realtime, the ultimate answer is: how do I knit together the existing infrastructure I have and embed the additional intelligence and capability I need from these new technologies, like, for example, Kafka, to deliver a complete solution? What we do is help customers tie that together. Specifically, we announced the integration I mentioned earlier, where we can take a changed data element in a DB2 database and publish it into Kafka. That is a key requirement in delivering this real-time fraud detection if I am in fact running transactions on a mainframe, which most of the banks are. >> Without ripping and replacing. >> Why would you want to rip out an application, >> You don't. >> your core customer file, when you can just extend it? >> And you mentioned the Cloudera 6 certification. You guys have been early on there. Maybe talk a little about that relationship, the engineering work that has to get done for you to be able to get into the press release on day one. >> We just mentioned that my first time on theCUBE was in 2013, and that was on the back of our initial product release in the big data world. When we brought the initial DMX-h release to market, we knew that we needed to have deep partnerships with Cloudera and the key platform providers. I went and saw Mike Olson, I introduced myself, and he was gracious enough to give me an hour and let me explain what we thought we could do to help them develop more of a value proposition around their platform, and it's been a terrific relationship. Our architecture and our engineering and product management relationship is such that it allows us to very rapidly certify and work on their new releases, usually within a couple of days. Not only can customers take advantage of that, which is pretty unique in the industry, but we get some visibility from Cloudera, as evidenced by Tendu's quote in the press release that was released this week, which is terrific. >> Talk about your business a little bit. You guys are like a 50-year-old startup. You've had this really interesting history. I remember you from when I first started in the industry following you guys. You've restructured the company, you've done some spin-outs, you've done some M and A, but it seems to be working. Talk about the growth and progress that you're making. >> We're the leader in the Big Iron to Big Data market. We define that as allowing customers to optimize their traditional legacy investments for cost and performance, and then helping them maximize the value of the data that gets generated in those environments by integrating it with next-generation analytic environments. To do that, we need a broad set of capabilities. There are a lot of different ways to optimize existing infrastructure. One is capacity management, so we made an acquisition about a year ago in the capacity management space. We're allowing customers to figure out how to make sure they've got not too much and not too little capacity. That's an example of optimization.
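On the consuming side of that same (invented) topic, a downstream service could score each changed transaction as it arrives, which is the "at transaction time" part of the fraud scenario. The rule below, flagging amounts far above an account's recent average, is a deliberately naive stand-in for a real fraud model, and it reuses the made-up topic and field names from the producer sketch above.

```python
import json
from collections import defaultdict

from kafka import KafkaConsumer  # open-source kafka-python client

consumer = KafkaConsumer(
    "mainframe.cdc.transactions",            # same invented topic as the producer sketch
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

history = defaultdict(list)                  # account_id -> recent transaction amounts

def looks_fraudulent(account_id, amount, multiple=5.0, min_history=3):
    """Naive rule: flag an amount far above the account's recent average spend."""
    past = history[account_id]
    suspicious = len(past) >= min_history and amount > multiple * (sum(past) / len(past))
    past.append(amount)
    if len(past) > 50:                       # keep only a short rolling window
        past.pop(0)
    return suspicious

for message in consumer:                     # blocks, scoring events as they arrive
    event = message.value
    if event.get("table") != "CARD_TXN" or event.get("op") != "INSERT":
        continue                             # only score new card transactions
    row = event["row"]
    if looks_fraudulent(row["account_id"], float(row["amount"])):
        print(f"possible fraud on account {row['account_id']}: {row['amount']}")
```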
Another area of capability is data quality. If I'm maximizing the value of the data that gets produced in these older environments, it would be great if, when it lands in these next-generation repositories, it's as high quality as possible. We acquired Trillium about a year ago, or actually coming up >> How's that comin'? >> on two years ago, and we think that's a great capability for our customers. It's going terrific. We took their core data quality engine, and now it runs natively on a distributed Hadoop infrastructure. We have customers leveraging it to deliver unprecedented volumes of matching, so not only breakthrough performance, but this whole notion of write once, run anywhere. I can run it in an SMP environment. I can run it on Hadoop. I can run it on Hadoop in the cloud. We've seen terrific growth in that business based on our continued innovation, particularly pointing it at the big data space. >> One of the things that impresses me is that you guys have transformed, so in having a transformation message for your customers you have a lot of credibility. But what's interesting is that, in a world with containers and Kubernetes now, and multi-cloud, you're seeing that you don't have to kill the legacy to bring in the new stuff. You can connect systems, like what you guys have done with legacy systems, connecting the data. You don't have to kill that to bring in the new. >> Right. >> You can do cloud-native, you can do some really cool things. >> Right. I think there's-- >> This rip-and-replace concept is kind of going away. You put containers around it too. That helps. >> Right. It's expensive and it's risky, so why do that? I think that's the realization. The reality is that when people build these mission-critical systems, they stay in place not for five years, but for 25 years. The question is how do you allow customers to leverage what they have and the investment they've made, but take advantage of the next wave, and that's what we're singularly focused on. I think we're doing a great job of that, not just for customers, but also for these next-generation partners, which has been a lot of fun for us. >> And we also heard that people doing analytics want to have their own multi-tenant, isolated environments, which goes to: don't screw this system up. If it's doing a great job on a mission-critical thing, don't bundle it, just connect it to the network and you're good. >> And on the cloud side, we're continuing to look at our portfolio and ask what capabilities customers will want to consume in a cloud-delivery model. We've been doing that in the data quality space for quite a while. We just launched and announced, about three months ago, capacity management as a service. You'll continue to see, both on the optimization side and on the integration side, us delivering new ways for customers to consume the capabilities they need. >> That's a key thing for you guys, integration. That's pretty much how you put the stake in the ground and engineer your activities, around integration. >> Yeah, we start with the premise that you're going to need to continue to run these older investments that you've made, and you're going to need to integrate the new stuff with that. >> What's next? What's goin' on for the rest of the year with you guys? >> We'll continue to invest heavily in the realtime and changed-data capture space. We think that's really interesting. We're seeing a tremendous amount of demand there.
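Trillium's matching engine is proprietary, so as a stand-in, the sketch below shows only the basic shape of the problem it addresses: deciding whether two customer records that are not byte-identical describe the same entity. It uses just the Python standard library, and the field weights and similarity threshold are arbitrary choices for the example.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized string similarity in [0, 1] via the standard library's SequenceMatcher."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def same_customer(rec_a, rec_b, threshold=0.85):
    """Score two customer records field by field; weights and threshold are illustrative."""
    weights = {"name": 0.5, "address": 0.3, "city": 0.2}
    score = sum(w * similarity(rec_a[field], rec_b[field]) for field, w in weights.items())
    return score >= threshold, round(score, 3)

# Two records that differ in spelling and abbreviation but describe the same person:
a = {"name": "Jonathan Smith", "address": "125 Main Street", "city": "Rochester"}
b = {"name": "Jonathon Smith", "address": "125 Main St",     "city": "Rochester"}
print(same_customer(a, b))   # matches despite the typo and the abbreviation
```

In a real deployment this kind of scoring runs across very large volumes of record pairs, which is why the interview emphasizes running the matching engine natively on distributed Hadoop infrastructure rather than on a single machine.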
We've made a series of acquisitions in the security space. We believe that the ability to secure data in the core systems, and on its journey to the next-generation systems, is absolutely critical, so we'll continue to invest there. And then I'd say governance: that's an area we think is incredibly important, because as people start to really take advantage of these data lakes they're building, they have to establish real governance capabilities around them. We believe we have an important role to play there. There are other adjacencies, but those are probably the big areas we're investing in right now. >> Just continuing to move the ball down the field in the Syncsort cadence of acquisitions and organic development. Congratulations. Josh, thanks for comin' on. Josh Rogers, CEO of Syncsort, here inside theCUBE. I'm John Furrier with Dave Vellante. Stay with us for more big data coverage, AI coverage, and cloud coverage here. Part of CUBE NYC, we're in New York City live. We'll be right back after this short break. Stay with us. (techno music)

Published Date : Sep 17 2018

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Josh | PERSON | 0.99+
Josh Rogers | PERSON | 0.99+
2013 | DATE | 0.99+
Jim | PERSON | 0.99+
Josh Rogers | PERSON | 0.99+
20 | QUANTITY | 0.99+
John Rogers | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Mike Olson | PERSON | 0.99+
Syncsort | ORGANIZATION | 0.99+
25 years | QUANTITY | 0.99+
New York City | LOCATION | 0.99+
New York City | LOCATION | 0.99+
30 years | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
New York | LOCATION | 0.99+
Kafka | TITLE | 0.99+
an hour | QUANTITY | 0.99+
30 | QUANTITY | 0.99+
both sides | QUANTITY | 0.99+
both | QUANTITY | 0.99+
NoSQL | TITLE | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
40 years | QUANTITY | 0.98+
two years ago | DATE | 0.98+
first time | QUANTITY | 0.98+
IBM | ORGANIZATION | 0.98+
Today | DATE | 0.98+
Hadoop | TITLE | 0.98+
Oracle | ORGANIZATION | 0.98+
Amazon | ORGANIZATION | 0.98+
ninth year | QUANTITY | 0.97+
NYC | LOCATION | 0.97+
this week | DATE | 0.96+
Trillium | ORGANIZATION | 0.96+
SQL | TITLE | 0.96+
this week | DATE | 0.96+
50-year old | QUANTITY | 0.96+
CUBE | ORGANIZATION | 0.96+
One | QUANTITY | 0.95+
a month | DATE | 0.94+
EDW | TITLE | 0.92+
about a year ago | DATE | 0.91+
Cloudera | ORGANIZATION | 0.91+
about | DATE | 0.9+
SLA | TITLE | 0.84+
DB2 | TITLE | 0.84+
one | QUANTITY | 0.82+
CEO | PERSON | 0.81+
a year ago | DATE | 0.81+
theCUBE | ORGANIZATION | 0.8+
about three months ago | DATE | 0.79+
AS400 | COMMERCIAL_ITEM | 0.78+
wave | EVENT | 0.77+
past 10 years | DATE | 0.74+
two later | DATE | 0.74+
two fundamental value propositions | QUANTITY | 0.72+
Kinesis | TITLE | 0.72+
couple a days | QUANTITY | 0.71+
Cloudera 6 | TITLE | 0.7+
big data | EVENT | 0.64+
day one | QUANTITY | 0.61+
2018 | DATE | 0.57+
API 2.0 | OTHER | 0.54+
Tendu | PERSON | 0.51+