G37 Paul Duffy
(bright upbeat music) >> Okay, welcome back everyone to the live CUBE coverage here in Las Vegas for the in-person AWS re:Invent 2021. I'm John Furrier, host of theCUBE. Two sets, live wall-to-wall coverage, all scopes of the hybrid event. Great stuff online too, almost too much information to consume, but ultimately, as usual, a great show of new innovation for startups and for large enterprises. We've got a great guest, Paul Duffy, Head of Startup Solutions Architecture for North America for Amazon Web Services. Paul, thanks for coming on. Appreciate it. >> Hi John, good to be here. >> So we saw you last night, we were chatting kind of about the show in general, but also about startups. Everyone knows I'm a big startup fan and a big founder myself, and I'm pro startups, everyone loves startups. Amazon's first real customers were developers doing startups. And we know the big unicorns out there now all started on AWS. So Amazon was like a dream for the startup, because before Amazon, you had to provision the server, put it in the colo, you needed a system administrator. Welcome to EC2. The goodness is there, the rest is history. >> Yeah. >> The legacy in the startups is pretty deep. >> Yeah, you make the right point. I've done it myself. I co-founded a startup in about 2007, 2008. And before we even knew whether we had any kind of product market fit, we were racking the servers and doing all that kind of stuff. So yeah, it completely changed things. >> And it's hard too with the new technology now, finding someone to actually run it. I remember when we stood up our first Hadoop cluster and we ran a Solr search engine. I couldn't even find anyone to manage it, because if you knew Hadoop back then, you were working at Facebook or a hyperscaler. So you guys have all this technology coming out, so provisioning and doing the heavy lifting for startups is a huge win. That's kind of known, everyone knows that. So that's cool.
What are you guys doing now? Because now you've got large enterprises trying to be like startups. You've got startups coming in with huge white spaces out there in the market. Jerry Chen from Greylock, it was only yesterday we talked extensively about the net new opportunities in the Cloud that are out there. And now you see companies like Goldman Sachs have super cloud. So there's tons of growth. >> Paul: Yeah. >> Take us through the white space. How do you guys see startups taking advantage of AWS to a whole other level? >> I think it's very interesting when you look at how things have changed in those 15 years. The old world was horrible, you had to do all this provisioning. And then with AWS, Adam Selipsky was talking in his keynote on the first day of the event about how people used to think it was just good for startups. For startups, it was this kind of obvious thing, because they didn't have any legacy, they didn't have any data centers, they didn't necessarily have a large team, and they could do this thing with no commitment. Spinning up a server with an API call was really the revolutionary thing. Now, 15 years later, startups still have the same kind of urgency. They're constrained by time, they're constrained by money, they're constrained by the engineering talent they have. When you hear some of the announcements this week, or you look at the building blocks available to those startups, that I think is where it's become revolutionary. So you take a startup in 2011, 2012, and they were trying to build something, maybe they were trying to do image recognition on forms, for example, and they could build that. But they had to build the whole thing in the cloud. We had infrastructure, we had database stuff, but they would have to do all of the stuff on top of that. Now you look at some of the AIML services we have, things like Textract, and they can just take that service off the shelf.
We've got one startup in Canada called Chisel AI. They're trying to disrupt the insurance industry, and they can just use services like Textract to accelerate getting to that product market fit instead of having to do this undifferentiated (indistinct). >> Paul, we talk about, I remember back in the day with Web Services and service-oriented architecture, building blocks, decoupling APIs, all that's now so real and so excellent. But you brought up a great point, glue layers had to be built. Now you have, with the scale of Amazon Web Services, things we're learning from other companies. It reminds me of the open source vibe where you stand on the shoulders of others to get success. And there's a lot of new things coming out that startups don't have to do because the startups before them did. This is like a new, cool thing. It's a whole nother level. >> Yeah, and I think it's a real standing on the shoulders of giants kind of thing. And if you just unpick, like in Werner's keynote this morning, he was talking about the Amplify Studio kind of stuff. And if you think about the before and after for that, front-end developers have had to do this stuff for a long period of time. And in the before version, they would have to do all that kind of integration work, which isn't really what they want to spend their time doing. And now they've got that headstart. Andy Jassy famously would say, when he talked about building AWS, that there is no compression algorithm for experience. I like to kind of misuse that phrase: what we try to do for startups is provide these compression algorithms. So instead of having to, say, hire a larger engineering team to just do this kind of crafty stuff, they can just take the thing and kind of get from naught to 60 (indistinct). >> Give some examples today of where this is playing out in real time.
What kinds of new compression algorithms can startups leverage that they couldn't get before? What's new that's available? >> I think you see it across all parts of the stack. You could just take a database example: in the old days, if you wanted to start, and you had the dream that every startup has of getting to hyperscale, where things bursting at the seams is the problem, if you wanted to do that in the database layer back in the day, you would probably have to provision most of that database stuff yourself. And then when you get to some kind of limiting factor, you've got to do that work, when all you really want to do is add more features to your application. Whereas now you've got services like Aurora, where that will do all of that scaling from a storage point of view. And it gives that startup a way to stand on the shoulders of giants, the same kind of thing. Or say you want to do some kind of identity, say you're doing a dog walking marketplace or something like that. One of the things that you need to do for the payments part is some kind of identity verification. In the old days, you would have had to pull all those pieces together to do the stuff that would look at people's ID and so on. Now, people can take things like Textract, for example, to look at those forms and do that kind of stuff. And you can pick that story across all of these different lines, whether it's compute stuff, whether it's database, whether it's high-level AIML stuff, whether it's stuff like Amplify, which just massively compresses that timeframe for the startup. >> So, first of all, I'm totally loving this, 'cause this is just an example of how evolution works. But if I'm a startup, one of the big things I would think about, and you're a founder, you know this, opportunity recognition is one thing, opportunity capture is another. So moving fast is what nimble startups do.
Maybe there's a little bit of technical debt. Maybe there's a little bit of model debt, but they can get a beachhead quickly. Startups can move fast, that's the benefit. So where do I learn, if I'm a startup founder, about where all these pieces are? Is there a place that you guys are providing? Are there use cases where founders can just come in and get the best of the best composable cloud? How do I stand up something quickly to get going that I can revisit and refactor later, but not take on too much technical debt, or just actually have new building blocks? Where are all these tools? >> I'm really glad you asked that one. So, I mean, first, startups are the core of what everyone in my team does. And most of the people we hire, well, they all have a passion for startups. Some have been former founders, some have been former CTOs, some have come to the passion from a different kind of thing. And they understand the needs of startups. And when you started to talk about technical debt, one of the balances that startups have always got to get right is you're not building for 10 years down the line. You're building to get yourself, often, to the next milestone, to get the next set of customers, for example. And so we're not trying to make perfect the enemy of good. >> I (indistinct) conception of startups. You don't need that, you've just got to get to the marketplace. >> Yeah, and how we try to do that is we've got a program called Activate, and Activate gives startup founders things like AWS credits, up to a hundred thousand dollars in credits. It gives them other technical capabilities as well. So we have a part of the console, the management console, called the Activate Console. People can go there. And again, if you're trying to build a backend API, there is something built on AWS capability, launched recently, that basically says here's some templatized stuff for you to go from naught to 60, that kind of thing.
So you don't have to spend time searching the web. And for us, we've been there before with a bunch of other startups, so we're trying to help. >> Okay, so how do you guys, I mean, a zillion startups, I mean, you and I could be in a coffee shop somewhere, hey, let's do a startup. Do I get access, does everyone get access to this program that you have? Or is it an elite thing? Is there a criteria? Or are you guys just out there fostering and evangelizing brilliant tools? Is there a program? How do you guys- >> It's a program. >> How do you guys vet startups, is there? >> It's a program. It has different levels in terms of benefits. So at the core of it, it's open to anybody. So if you were a bootstrapped startup tomorrow, or today, you can go to the Activate website and you can sign up for that self-starting tier. What we also do is we have an extensive set of connections with the community, so tier one accelerators and incubators, venture capital firms, the kind of places where startups are going to build, and via the relationships with those folks, if you've got investment from a top-tier VC firm for example, you may be eligible for a hundred thousand dollars of credit. So some of it depends on where the startup is at, but the overall program is open to all. And a chunk of the stuff we talked about, like the guidance, is there for everybody. >> It's free, that's free and that's cool. That's good learning, so yeah. And then they get the free training. What's the coolest thing that you're doing right now that startups should know about? Obviously you're passionate about startups. I know for a fact, I can say that I've heard Andy and Adam both say that it's not just enterprises, they still love the startups. That's their bread and butter too.
>> Yeah, well, (indistinct) I think it's amazing that, we were talking about the keynote, you see some of these large customers in Adam's keynote, people like United Airlines, very, very large successful enterprises. And if you just look around this show, there's a lot of startups just on this expo floor that we are on now. And when I look at these announcements, to me, the thing that just gets me excited and keeps me staying in this job is all of these little capabilities. In the environment right now, with a good funding environment and all of these technical building blocks, instead of having to take just basic compute and storage, you have all of these higher and higher level things, like the serverless stuff that was announced in Adam's keynote earlier, which is just making it easy. Because if you're a founder, you have an idea, you know the thing that you want to disrupt. And we're letting people do that in different ways. I'll pick one startup that I find really exciting to talk to. It's called Stedi. It's run by a guy called Zack Kanter. And he started that startup relatively recently. Now, if you started 15 years ago, you were going to use EC2 instances, building on the cloud, but you were still using compute instances. Zack is really opinionated and a kind of a technology visionary in the sense that he takes this serverless approach. And when you talk to him about how he's building, it's almost this attitude of, if I've had to spin up a server, I've kind of failed in some way, or it's not the right kind of thing. Why would we do that? Because we can build with these completely different kinds of architectures. What was revolutionary 15 years ago was, okay, you can launch a server with an API call, and you're going to pay by the hour. But now, when you look at how Zack's building, you're not even launching a server, and you're paying by the millisecond. >> So this is a huge history lesson slash important point.
Back 15 years ago, your alternative to Amazon was provisioning, which is expensive, time consuming, laggy, and probably causes people to give up, frankly. Now you get that in the cloud, even on your own custom domain. I remember EC2 before they had custom domains. It was so early. But now it's about infrastructure as code. Okay, so again, evolution, great time to market, buy what you need in the cloud. And Adam talked about that. Now it's true infrastructure as code. So the smart, savvy architects are saying, hey, I'm just going to program. If I'm spinning up servers, that means that's a low-level primitive that should be automated. >> Right. >> That's the new mindset. >> Yeah, the fun thing about being in this industry is, just in the time that I've worked at AWS, since about 2011, this stuff has changed so much. And what was state of the art then? It's funny, when you look at some of the startups that have grown with AWS, whether it's Airbnb, Stripe, Slack and so on, if you look at how they built in 2011, because sometimes new startups will say, oh, we want to go and talk to this kind of unicorn and see how they built. And if you actually talked to the unicorn, some of them would say, we wouldn't build it this way anymore. We would do the kind of stuff that Zack and the folks at Stedi are doing right now, because it's totally different (indistinct). >> And the one thing that's consistent from then to now is only one thing, it has nothing to do with the tech, it's speed. Remember, Rails front end with some backend Mongo, you're up on EC2, you've got an app in a week, hackathon, weekend- >> And that time thing just keeps getting smaller and smaller. Like the Amplify thing that Werner was talking about this morning. You could've gone back 15 years, and it's like, okay, this is how much work the developer would have to do.
You could go back a couple of years, and it's like, they still have this much work to do. And now, this morning, it's like, they've just accelerated them to that kind of thing. >> We'll end on giving Jerry Chen a plug from our chat yesterday. We put the playbook out there for startups. You've got to raise your focus on the beachhead and solve the problem you've got in front of you, and then sequence to adjacent positions, refactor in the cloud. Take that approach. You don't have to boil the ocean right away. You get in the market, get in and get automating, kind of the new playbook. It's just, make everything work for you using the modern tools. >> Yeah, and the thing for me, that one line, I can't remember if it was Paul Graham or someone else I stole it from, but it's just encouraging these startups to be appropriately lazy. Let us do the hard work. Let us do the undifferentiated heavy lifting so people can come up with these super cool ideas. >> Yeah, just plugging the talent, plugging the developer. You've got a modern application. Paul, thank you for coming on theCUBE, I appreciate it. >> Thank you. >> Head of Startup Solutions Architecture North America, Amazon Web Services, going to continue to birth more startups that will be unicorns and decacorns. Don't forget the decacorns. Okay, we're here at theCUBE bringing you all the action. I'm John Furrier, theCUBE. You're watching the Leader in Global Tech Coverage. We'll be right back. (bright upbeat music)
Optimizing Query Performance and Resource Pool Tuning
>> Jeff: Hello, everybody, and thank you for joining us today for the virtual Vertica Big Data Conference 2020. Today's breakout session is titled "Optimizing Query Performance and Resource Pool Tuning." I'm Jeff Healey, I lead Vertica marketing. I'll be your host for this breakout session. Joining me today are Rakesh Bankula and Abhi Thakur, Vertica product technology engineers and key members of the Vertica customer success team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of your slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Rakesh. >> Rakesh: Thank you, Jeff. Hello, everyone. My name is Rakesh Bankula. Along with me, we have Bir Abhimanu Thakur. We both are going to cover the present session on "Optimizing Query Performance and Resource Pool Tuning." In this session, we are going to discuss query optimization, how to review query plans, and how to get the best query plans with proper projection design. Then we will discuss resource allocation and how to find resource contention. And we will continue the discussion with some important use cases. In general, to successfully complete any activity or any project, the main thing it requires is a plan.
A plan for that activity: what to do first, what to do next, what things you can do in parallel. The next thing you need is the best people to work on that project as per the plan. So, the first thing is the plan, and next is the people or resources. If you overload the same set of people or resources by involving them in multiple projects or activities, or if any person or resource is sick, in a given project it is going to impact the overall completion of that project. The same analogy we can apply to query performance too. For a query to perform well, it needs two main things. One is the best query plan, and the other is the best resources to execute the plan. Of course, in some cases, resource contention, whether from the system side or within the database, may slow down the query even when we have the best query plan and best resource allocations. We are going to discuss each of these three items a little more in depth. Let us start with the query plan. The user submits the query to the database, and the Vertica optimizer generates the query plan. In generating query plans, the optimizer uses the statistics information available on the tables. So, statistics play a very important role in generating good query plans. As a best practice, always maintain up-to-date statistics. If you want to see what a query plan looks like, add the EXPLAIN keyword in front of your query and run that query. It displays the query plan on the screen. The other option is the DC explain plans table, which saves all the explain plans of the queries run on the database. So, once you have a query plan, check it to make sure the plan is good. The first thing I would look for is "no statistics" or "predicate out of range" warnings. If you see any of these, it means a table involved in the query does not have up-to-date statistics. It is then time to update the statistics. The next things to check in explain plans are broadcasts and re-segments around the Join operators, and global re-segments around Group By operators.
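As a sketch of the workflow just described (the table and column names here are hypothetical, but ANALYZE_STATISTICS and EXPLAIN are the standard Vertica constructs the speaker refers to):

```sql
-- Keep optimizer statistics up to date on the tables the query touches.
SELECT ANALYZE_STATISTICS('public.store_sales');
SELECT ANALYZE_STATISTICS('public.product_dim');

-- Prefix the query with EXPLAIN to print its plan instead of running it.
EXPLAIN
SELECT p.product_category, COUNT(*)
FROM store_sales s
JOIN product_dim p ON s.product_key = p.product_key
GROUP BY p.product_category;
```

In the printed plan, look for the warnings and operators called out above: NO STATISTICS, broadcasts, and re-segments.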
These indicate that during the runtime of the query, data flows between the nodes over the network, which will slow down query execution. As far as possible, prevent such operations. How to prevent this, we will discuss in the projection design topic. Regarding the Join order, check which tables are used on the inner side and the outer side, and how many rows each side is processing. For the inner side, picking the table having the smaller number of rows is good because, as the Join hash table is built in memory, the smaller the number of rows, the faster it is to build the hash table, and it also helps in consuming less memory. Then check if the plan is picking a query-specific projection or the default projections. If the optimizer is ignoring a query-specific projection and picking the default super projection, we will show you how to use query-specific hints to force the plan to pick query-specific projections, which helps in improving performance. Okay, here is one example query plan of a query trying to find the number of products sold from a store in a given state. This query has Joins between the store table and the product table, and a Group By operation to find the count. So, first look for "no statistics" warnings, particularly around the storage access path. This plan is not reporting any, which means statistics are up to date and the plan is good so far. Then check what projections are used; this is also around the storage access path. For the Join order check, we have a Hash Join in Path ID 4 with its inner in Path ID 6 processing 60,000 rows and its outer in Path ID 7 processing 20 million rows. The inner side processing fewer records is good. This helps in building the hash table quicker and using less memory. Check for any broadcasts or re-segments: the Joins in Path ID 4 and also Path ID 3 both have inner broadcasts; the inner's 60,000 records are broadcast to all nodes in the cluster. This could impact the query performance negatively. These are some of the main things which we normally check in the explain plans.
Till now, we have seen how to get good query plans: maintain up-to-date statistics, and review the query plans. Projection design is the next important thing in getting good query plans, particularly in preventing broadcasts and re-segments. Broadcasts and re-segments happen during Join operations when the existing segmentation clause of the projections involved in the Join does not match the Join columns in the query. These operations cause data to flow over the network and negatively impact query performance, particularly when they transfer millions or billions of rows. These operations also cause the query to acquire more memory, particularly in the network send and receive operations. One can avoid these broadcasts and re-segments with proper projection segmentation. Say a Join is involved between two fact tables T1 and T2 on column i; then segment the projections on these T1 and T2 tables on column i. These are also called identically segmented projections. In the other case, where the Join is between a fact table and a dimension table, replicating, that is, creating an unsegmented projection on the dimension table, will help avoid broadcasts and re-segments during the Join operation. During a Group By operation, global re-segment groups cause data to flow over the network. This can also slow down query performance. To avoid these global re-segment groups, create the segmentation clause of the projection to match the Group By columns in the query. In the previous slides, we have seen the importance of the projection segmentation clause in preventing broadcasts and re-segments during Join operations. The order by clause of projection design plays an important role in picking the Join method. We have two important Join methods: Merge Join and Hash Join. Merge Join is faster and consumes less memory than Hash Join. The query plan uses Merge Join when both projections involved in the Join operation are segmented and ordered on the Join keys.
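A minimal sketch of these two projection designs, with hypothetical table and column names:

```sql
-- Identically segmented projections: both fact tables segmented on join key i.
CREATE PROJECTION t1_seg_i AS
SELECT * FROM t1
ORDER BY i
SEGMENTED BY HASH(i) ALL NODES;

CREATE PROJECTION t2_seg_i AS
SELECT * FROM t2
ORDER BY i
SEGMENTED BY HASH(i) ALL NODES;

-- Replicated (unsegmented) projection for a small dimension table.
CREATE PROJECTION dim_unseg AS
SELECT * FROM dim_table
ORDER BY dim_key
UNSEGMENTED ALL NODES;

-- Populate the new projections with existing data.
SELECT REFRESH('t1');
SELECT REFRESH('t2');
SELECT REFRESH('dim_table');
```

Because t1_seg_i and t2_seg_i are both segmented and ordered on the join key, a Join between t1 and t2 on i can avoid re-segments and also qualifies for the faster Merge Join method.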
In all other cases, the Hash Join method will be used. In the case of Group By operations too, we have two methods: Group By Pipelined and Group By Hash. Group By Pipelined is faster and consumes less memory compared to Group By Hash. The requirement for Group By Pipelined is that the projection must be segmented and ordered by the grouping columns. In all other cases, the Group By Hash method will be used. So far, we have seen the importance of stats and projection design in getting good query plans. As statistics are based on estimates over a sample of the data, it is possible, in very rare cases, that the default query plan may not be as good as you expected, even after maintaining up-to-date stats and a good projection design. To work around this, Vertica provides some query hints to force the optimizer to generate even better query plans. Here are some example Join hints which help in picking the Join method, how to distribute the data (that is, broadcast or re-segment on the inner or outer side), and also which Group By method to pick. The table-level hints help force the plan to pick a query-specific projection, or to skip a particular projection in a given query. All of these hints are available in the Vertica documentation. Here are a few general hints useful in controlling things like WITH clause materialization, et cetera. We are going to discuss some examples of how to use these query hints. Here is an example of how to force a query plan to pick a Hash Join. The hint used here is JTYPE, which takes the arguments H for Hash Join and M for Merge Join. Where to place this hint: just after the JOIN keyword in the query, as shown in the example here. Another important Join hint is JFMT, the Join column format hint. This hint is useful in cases where Join columns are long varchars. By default, Vertica allocates memory based on the column data type definition, not by looking at the actual data length in those columns.
Say, for example, a Join column is defined as varchar(1000) or varchar(5000) or more, but the actual length of the data in this column is, say, less than 50 characters. Vertica is going to use more memory to process such columns in the Join, and this also slows down the Join processing. The JFMT hint is useful in this particular case. The JFMT(V) variant makes the Join use the actual length of the Join column data. As shown in the example, using the JFMT(V) hint helps reduce the memory requirement for this query, and it executes faster too. The DISTRIB hint controls how the inner or outer side of the Join operator is distributed, using broadcast or re-segment. DISTRIB takes two parameters: the first is the outer side and the second is the inner side. As shown in the example, DISTRIB(A,R) after the JOIN keyword in the query forces a re-segment of the inner side of the Join, while leaving the outer side's distribution method to the optimizer. The GBYTYPE hint helps in forcing a query plan to pick Group By Hash or Group By Pipelined. As shown in the example, GBYTYPE(HASH), used just after the GROUP BY clause in the query, forces this query to pick Group By Hash. Till now, we discussed the first part of query performance, which is query plans. Now, we are moving on to discuss the next part of query performance, which is resource allocation. The Resource Manager allocates resources to queries based on the settings of the resource pools. The main resources which resource pools control are memory, CPU and query concurrency. The important resource pool parameters, which we have to tune according to the workload, are memory size, planned concurrency, max concurrency and execution parallelism. The query budget plays an important role in query performance. Based on the query budget, the query planner allocates worker threads to process the query request. If the budget is very low, the query gets a smaller number of threads, and if that query needs to process a huge amount of data, then the query takes a longer time to execute because of the fewer threads, that is, less parallelism.
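A sketch of these hints in use, following the /*+ ... */ comment form Vertica hints take (the tables and columns here are hypothetical; check the hint placement against your version's documentation):

```sql
-- Force a Hash Join between the two tables.
SELECT s.store_key, COUNT(*)
FROM store_sales s
JOIN /*+JTYPE(H)*/ store_dim d ON s.store_key = d.store_key
GROUP BY s.store_key;

-- Size join memory by the actual varchar data length, not the declared width.
SELECT a.id
FROM t_a a
JOIN /*+JFMT(V)*/ t_b b ON a.long_varchar_col = b.long_varchar_col;

-- Outer side: let the optimizer choose (A); inner side: force a re-segment (R).
SELECT f1.i
FROM fact1 f1
JOIN /*+DISTRIB(A,R)*/ fact2 f2 ON f1.i = f2.i;

-- Force the hash variant of Group By.
SELECT customer_key, SUM(amount)
FROM orders
GROUP BY /*+GBYTYPE(HASH)*/ customer_key;
```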
In the other case, if the budget is very high and the query executed on the pool is a simple one, that results in a waste of resources; that is, the query acquires the resources and holds them until it completes execution, and those resources are not available to other queries. Every resource pool has its own query budget. This query budget is calculated based on the memory size and planned concurrency settings of that pool. The resource pool status table has a column called query_budget_kb, which shows the budget value of a given resource pool. The general recommendation for the query budget is to be in the range of 1 GB to 10 GB. We can do a few checks to validate whether the existing resource pool settings are good or not. The first thing we can check is whether queries are getting resource allocations quickly, or waiting longer in the resource queues. You can check this in the resource queues table on a live system multiple times, particularly during your peak workload hours. If a large number of queries are waiting in resource queues, it indicates the existing resource pool settings do not match your workload requirements. It might be that the memory allocated is not enough, or the max concurrency settings are not proper. If queries are not spending much time in resource queues, it indicates resources are allocated to meet your peak workload, but you are not yet sure whether you have over- or under-allocated the resources. For this, check the budget in the resource pool status table to find any pool having a budget much larger than 8 GB or much smaller than 1 GB. Both over-allocation and under-allocation of the budget are bad for query performance. Also check the DC resource acquisitions table to find any transaction that acquired additional memory during query execution. This indicates the originally given budget was not sufficient for the transaction. Having too many resource pools is also not good. Whether you are creating new resource pools or reviewing existing ones, the resource pool settings should match the present workload.
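The budget check described above can be sketched as a pair of monitoring queries (resource_pool_status and its query_budget_kb column are from the source; the data collector column names are assumptions to verify against your version):

```sql
-- Check each pool's computed query budget (aim for roughly 1-10 GB).
SELECT pool_name, memory_size_kb, query_budget_kb
FROM resource_pool_status
ORDER BY pool_name;

-- Look for transactions that acquired memory beyond their initial grant,
-- a sign the pool's budget is too small for that workload.
SELECT transaction_id, node_name, memory_kb
FROM dc_resource_acquisitions
ORDER BY time DESC
LIMIT 20;
```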
You can categorize the workload into well-known workload and ad-hoc workload. In the case of a well-known workload, you will be running the same queries regularly, like daily reports having the same set of queries processing a similar size of data, or daily ETL jobs, et cetera. In this case, queries are fixed. Depending on the complexity of the queries, you can further divide it into low-, medium-, and high-resource-required pools. Then try setting the budget to 1 GB, 4 GB and 8 GB on these pools by allocating the memory and setting the planned concurrency as per your requirement. Then run the queries and measure the execution time. Try a couple of iterations, increasing and then decreasing the budget, to find the best settings for your resource pools. Then there is the category of ad-hoc workload, where there is no control over the number of users who are going to run queries concurrently, or the complexity of the queries users are going to submit. For this category, we cannot estimate the optimum query budget in advance. So for this category of workload, we have to use cascading resource pool settings, where a query starts on one pool and then, based on the runtime limit set, the query's resources move to a secondary pool. This helps prevent smaller queries from waiting a long time for resources when a big query is consuming all the resources and running for a long time. Now, some important resource pool monitoring tables. On a live system, you can query the RESOURCE_QUEUES table to find any transaction waiting for resources. You will also find on which resource pool the transaction is waiting, how long it has been waiting, and how many queries are waiting on the pool. RESOURCE_POOL_STATUS gives info on how many queries are in execution on each resource pool, how much memory is in use, and additional info. For the resource consumption of a transaction which has already completed, you can query DC_RESOURCE_ACQUISITIONS to find how much memory a given transaction used per node.
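A hedged sketch of a cascading resource pool setup for the ad-hoc category. The pool names, sizes and runtime cap are illustrative assumptions, not recommendations:

```sql
-- Large secondary pool that long-running ad-hoc queries cascade into.
CREATE RESOURCE POOL adhoc_big
    MEMORYSIZE '40G'
    PLANNEDCONCURRENCY 4;

-- Small primary pool: anything exceeding RUNTIMECAP moves to adhoc_big,
-- so short queries are not stuck behind one big query.
CREATE RESOURCE POOL adhoc_small
    MEMORYSIZE '8G'
    PLANNEDCONCURRENCY 8
    RUNTIMECAP '30 seconds'
    CASCADE TO adhoc_big;
```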
The DC_RESOURCE_POOL_MOVE table shows info on what transactions moved from the primary to the secondary pool in the case of cascading resource pools. DC_RESOURCE_REJECTIONS gives info on which node, and for which resource, a given transaction failed or was rejected. The QUERY_CONSUMPTION table gives info on how much CPU, disk and network resources a given transaction utilized. Till now, we discussed query plans and how to allocate resources for better query performance. It is possible for queries to perform slower when there is any resource contention. This contention can be within the database or from the system side. Here are some important system tables and queries which help in finding resource contention. The DC_QUERY_EXECUTIONS table gives information at the transaction level on how much time each execution step took, like how much time it took for planning, resource allocation, actual execution, etc. If more time is taken in planning, which is mostly due to catalog contention, you can query the DC_LOCK_RELEASES table, as shown here, to see how long transactions are waiting to acquire the global catalog lock (GCLx), and how long transactions are holding the GCLx. Normally, GCLx acquire and release should be done within a couple of milliseconds. If transactions are waiting a few seconds to acquire the GCLx, or holding the GCLx longer, it indicates some catalog contention, which may be due to too many concurrent queries, or due to long-running queries, or system services holding catalog mutexes and causing other transactions to queue up. A query is given here; these particular system tables will help you further narrow down the contention. You can query the SESSIONS table to find any long-running user queries. You can query the SYSTEM_SERVICES table to find any service, like analyze row counts, moveout or mergeout, running for a long time. The DC all events table gives info on what slow events are happening.
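As a sketch, a query along these lines can surface GCLx hold times. The column names, and especially the `object_name` filter value, are assumptions from memory and should be verified against your Vertica version:

```sql
-- How long are transactions holding the global catalog lock (GCLx)?
-- Assuming 'grant_time' is when the lock was granted and 'time' is the
-- release timestamp, their difference is the hold duration.
SELECT node_name,
       transaction_id,
       time - grant_time AS held_for,
       description
FROM dc_lock_releases
WHERE object_name = 'Global Catalog'
ORDER BY held_for DESC
LIMIT 20;
```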
You can also query the SYSTEM_RESOURCE_USAGE table to find any particular system resource, like CPU, memory, disk I/O or network throughput, saturating on any node. It is possible that one slow node in the cluster could impact the overall performance of queries negatively. To identify any slow node in the cluster, we use a simple "SELECT 1" query, which just executes on the initiator node. On a good node, the "SELECT 1" query returns within 50 milliseconds. As shown here, you can use a script to run this "SELECT 1" query on all nodes in the cluster. You can repeat this test multiple times, say five to 10 times, then review the time taken by this query on all nodes in all iterations. If there is any one node taking more than a few seconds, compared to other nodes taking just milliseconds, then something is wrong with that node. To find what is going on with the node which took more time for the "SELECT 1" query, run perf top. Perf top gives info on the top functions in which the system is spending most of its time. These functions can be kernel functions or Vertica functions, as shown here. Based on where the system is spending most of its time, we will get some clue on what is going on with that node. Abhi will continue with the remaining part of the session. Over to you, Abhi. >> Abhi: Hey, thanks, Rakesh. My name is Abhimanu Thakur, and today I will cover some performance cases which we addressed recently in our customer clusters, applying the best practices just shown by Rakesh. Now, fixing a performance problem is always easier if we know where the problem is. And to understand that, like Rakesh just explained, the life of a query has different phases. The phases are pre-execution, which is the planning; execution; and post-execution, which is releasing all the acquired resources. This is very similar to a plane taking a flight path, where it prepares itself, gets onto the runway, takes off and lands back onto the runway.
So, let's prepare our flight to take off. This is a use case from a dashboard application where the dashboard fails to refresh once in a while, and there is a batch of queries which are sent by the dashboard to the Vertica database. Let's see how we can find where the failure, or the slowness, is. For the dashboard application, these are very short queries, so we need to look at the historical executions and, from those, try to find where exactly the time is spent, whether it is in the planning phase, the execution phase or in post-execution, and whether the timings are pretty consistent all the time, which would mean the plan has not changed between executions. This will also help us determine how much memory is used and whether the memory budget is ideal. As just shown by Rakesh, the budget plays a very important role. So DC_QUERY_EXECUTIONS is the one-stop place to go and find your timings, whether the time went into planning, into executing the plan, or into an abandoned plan. So, looking at the queries which we received and the times from the scrutinize, we find the average execution time is pretty consistent, and there is some extra time spent in the planning phase, which usually indicates resource contention. This is a very simple matrix which you can follow to find if you have issues: system resource contention, catalog contention and resource pool contention all contribute, mostly because of concurrency. Let's see if we can drill down further to find the issue in these dashboard application queries. So, to get the concurrency, we pull out the number of queries issued, what max concurrency was achieved, what the number of threads is, what the overall percentage of query duration is, and all this data is available in the V-advisor report. As soon as you provide a scrutinize, we generate the V-advisor report, which gives us complete insight into this data.
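The phase-timing lookup described above can be sketched as follows. The transaction id is hypothetical, and the exact step names in `execution_step` vary by version, so take this as a starting template:

```sql
-- Per-step timestamps for one transaction; the gaps between successive
-- 'time' values show where the elapsed time went (planning, resource
-- reservation, execution, and so on).
SELECT transaction_id,
       statement_id,
       execution_step,
       time
FROM dc_query_executions
WHERE transaction_id = 45035996273705982   -- hypothetical id
ORDER BY time;
```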
So, based on this, we definitely see there is very high concurrency, and most of the queries finish in less than a second, which is good. There are queries which go beyond 10 seconds and over a minute, so the cluster definitely had concurrency. What is more interesting to find from this graph is... I'm sorry if this is not very readable, but the topmost line you see is the selects, and the bottom two or three lines are the creates, drops and alters. So definitely this cluster has a lot of DDLs and DMLs being issued, and what they contribute is this: a large number of DDLs and DMLs causes catalog contention. So, we need to make sure that the batch we are sending is not causing too much catalog contention in the cluster, which delays the whole planning phase while the system resources are busy. At the same time, what we also see is analyze statistics running every hour, which is very aggressive, I would say. It should be scheduled on a need-only basis: if a table has not changed drastically, don't schedule analyze statistics for that table. A couple more settings, as shared by Rakesh, definitely play an important role in the moveout and mergeout operations. So now, let's look at the budget of the query. The budget of the resource pool is currently at about two GB, and that is the 75th-percentile memory. Queries are definitely executing at that same budget, which is good and bad, because these are dashboard queries; they don't need such a large amount of memory. The max memory, as shown here from the captured data, is about 20 GB, which is pretty high. So what we did is, we found that there are some queries run by a different user in the same dashboard pool, which should not be happening, as the dashboard pool is something like a premium pool, kind of a private runway to run your own private jet. And why I made that statement is, as you see, resource pools are like runways.
You have different resource pools, different runways, to cater to different types of planes, different types of flights. So, if you manage your resource pools well, your flights can take off and land easily. From this, we determined that the budget is something which could be tuned better. Now let's look... As we saw in the previous numbers, there were some resource waits, and like I said, resource pools are like your runways. So if you have everything ready and your plane is waiting just to get onto the runway to take off, you would definitely not want to be in that situation. In this case, what we found is that quite a number of queries waited in the pool, and they waited almost a second, which can be avoided by modifying the amount of resources allocated to the resource pool. So in this case, we increased the resource pool to provide more memory, which is 80 GB, and reduced the budget from two GB to one GB, also making sure that the planned concurrency was increased to match the memory budget, and we moved the user who was wrongly running in the dashboard query pool. Another thing we found in the resource pool is the execution parallelism, how it affects things and what changes with the number. Execution parallelism is something which allocates the number of threads, network buffers and all the data structures around them before the query even executes. In this case, the pool had AUTO, which defaults to the core count. Dashboard queries, not being too high on resources, just need to get what they want. So we reduced the execution parallelism to eight, and this drastically brought down the number of threads needed, without changing the execution time. So, this is all about how we could tune before the query takes off. Now, let's see what path we followed. This is the exact path we followed.
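The tuning just described can be sketched in one statement. The pool name and sizes are illustrative assumptions matching the numbers in this use case, not general recommendations:

```sql
-- More total memory, a smaller per-query budget (roughly memory size
-- divided by planned concurrency), and a fixed, modest execution
-- parallelism for short dashboard queries.
ALTER RESOURCE POOL dashboard_pool
    MEMORYSIZE '80G'
    PLANNEDCONCURRENCY 80     -- 80G / 80 gives ~1G budget per query
    EXECUTIONPARALLELISM 8;   -- instead of AUTO (core count)
```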
Hopefully this diagram helps; these are the things which we took care of. So, tune your resource pool, adjust your execution parallelism based on the type of queries the resource pool is catering to, match your memory sizes, and don't be too aggressive on your resource budget. And see if you could replace your staging tables with temporary tables, as they help a lot in reducing the DDLs and DMLs, reducing the catalog contention; and in the places where you cannot replace them, truncate the tables instead of dropping and recreating them. Reduce your analyze statistics frequency and, if possible, follow the best practices for the moveout and mergeout operations. So moving on, let's let our query take flight and see what best practices can be applied here. This is another, I would say, very classic example, a query which has been running fine and suddenly starts to fail. And the error seen is that the inner join did not fit in memory. What does this mean? It basically means the inner table is trying to build a large hash table, and it needs a lot of memory to fit. There are only two reasons why it could fail: one, your statistics are outdated, or two, your resource pool is not letting you grab all the memory needed. In this particular case, the resource pool is not allowing all the memory it needs. As you see, the query acquired 180 GB of memory, and it failed. In most cases, you should be able to figure out the issue by looking at the explain plan of the query, as shared by Rakesh earlier. But in this case, if you see, the explain plan looks awesome. There's no other operator like an inner broadcast or an outer resegment or something like that; it's just a hash join. So, looking further, we look into the projections. The inner is an unsegmented projection, the outer is segmented. Excellent, this is what is needed. So in this case, what we would recommend is to go find out further what the cost is. The cost to scan these rows seems to be pretty high.
There are the DC query executions and execution engine profiles tables in Vertica, which help you drill down to the smallest amounts of time and memory, and the number of rows used by individual operators per path. So, while looking into the execution engine profile details for this query, we found the time is spent on the Join operator, and it's the join inner hash table build time which is taking a huge amount of time. It's basically just waiting for the lower operators, Scan and StorageUnion, to pass the data. So, how can we avoid this? Clearly, we can avoid it by creating a segmented projection instead of an unsegmented projection on such a large table with one billion rows. Following the practice to create the projection... So this is the projection which was created, and it was segmented on the column which is part of the select clause over here. Now, the plan still looks nice and clean, and the execution of this query now takes 22 minutes 15 seconds, and most important, you see the memory: it executes in just 15 GB of memory. So basically, what happened is that the unsegmented projection, which acquires a lot of memory per node, is now gone; the query takes much less memory and executes faster, as the data has been divided across the nodes, with each node executing only a small share of the data. But the customer was still not happy, as 22 minutes is still high. Let's see if we can tune it further to make the cost go down and the execution time go down. So, looking at the explain plan again, like I said, most of the time you can look at the plan and say, "What's going on?" In this case, there is an inner resegment. So, how do we avoid the inner resegment? We can avoid the inner resegment, and most resegments, just by creating projections which are identically segmented, which means your inner and outer sides both have the same segmentation clause.
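A hedged sketch of the fix described above, with made-up table and column names. Segmenting a formerly unsegmented (replicated) projection on a large table spreads both the data and the hash-table memory across all nodes:

```sql
-- Replace the unsegmented projection on the billion-row table with one
-- segmented across all nodes on the join/select column.
CREATE PROJECTION sales_seg AS
SELECT sales_id, cust_key, amount
FROM sales
SEGMENTED BY HASH(sales_id) ALL NODES;

-- Populate the new projection so the optimizer can use it.
SELECT REFRESH('sales');
```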
The same was done over here: as you see, both sides are now segmented on sales ID and also ordered by sales ID, which helps the query drop from 22 minutes to eight minutes, and now the memory acquired just equals the pool budget, which is 8 GB. And if you see, most important, the hash join is converted into a merge join, thanks to the ORDER BY matching the segmentation clause and the join clause. So, what this gives us is a new global data distribution, and by changing the projection design, we have improved the query performance. But there are times when you cannot change the projection design and there's nothing much which can be done. In all those cases, as in the first case of the failed inner join, the join fails the first time, and on the second attempt Vertica replans with the spill-to-disk operator. You could let the system degrade by acquiring 180 GB for however many minutes the query runs, or you could simply use the hint so the query runs with spill to disk in the very first go, and the system keeps the resources it needs. So, use hints wherever possible, and spill to disk is definitely your option where there are no other options for you to change your projection design. Now, there are times when you find that you have gone through your query plan, you have gone through everything else, and there's not much you see anywhere, but you look at the query and you feel, "I think I can rewrite this query." And what makes you decide that is, you look at the query and you see that the same table has been accessed several times in the query plan; how can I rewrite this query to access the table just once? In this particular use case, a very simple one, a table is scanned three times for several different filters and then unioned. In Vertica, union is kind of a costly operator, I would say, because union does not know the amount of data which will be coming from the underlying queries.
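The identically-segmented design can be sketched as below. Table and column names are invented for illustration; the point is that both projections share the same segmentation and sort key, which is also the join key, enabling a merge join with no resegment:

```sql
-- Both projections segmented and ordered on the join key (sales_id).
CREATE PROJECTION sales_j AS
SELECT sales_id, amount
FROM sales
ORDER BY sales_id
SEGMENTED BY HASH(sales_id) ALL NODES;

CREATE PROJECTION returns_j AS
SELECT sales_id, reason
FROM returns
ORDER BY sales_id
SEGMENTED BY HASH(sales_id) ALL NODES;
```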
So we allocate a lot of resources to keep the union running. Now, we could simply replace all these unions with a simple OR clause. The simple OR clause changes the complete plan of the query, and the cost drops drastically. Now the optimizer knows almost exactly how many rows it has to process. So, look at your query plans and see if you could make the execution engine or the optimizer do a better job just by doing some small rewrites. For example, if some tables are frequently accessed, you could even use a WITH clause, which will do an early materialization and give you better performance; or replace the unions as I just shared; or replace your left joins with right joins; and use your hints, like shared earlier, for changing your hash table types. This is the exact path we have followed in this presentation. Hope this presentation was helpful in addressing, or at least finding, some performance issues in your queries or in your clusters. So, thank you for listening to our presentation. Now we are ready for Q&A.
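The union-to-OR rewrite can be sketched like this, with a hypothetical table and filters:

```sql
-- Before: the same table is scanned three times, then unioned.
SELECT id FROM events WHERE status = 'NEW'
UNION ALL
SELECT id FROM events WHERE status = 'RETRY'
UNION ALL
SELECT id FROM events WHERE status = 'FAILED';

-- After: one scan with an OR'd predicate -- the optimizer can now
-- estimate the row count directly from the single table scan.
SELECT id
FROM events
WHERE status = 'NEW' OR status = 'RETRY' OR status = 'FAILED';
```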
Tom Joyce | Mobile World Congress 2017
(upbeat music) >> [Announcer] Live, from Silicon Valley, it's theCUBE. Covering Mobile World Congress 2017. Brought to you by Intel. >> 'Kay, welcome back everyone. We are here live in Palo Alto, theCUBE studios, our new 4,500 square foot studio. We just moved in in January. We're covering Mobile World Congress for two days, 8 a.m. to six, every day today, Monday the 27th, and Tuesday, the 28th. That's Pacific Standard Time, of course. Barcelona's ending their day, people are at their dinners right now going to the after hour parties. Really getting into the evening festivities, the business development, and we're going to break down all the news with that. And we have Tom Joyce here for reaction. But first, my talking point for this segment. Tom, I want to get your reaction to this, is Mobile World Congress is going through a massive change as a show. CES became an automotive show, you saw that show. Mobile World Congress used to be a Telco show, device show. Now you're seeing Internet of Things, and Internet things and people, as Peter Burris from Wikibon pointed out on our opening today, where people are now the device, the phone, and the watches and the wearables, and the things are sensors, cars, cities, towns, homes, devices. Now you have this new connected Internet that goes to an extreme edge to wherever there's a digital signal and connection, powered by 5G. 5G is the big story at Mobile World Congress, certainly the glam is the devices, but those devices becoming more powerful with chips from Intel and Qualcomm and others. And as those devices become more powerful, the connected device, thing, or people, become much more powerful equation. The data behind it is a tsunami, and this, to me, is a step up in a game changing wealth creation, a value creation opportunity for the society or around the world, and for companies. 
So the question is, can this be the kind of change significantly impacting the world similar to the iPhone in 2007 when Steve Jobs announced the iPhone it changed the entire mobile landscape, and even Blackberry's making a comeback, and they were decimated by the iPhone. Can this 5G, this Internet of Things, Internet of Things of people, change the game, and what will happen? We believe it'll be massive. Tom, your reaction to this new change? >> Well, you know, I think you hit on a lot of it. I come at it from a different perspective, right? I spent 30 years in infrastructure, systems and software. So, when you're coming at it from that side, and you see this mobile world exploding, and Internet of Things starting to take off, and changes in terms of how the connectivity at the edge works, and this massive evolution, you can think about it from one of two ways. On one level, you can be terrified, you know? Cause it's all going away, (John laughs) all the stuff we built is going away. And, on the other hand, it creates a tremendous number of new opportunities. And I think we're only really just starting to see the creativity come out of the enterprise side in terms of not adapting to this change in mobile, but actually starting to invent things that will enable it and make Internet of Things possible. And, you know, new approaches to how silicon should be developed for those applications. New applications altogether. You know, I spent a lot of time recently looking at a number of different storage companies, and, you know, something fundamentally needs to change there in order for that technology to adapt, and guess what? It's now starting to really really come forward, and so, yeah, I think that what we're starting to see is the big engines, the big historical engines of innovation, starting to catch up to this big trend, and it's only the very beginning. >> We have Tom Joyce, who's an industry executive in the infrastructure area. 
Worked at EMC back in the early days, and then HPE and a variety of other companies. Tom, you're an expert in infrastructure, and this is what's interesting to me, as a technical person. You have the glam and the flair of mobile, the devices and the awesome screen capabilities, the size of the devices, the role of the tablet's now changing where it's going to become an entertainment device in home, and a companion to mobile. That's what people see. They see the virtual reality. They see the augmented reality. The coolness of some of this awesome software that needs all that 5G bandwidth. But, there's an under the hood kind of engine, on the infrastructure side that's going through a transformation. It's called network transformation because you have networks that move the data around. You have the compute power, the cloud, that computes on things. The data, and the storage that (laughs) stores it all. It's getting better on the device side, the handset, but also the stuff going on in the cloud. So I got to get your take. Why is now the time for the key network transformation? What are the key things happening now that really make this concept of network transformation so compelling? >> Well, you know, again, I'll take it from kind of a non-technical perspective, looking at it from an infrastructure guy standpoint, alright? What we've been looking at is, in the transformation of the compute platform, from inside your data center to the cloud, you know, all the things we've known kind of changing and going away. But, the parts the cloud hasn't really touched yet, or really transformed yet, are the piece between the cloud itself and that end user, or that remote office. You know, if you've got offices in far off lands, or, you know, small cities, getting the connectivity to be able to enable that new infrastructure platform is a challenge. 
And so, for a couple of years, there's been a lot of work done in network virtualization, and network functions virtualization, for ways to kind of break the stranglehold that a lot of these old proprietary technologies have had on that problem. And now we're starting to see new approaches to how you do WAN management across those, especially long distances. And I think that, especially with the growth in capacity demand from things like 5G, from things like Internet of Things, from the many different kinds of mobile devices we now have, it creates a forcing function on IT managers, and especially on telcos to say "geez, you know, we can't keep doing it with T1s. We can't wait 90 days to put in a T1 every time we open up a new building. We can't, you know, use the same old hardware because the cost model needs to change." And so there are, you know, quite a few companies, and by my count about a dozen of em that are looking at completely virtualized software ways to break that down. Do it flexibly, nine minutes instead of 90 days, a lot more performance. And so, you know, it's the demand is creating the opportunity but now you're starting to see innovators adapt and deliver new stuff to solve this problem. >> Tom, for the folks watching, I'll share some props for you. You've obviously been an executive in the infrastructure, but also at HPE prior to your role, and after your role they did some other things. But at HPE you were doing some mergers and acquisitions with Meg Whitman so you have a view of looking at the entrepreneurial landscape. So kind of, with that kind of focus, and also the infrastructure knowledge, what is some of the opportunities that the service providers in these telcos have? Because the network transformation that's happening with 5G and the software can give them a business model opportunity now that they have to seize on. This is the time. 
It seems like now is the acceleration point for those guys, and you can also apply it to, say, the enterprises as well, but there are opportunities out there. What are those opportunities for these service providers? >> Well, you know, I think if you're an established business there's a trade off between the bird in the hand and something that's disruptive but I might have to do anyway? And so I think some of these opportunities actually could potentially degrade profitability in the short term and that's, I think, where these guys kind of figure out "do I hold onto the old vine, or do I swing to the new vine?" And, it's a tough set of problems, but I think there is clearly opportunity to go completely software based, virtualize, around how you manage Wide Area Networking traffic. And I think some customers are starting to kind of force the telco providers to do that, by-- >> Andy Grove was the one who coined the term "eat your own before the competition does", but that's the dilemma, the innovator's dilemma that these telcos have and the service providers. If they don't reinvent the future, and hold onto the past--
But on that one part we were just talking about, and I don't know this company very deeply, but a company like Viptela, right? Viptela is going up against those big T1 sales models and saying "we're going to do it a different way". And it's about speed, it's about performance, about capacity, latency, cost. It's also about flexibility. Like, what if I could kind of totally re-engineer how everything's wired up right now on Tuesday, and do it differently on Wednesday? You know, what if I could set up entirely new business models on the fly as opposed to having to plan it months and months in advance? In that, the word agility is overused, but that's what that is. And so, I think as you move more and more into software for every one of these functions in the network, it brings with it this benefit of agility. And I think that's under-measured in terms of how people value that. You know, the velocity being able to change your business it's a lot more than what the gear cost, what the depreciation it was, you know, what the pipe cost. And so, I think as folks make those moves, and they can go faster and do more than their competition, it's a game changer. You know there's a big discussion about the, you mentioned, the compute layer, and the storage layer. The kinds of storage systems you need if you're going to deploy services as a service provider. Whether it's a telco, or a small VAR that's acting like a service provider. If you're going to compete there, you need stuff that's a lot more flexible, again, a lot more agile, than the traditional storage systems. Now, I think, the notion of software defined storage has been around for a while. Figuring out how you make money at that? That's still a work in progress. 
But, as folks move towards more of a service model for their business it's not going to look like-- >> So it sounds like what you're saying is, the first wave of that, from a table stakes standpoint, is speed and scale are kind of the first foundational thing that the storage guys have to get going. >> Yeah, I think, and storage is still the same. It has to be cost per, you know, cost per megabyte, gigabyte, terabyte. You need to have low latency, high I/O. Those are like the three things. And then the additional things are the services. Is it resilient? Now we're at a point where I think agility matters more than ever, right? If I can reconfigure everything and build a new service, and I can do it today versus plan for months, the benefit and the dollars around that are game changing. And the people they're game changing for are the service providers. >> Tom, I want to ask you a question from the mind of the average consumer out there, and we all have the relatives ask "hey, what's going on in your tech business?" Break it down for us. When people say "why can't my phone just go faster? Why can't I have better bandwidth?" They might not understand the complexities of what it takes to make all this stuff happen. What's holding back the acceleration, in your mind? Is it the technology? Is it the personnel? What is the straw that breaks the camel's back to accelerate this production of great tech? >> Yeah, I mean look, I'm actually one of those grandparents that's asking how come it's not going faster, (John laughing) so I may not have the complete answer, but I'm that frustrated person. I will say that, you know, we're in an interesting period of time in terms of how investments get made in new technology. And if you kind of, somebody very smart said to me the other day "try to think of the pure innovations that came out of large, established companies in the last 10 years".
And I've worked for a couple of em, right? But the pure ground-up innovations that became big, and you can't come up with a very long list, right? It's been really driven through the venture community, certainly in Silicon Valley, you know, it's been an engine for decades now. But that's where it comes from. And we've kind of been in a limbo cycle, where folks have invested a lot of money in some areas that haven't paid off. So, I think we're in a little bit of a gap, where there's a lot of money going into obvious spaces. One of those obvious spaces is security. You know, before that it was all these apps that we use for social. But there hasn't been enough engineering in core hard tech silicon to drive these new apps. There hasn't been enough hard engineering in building entirely new, you know, storage platforms in software that scale at service provider levels, cause that's going to cost a lot of money. So I think we're starting to see the beginnings of that, but it takes time to play it all out. >> It's interesting, the whole digital life thing is coming into the transformation, and Reuven Cohen, who was on earlier, said "Snapchat IPO is the big story". But if you look at, say, Snapchat, what they're doing, they're both a media company and a platform. With the fake news on the Facebook platform in the previous election, you're seeing these platforms delivering the kind of value that they weren't really intending; the unintended consequence for these platforms is that they become other things too in digital. Like a media company when they weren't really trying to be that, and media companies come in trying to be platforms. So there seems to be a platform war going on around who is going to control the platforms. And the question that I always ask is, okay, how does this work in a multi-company environment where composability is much more the new development philosophy than owning a stack, owning technology?
>> Well, I agree with ya, and I think that, again, if you look at it from the standpoint of a customer that's going to buy a lot of their services from the cloud and a lot of their services from other service providers, you have to hit the price points and the performance and the reliability. After that, it's how fast can you turn me on? How fast can you change? It's back to that software-based reconfiguration on the fly. If you can then bring to bear the ability to do that with different qualities of service, and more automated control of those changes, that's gold, right? But I don't think we have seen that actually implemented yet at scale, in ways that people can consume. So, again, I think you're seeing a wave of investment by the VC folks in a number of areas, one of em is new kinds of silicon, new kinds of next generation flash technologies, and things like that. I think you're seeing service provider scale storage technologies starting to emerge. You're starting to see fundamental changes in how Wide Area Networks are managed, all in software, right? So, I think you play that through in the next year or two, the demand from mobile usage, but especially from Internet of Things, and its related demand for data, is going to create new markets, new market opportunities. And who will win? I don't know, but there's a lot of smart investors making bets there. >> So certainly you see a lot of the old guard out there. Intel, for instance, sponsored this program, gave us the ability to do the programming, thanks to Intel, and also SAP contributed a little bit. But you got HPE out there, you see IBM, all these guys are out there, these traditional suppliers. What's the one thing that you can point to, in this new era of supplying technology to the new guard of winners, whether they're telcos, or providers, or enterprises? The game certainly changed with the cloud. What's the blind spot for some of these guys?
And where should they be looking for M&A activity-- >> [Tom] Yeah. >> [John] If you're the CEO of a big company, and you say "hey, I got to pivot, I got to fill my product lines", where's the order of operations from a focus standpoint? >> Okay, well, you take a couple of those companies, and I'd say that I've both observed and been guilty of some of (John laughing) what's not working now, right? And the instinct, if you're in one of those places, is to say "look, we've got all this technology. We've got servers, storage, networking. What if we just bundle up what we've got and point it at this new set of applications?" And I think you can make up some ground there, you can do some stuff, but at the end of the day the new requirements require new technology. And I think the larger companies haven't been successful at investing in new stuff on their own, like memristor, or some of these new technologies folks have talked about, like The Machine. They get announced, they come, they go. Why? Because they're expensive, they're really hard. >> [John] It takes real R&D. >> It takes years. Yeah, years of real R&D. And it's difficult in the economic environment that we're in to sustain that. So the reality is, I think there needs to be a lot more aggressive focus on identifying hard technology that can feed the supply chain for some of those solutions. And, that's what I think-- >> [John] That's what a startup opportunity is too. >> Startup opportunity-- >> Those guys got to fill that void cause they're doing the R&D. >> But the startups that are going to succeed in the future that relate to this problem, they're not the guy building an app. That's not where it is. It's technology that's actually hard. That's why I think you see things like Nvidia. Why is that stock so high? Well, they developed unique silicon that was applicable to a whole bunch more areas than folks realized, right? >> So the difference is, if I hear what you're saying, is there's two approaches.
Technology looking for a problem, and then a problem that's solved by technology. Kind of a different mindset. >> Yeah, exactly right. And I think that if you take Tesla as a very well-known example, the amount of demand for analytics data is just extraordinary there, right? And that will lead to more requirements to say "no, no, no. I can't use your old stuff. You can bundle up the crap you have. (John laughs) You need to give me something that's tuned for the scale I'm talking about now and next." And I think that we're starting to see the venture community, certainly in my travels around the Valley here over the last couple of months, saying "we're probably going to have to get in earlier, and we're probably going to have to invest more and longer. Because the payoff's there, but these problems are big and require real hard technology." >> Well, Tom Joyce, thanks for coming in and sharing the commentary and reaction to Mobile World Congress. Real quick, what are you up to these days? I know you're looking at a bunch of CEO opportunities. You've been talking to a lot of VC firms in the Valley. What are you poking at? What's getting your attention these days? >> Well, you know, part of what I was just talking about is exciting, you know? There's a bunch of new things out there. There's some young people that are investing in the next wave of infrastructure, and so I'm looking at some of those things. And, you know, I may do a CEO thing. I've had a few opportunities like that, and I may focus more on the business development side and the investing side, cause I've got a lot of experience-- >> But you're looking for technology plays? >> Yeah. >> Not in the, say, a me-too kind of thing-- >> No, I want to do something fun and big and new. (John laughing) You know, something that has potential for super growth, and so there are a lot of those here now. So it's a-- >> Well, I think you made a good observation, and I think this applies to Mobile World Congress.
One is it's kind of turning into an app show on one level, because apps are the top of the stack. That's where the action is, whether it's an IoT app or a car or something. But then there's the hard problems under the hood. >> I think that's right. And I think that's where a lot of the money's going to be. >> Yeah, and Intel's certainly done a great job. We're on the ground with Intel. We're going to have some more call-ins to analysts, and our reports on the ground at Mobile World Congress. Stay with us here at theCUBE, in Palo Alto, live, in-studio coverage of Mobile World Congress. We're going to be doing call-ins, folks hitting the parties, certainly hope they're a little bit looser from a couple cocktails and tapas later in the night. Hopefully we'll have them calling in and get all the dirt, and all the stories. And from Mobile World Congress, we'll be right back with more after this short break. Thanks to Tom Joyce for coming, appreciate it. >> [Tom] Thank you too. >> Taking the time. We'll be back. (upbeat music)
Ross Rexer & Eli Lilly - ServiceNow Knowledge13 - theCUBE
>> Okay, we're back. This is Dave Vellante, I'm with Wikibon.org, and this is SiliconANGLE's theCUBE. theCUBE is a live mobile studio; we come into events. We're here at Knowledge, ServiceNow's big customer event, at the Aria hotel in Las Vegas, and we've got wall-to-wall coverage today, tomorrow, and part of Thursday. As many of you know, we were at Sapphire Now, the big SAP customer show; we're simulcasting that on SiliconANGLE too, but we're here in Las Vegas. The ServiceNow conference is all about transformation, transforming from no to now. We've kind of got a double whammy segment here: virtually every industry is transforming, and certainly Big Pharma is transforming quite dramatically, as well as the IT components of many industries. Ross Rexer is here, he's a managing director at KPMG, the global consultancy, and T Juan Lumpkin, who's an IT practitioner for Eli Lilly. Gentlemen, welcome to theCUBE. >> Okay, thanks. >> So Ross, let's start with you. At a high level, what's happening in the pharmaceutical industry in general, Big Pharma? How is the industry itself transforming? And then we'll get into the IT piece. >> Sure. So many of the Big Pharmas find themselves today in a situation that is unique to their business, industry, and market, where a lot of blockbuster drugs, which have been significant sources of revenue over the years, are starting to come off patent, and with that it brings competition and a loss of revenue. So the big pharmas are all in a very coordinated, methodical process right now to resize their business, and at the same time enable the R&D function to bring new drugs to market, focusing on patient outcomes, and that will happen in different ways than they have probably ever done before. So the business model itself has changed, and along with it all the support functions, like IT, of course, too. >> So it's all about the pipeline, right? And the challenge, if I understand it, is that historically you've got the big pharma companies, they would, you know, go about their
thing and develop these drugs, and they'd get a blockbuster, and it was, relatively speaking, a relatively slow-paced environment. That's changing, if I understand it correctly. What's driving that change? >> So the innovation around medicines today is much different than it has been over the last 10 to 20 years, in that the composition, and the use of different biotech components to create medicines, is now being sourced in different ways. Historically, pharma built itself and really invested and was really a research and development company, almost entirely in-house, right? So all the support systems and everything, the way that the business was run, was around that. Nowadays the pharmas are collaborating with smaller providers, many of them, in ways that, again, they just historically have never done. Everything was done in-house to bring drugs to market, and now it's shifted absolutely to the opposite side, where big pharmas are relying on these third-party providers for all stages of R&D, and ultimately FDA approval and the release of these drugs. >> So T Juan, I introduced you as an IT practitioner at Lilly. Talk more specifically about your role there. You're focused on infrastructure, IT service management? Tell us a little more about that. >> Yeah, so my role is about service integration. Think about those services that we deliver to our internal customers within Lilly, and how do we do that across our complex ecosystem, where you have multiple different IT departments, and you have multiple suppliers who have different rigor and complexities in that space. And so our job is, how do we minimize that complexity for our internal business partners, making sure that the way we deliver IT is seamless for our internal customers. >> Okay, so we heard Ross talking about the pressures in the industry. From an IT practitioner standpoint, how does that change your life? What are the drivers, and what's the business asking you to do? >> Just like anyone, we need more volume,
but we also have to do that under constraint. And so for us, how do we get more efficient? So you think about the constraints we're operating under: you can only do so much outsourcing, you can only do so much change, and so you have to ask, how do I start running my business more efficiently? And I think that's the big shift in IT: you're moving from internal infrastructure towers to truly looking at how do we deliver IT services. And part of delivering IT services is making sure that we're a value-added partner, and also being sure that we're competitive with the other sources our businesses could get services from, from an IT perspective. >> Yeah, so 10 years ago we used to talk a lot about demand management, and to me that's why I love this "from no to now", because demand management actually ended up just being "no", we just can't handle the volume. So you mentioned constraints, you've got constraints, you've got to be more efficient. So talk a little bit about what you did to get more efficient. >> For us it was all about standardization. How do we build standardization across our IT infrastructure and ecosystem, within our IT partners and our external partners? What that does is give us flexibility, so that we can deliver our systems and be more agile. Think about our internal space: we had a lot of complexity, we had multiple procedures, multiple processes, different business units operating or delivering IT services in an inconsistent manner. What we've been able to do is streamline that; we've been able to be more consistent internally and align on the common processes and how we deliver those IT services to our customers. >> So Ross, you're talking about the sort of changing dynamic of what I would call the pharmaceutical ecosystem. That sounds like it's relatively new in pharma. It used to be sort of a go-it-alone: the big guys, hey, we're multi-billion dollar companies, we don't need these little guys. You see
all these startups coming out, they're really innovative, they're faster. So take us through sort of how that's evolving, how companies are dealing with the ecosystem, and what kind of pressures that puts on IT. What are you seeing out there? >> So as T Juan was mentioning, this push to IT service integration is kind of one of the next frontiers of "now", right? Being able to have the single pane of glass, the single system of record for IT, and the ability to bring standardized services up and down in a coordinated, consistent way, has allowed the bigger, more monolithic-type companies to be able to interact with these smaller, more agile, more tech-savvy partners without overburdening them. So the little provider, who has maybe less overhead of IT infrastructure and processes, would find it hard to collaborate electronically with a big pharma if they had to adopt the big pharma's old-style processes. So service integration is all about allowing for the easy plug-and-play of these providers, and establishing the reference set of processes and the supporting data that's needed to govern those transactions over the length of the outsourcing arrangement with that provider, in a way that doesn't overburden them but provides the company, Big Pharma, the ability to have transparency, the ability to see risks before they're happening, and to manage the cost. >> So talk about your practice a little bit. What role do you play? Obviously you've got this increasingly complex ecosystem evolving, and they've definitely got different infrastructures. How do you sort of mediate all this? >> So at KPMG, our go-to-market offering and our solution set is based around a set of leading practices that we have established over the past 17 years, for example, that we've been in the IT service management consulting and advisory business. So we have these accelerators that we can bring to a project and engagement like the one
we're at, Eli Lilly, where we can quickly, faster than ever, establish a common ground for those processes, the operational processes first and foremost, that doesn't require years and years of consultancy and process engineering, the 20-years-ago type of thing. So our role in that is to provide the basis for the operating model that's going to go forward, and allow the core customer as well as these other providers to get there faster, to get operating faster. >> So T Juan, we've been hearing a similar pattern from the customers that we've talked to: a lot of stovepipes, a lot of legacy tools, a lot of uncoordinated activities going on. Is that what you saw at Lilly? Would you describe that as an accurate depiction of the past? >> I think you're being kind. >> Yeah, I'm sure. We're kind on theCUBE, we don't like to beat our guests up. >> Not to overuse the "ERP for IT" term, but this is something IT has done for our business partners over the years, we just haven't done it for ourselves. So if you think about the SAPs of the world, where you give your CIO, your CFO, a one-click look at the financial assets of the company; you think about, from a CRM perspective, we're doing that for our sales force; we've done that from an HR perspective; but we haven't taken the time to look at it from an IT perspective: how do I give the CIO that same visibility across our portfolio of services, so that he can ask those same questions and have that same visibility? >> So I want to add a little color to this whole ERP for IT, though. Of course, on the one hand, the sort of single system of record, that's a positive. But when you think of ERP, I say, we were at SAP Sapphire, there's a lot of complexity in ERP, and with that type of complexity you'd never succeed. So what's your experience been thus far with regard to the complexity? My sense is it's not this big monolithic system, it's a cloud-based, SaaS-based system. Talk about that a little bit. >> Well, for us it's getting
to a set of standards. It actually helped reduce the complexity. You have complexity when you have multiple business procedures across the organization delivering services, and so getting to that single source, that single record, has actually helped to reduce a lot of complexity on our part and made it easier for us to deliver customer service for our customers. The other piece of that is the singularity of vision of how we deliver IT. Right now within our business, depending on what area you're in, you may get IT service that's delivered slightly differently from each area. We've been able to streamline that and say, this is how you're going to receive IT services, and make it a more predictable experience for our internal users. >> So Ross, I want to talk about this notion of a single system of record. Before I ask you why it's so important, what are we talking about here? Because today you've got a single system of record for your transactions, you might have a single system of record for your data warehouse, all these single systems of record. So what do you mean by a single system of record? >> So when we're talking with ServiceNow, and specifically in the IT service management domain, what we're talking about is having integrated the capability to see data across the different data domains, if you like. So operational data, performance data, service-level data, coupled with the IT finance data, as well as, as T Juan put it, a 360-degree vision of your assets. So linking all those sources of data together in a way that can be used for analytics, maybe for the first time ever. We use the analogy of IT intelligence, right? What we've given our business partners in business intelligence over the years, IT's never had that. So the ability to provide IT intelligence allows the leadership to have data, have information, that they can take decisions on, and then ultimately become predictive with that, right? So to be able to have the
knowledge to know what we're doing, to make the right choices, and in the future be able to do some predictive analysis. Again, back to the point about demand: we really never got it one hundred percent right over the years, we've talked about it a lot, but now there's the ability to understand the consumption, and to have the levers to influence demand and see it grow. >> I want to go back to this business process discussion. You sort of referenced, 20 years ago, the whole BPR movement, business process re-engineering. It seems to me that what occurred was you had, let's say, a database or some kind of system, and maybe there was a module, and then you built a business process around that. And so you had relatively inflexible business processes; they were hard to change. Are you seeing that change? Are we at the cusp of the dawn of a new era, where I can actually create whatever business process I want around that single system of record? Is that truly a vision that's coming to fruition? >> We believe it is, and in our experience it is starting to happen. And I think ServiceNow, with their platform, is one of the emerging leaders in this space that's allowing that to happen today. So you have a concise platform that allows you enough flexibility to build new processes, but has the common data structure, has the common user interface, has the common workflow set, all wrapped in an easy-to-maintain type of platform, which is what, 20 years ago, we wished we had, and we tried to build in many different ways and ended up mostly cobbling things together. But we really believe it, and again, we're starting to see success out there. >> [Dave] So the platform question is solved, and we're now able to get to the process. Historically, we delivered value, plenty of value; the problem is so much of that value was sucked up by the infrastructure, and not enough went into the innovation around it. T Juan, my question to you is, so people don't like
change, naturally. Now maybe it's different in IT, maybe they want change in IT, but did you see initial resistance, "no, we have this way of doing it, we don't want to change", or are people enthusiastic about change? Talk about that a little bit. >> You hit it spot-on. Absolutely, the technology is the easy part of it; it's really the change part that's the most difficult piece of it. And I would say we've done a lot of work just to align the organization, and we've had a lot of support from not only our internal IT people but also our senior leadership team. So we've gotten support, we've seen a lot of buy-in. Not saying it's going to be easy, it's not gonna be easy, but I feel that we've got the right momentum now to make this type of change. To get the business on board, part of it has been being able to articulate the value that we're going to receive from this initiative. >> So it's early days for Lilly, you guys just got started on this journey, not yesterday, but you're in a position to give some advice to your fellow practitioners. So let me ask you guys both, starting with T Juan: what advice would you give to fellow practitioners that are looking to move in this direction? >> Great. I would say, first of all, you have to have the business alignment. You need to make sure that you can clearly articulate the value of the change to the company, so you can talk not in terms of process but in terms of outcomes that we're going to drive for our business partners. Once you're able to describe those outcomes, then you can have the conversation on what's the work it's going to take to get there. It's not an easy journey, so you have to be able to paint that picture accurately for our teams, and also talk about how we're going to support them through the process. So we're going to talk about the value, we're going to paint the picture of the journey, and we're going to support you throughout that process. >> Okay, Ross, you're talking
to CIOs. What's your main point of advice for CIOs in this regard? >> It's to look at the transformation as transformational, right? It can be a set of tactical projects and tactical wins based on outcomes that you're looking for. However, in order to truly change the way your IT function runs as a business, to do all these great things that we're talking about today, you have to have the vision and understand that there are a series of building blocks that will get you incremental value along the way. This is not a quick, you know, product slam. Again, maybe 20 years ago it was about "let's swap this software for that software and we're going to be good". It's not about that, and that's not going to get you the transformation. So it's about transformation, it's about the metrics to be able to prove that you are transforming, and continuous improvement. >> Ross, T Juan, thanks very much for coming on theCUBE and sharing your story. We could go on forever, we're getting the hook, but really appreciate you guys coming on. >> Thanks. >> Thanks for having us. >> Alright, thanks for watching everybody. We'll be right back with our next guest. Chris Pope is here, he's the director of product management for ServiceNow, so we're going to double-click on the platform and share with you some greater information about that. This is theCUBE, I'm Dave Vellante, we'll be right back.