
Breaking Analysis: Moore's Law is Accelerating and AI is Ready to Explode


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante.

>> Moore's Law is dead, right? Think again. Massive improvements in processing power, combined with data and AI, will completely change the way we think about designing hardware, writing software, and applying technology to businesses. Every industry will be disrupted. You hear that all the time, and it's absolutely true, and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we're going to unveil some new data suggesting that we're entering a new era of innovation, one powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade.

Moore's Law is dead, you say? We must have heard that hundreds, if not thousands, of times in the past decade. EE Times has written about it; so have MIT Technology Review, CNET, and even industry associations that have lived by Moore's Law. But our friend Patrick Moorhead got it right when he said, "Moore's Law, by the strictest definition of doubling chip densities every two years, isn't happening anymore." And you know what, that's true; he's absolutely correct. He couched that statement with "by the strictest definition" for a reason: he's smart enough to know that the chip industry is a master of workarounds.

Here's proof that the death of Moore's Law, by its strictest definition, is largely irrelevant. My colleague David Floyer and I were hard at work this week, and here's the result. The fact is that the historical outcome of Moore's Law is actually accelerating, and quite dramatically. This graphic digs into the progression of Apple's SoC (system on chip) developments, from the A9 through the A14, the 5-nanometer Bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time, for three processor types: the CPU, which we measure here in terahertz (the blue line, which you can hardly even see); the GPU, measured in trillions of floating point operations per second (the orange line); and the NPU, the neural processing unit, measured in trillions of operations per second (the exploding gray area).

Now, historically, we always rushed out to buy the latest and greatest PC because the newer models had faster clocks, more gigahertz. Moore's Law would double that performance every 24 months, which equates to about 40% annually. CPU performance growth has since moderated to roughly 30% annual improvement. So technically speaking, Moore's Law as we knew it is dead. But combined, the improvements in Apple's SoCs since 2015 have been on a pace higher than 118% annually. And the real figure is even higher, because for these three processor types we're not even counting the impact of the DSPs and accelerator components of Apple's system on a chip, which would push it higher still.

Apple's A14, shown on the right-hand side here, is quite amazing. It has a 64-bit architecture, many, many cores, and a number of alternative processor types. But the important thing is what you can do with all this processing power.
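As a quick aside, the growth-rate arithmetic above is easy to check. Here is a minimal sketch using only figures cited in this segment (and assuming a roughly five-year A9-to-A14 span):

```python
# Doubling every 24 months implies an annual rate of 2**(1/2) - 1, roughly 41%,
# which matches the ~40% figure cited for classic Moore's Law.
doubling_period_years = 2
moores_law_cagr = 2 ** (1 / doubling_period_years) - 1
print(f"Moore's Law annual rate: {moores_law_cagr:.0%}")  # ~41%

# A combined pace "higher than 118% annually" compounds dramatically:
# over an assumed five-year A9-to-A14 span it implies roughly 49x total growth.
combined_annual_rate = 1.18
years = 5
print(f"Implied total improvement: {(1 + combined_annual_rate) ** years:.0f}x")  # ~49x
```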
In an iPhone, the types of AI that we show here continue to evolve: facial recognition, speech, natural language processing, rendering video, helping the hearing impaired, and eventually bringing augmented reality to the palm of your hand. It's quite incredible.

So what does this mean for other parts of the IT stack? Well, we recently reported Satya Nadella's epic quote that "we've now reached peak centralization," and this graphic paints a quite telling picture. We just showed that processing power is exploding; consequently, the costs are dropping like a rock. Apple's A14 costs the company approximately 50 bucks per chip. Arm, at its v9 announcement, said it will have chips that can go into refrigerators to optimize energy usage and save 10% annually on power consumption. They said this chip will cost a buck: one dollar to shave 10% off your refrigerator's electricity bill. It's just astounding. But look at where the expensive bottlenecks are: networks and storage.

So what does this mean? It means processing is going to get pushed to the edge, i.e., wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. With custom silicon and all that processing power placed throughout the system, AI is going to be embedded into software and hardware, and it's going to optimize workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99% of it, is going to stay at the edge. We love to use Tesla as an example. The vast majority of the data a Tesla car creates is never going to go back to the cloud; most of it doesn't even get persisted. I think Tesla saves something like five minutes of data. But some data will occasionally connect back to the cloud to train AI models, and we're going to come back to that.

This picture says that if you're a hardware company, you'd better start thinking about how to take advantage of that exploding blue line. Cisco is already designing its own chips. But Dell, HPE (which used to do a lot of its own custom silicon), Pure Storage, NetApp, the list goes on and on: in our view, either you start designing custom silicon or you get disrupted. AWS, Google, and Microsoft are all doing it for a reason, as is IBM, and as Sarbjeet Johal said recently, this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected and bring this processing power to bear to create capabilities like we've never seen before.

So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside: interestingly, in his book "Seeing Digital," author David Moschella says there's nothing artificial about this. He uses the term machine intelligence instead of artificial intelligence and says there's nothing artificial about machine intelligence, just as there's nothing artificial about the strength of a tractor. It's a nuance, but it's interesting nonetheless; words matter. We hear a lot about machine learning and deep learning; think of them as subsets of AI. Machine learning applies algorithms and code to data to get "smarter," to make better models, for example, which can lead to augmented intelligence and help humans make better decisions.
These models improve as they get more data and are iterated over time. Deep learning is a more advanced type of machine learning that uses more complex math. But the point we want to make here is that today, much of the activity in AI is around building and training models, and that's mostly happening in the cloud. We think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model: taking real-time data from sensors, processing it locally, applying the training that was developed in the cloud, and making micro-adjustments in real time.

So let's take an example; again, we love Tesla examples. Think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes data on friction, road conditions, tire angles, tire wear, tire pressure, all of this data, and it keeps testing and iterating that model until it's ready to be deployed. Then all this intelligence goes into an inference engine, a chip that goes into the car, takes data from the sensors, and makes those micro-adjustments in real time on steering, braking, and the like. Now, as we said before, Tesla persists the data for only a very short time, because there's so much of it that it can't all be pushed back to the cloud. But the car can selectively store certain data if it needs to and send it back to the cloud to further train the model (a code sketch of this pattern appears below). Say, for instance, an animal runs into the road during slick conditions. Tesla might want to grab that data because it has noticed a lot of accidents in New England in certain months. Maybe it takes that snapshot, sends it back to the cloud, combines it with other data from other parts of the country or other regions of New England, and perfects the model further to improve safety. This is just one example of the thousands and thousands that are going to develop in the coming decade.

I want to talk about how we see this evolving over time. Inference is where we think the value is; that's where the rubber meets the road, so to speak, based on the previous example. This conceptual chart shows the percentage of spend over time on modeling versus inference, along with some of the applications that get attention today and how they will mature as inference becomes more and more mainstream. The opportunities for AI inference at the edge and in IoT are enormous, and we think that over time 95% of that spending is going to go to inference, where it's probably only 5% today. Today's modeling workloads are prevalent in things like fraud, adtech, weather, pricing, and recommendation engines, and those will keep getting better and better over time. In the middle here, we show the industries that are all going to be transformed by these trends.

Now, one of the points Moschella makes in his book is that historically, vertical industries were pretty stovepiped. They had their own stacks (sales and marketing, engineering, supply chains, et cetera), and experts within those industries tended to stay within them, largely insulated from disruption by other industries, unless perhaps they were part of a supply chain. But today you see all kinds of cross-industry activity: Amazon entering grocery and entering media, for example.
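Here is the sketch of that Tesla-style edge-inference loop promised above: a minimal illustration of the pattern described in the segment (run a cloud-trained model locally, keep only a short rolling buffer, upload rare events for retraining). Every name in it, from read_sensors to upload_to_cloud, is hypothetical rather than any vendor's actual API:

```python
from collections import deque

BUFFER_SECONDS = 300        # roughly the "five minutes" of data mentioned earlier
ANOMALY_THRESHOLD = 0.95    # hypothetical cutoff for events worth keeping

def inference_loop(model, read_sensors, actuate, upload_to_cloud):
    buffer = deque(maxlen=BUFFER_SECONDS)    # most data is never persisted
    while True:
        reading = read_sensors()             # friction, tire pressure, angles...
        buffer.append(reading)
        adjustment = model.predict(reading)  # micro-adjustment, computed locally
        actuate(adjustment)                  # steering, braking, and the like
        if model.anomaly_score(reading) > ANOMALY_THRESHOLD:
            # Rare events (the animal in the road) go back to the cloud,
            # where they are used to further train the model.
            upload_to_cloud(list(buffer))
```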
Apple is in finance and potentially getting into EVs. Tesla is eyeing insurance. There are many, many examples of tech giants crossing traditional industry boundaries, and the reason is data. They have the data, they're applying machine intelligence to it, and they're improving. Auto manufacturers, for example, will over time have better data than insurance companies. DeFi (decentralized finance) platforms are going to use the blockchain, and they're continuing to improve. Blockchain performance today isn't great; all that encryption is very overhead-intensive. But as these platforms take advantage of this new processing power, better software, and AI, they could very well disrupt traditional payment systems. Again, there are so many examples here.

But what I want to do now is dig into enterprise AI a bit. Just a quick reminder: we showed this last week in our Armv9 post. This is data from ETR. The vertical axis is Net Score, a measure of spending momentum. The horizontal axis is Market Share, or pervasiveness in the dataset. The red line at 40% is a subjective anchor that we use; anything above 40% we consider really good. Machine learning and AI is the number one area of spending velocity and has been for a while. RPA is right there; frankly, it's an adjacency to AI, and you could even argue it's part of the same trend. And it's the cloud where all the ML action is taking place today. But we think that will change, as we just described, because data is going to get pushed to the edge.

This chart shows some of the vendors in that space: the companies that CIOs and IT buyers associate with their AI and machine learning spend. It's the same XY graph, spending velocity by market share on the horizontal axis. Microsoft, AWS, and Google, the big cloud players, dominate AI and machine learning. Facebook's not on here; Facebook has great AI as well, but it's not enterprise tech spending. These cloud companies have the tooling, the data, and the scale, and as we said, lots of modeling is going on there today. But this work is increasingly going to be pushed into remote AI inference engines that collectively will have massive processing capabilities. So we're moving away from that peak centralization, as Satya Nadella described.

You see Databricks on here; they're seen as an AI leader. SparkCognition is off the charts, literally, in the upper left; they have an extremely high Net Score, albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI; they're super high on the y-axis. Dataiku helps create machine-learning-based apps. C3.ai, which you're hearing a lot more about, is an enterprise AI firm with Tom Siebel involved; you hear a lot of their ads now about doing AI in a responsible way. Really, that kind of enterprise AI has sort of always been IBM Watson's calling card. There's SAP with Leonardo and Salesforce with Einstein, and IBM Watson is right there, just at the 40% line. You see Oracle as well; they're embedding automated intelligence, machine intelligence, into what they call their self-driving database. You see Adobe there too. So a lot of typical enterprise company names, and the point is that these software companies are all embedding AI into their offerings. If you're an incumbent company trying not to get disrupted, the good news is that you can buy AI from these software companies. You don't have to build it.
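A side note on the ETR metric behind these charts: the sketch below shows how a Net Score style spending-momentum measure is commonly computed from survey responses, as the share of customers raising spend minus the share lowering it. ETR's exact answer categories and weighting may differ, so treat this as a hedged approximation:

```python
def net_score(adopting, increasing, flat, decreasing, replacing):
    """Percent of respondents raising spend minus percent lowering it."""
    total = adopting + increasing + flat + decreasing + replacing
    return 100 * ((adopting + increasing) - (decreasing + replacing)) / total

# Hypothetical vendor: 20 adopting, 30 increasing, 35 flat, 10 decreasing,
# 5 replacing -> 35%, just under the 40% "really good" line used above.
print(f"{net_score(20, 30, 35, 10, 5):.0f}%")
```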
You don't have to be an expert at AI, either. The hard part is going to be how and where to apply it, and the simplest answer is: follow the data. There's so much more to the story, but we have to leave it there for now, so let me summarize.

We have been pounding the table that the post-x86 era is here, and it's a function of volume: Arm wafer volumes are 10X those of x86. Pat Gelsinger understands this; that's why he made that big announcement. He's trying to transform the company. The importance of volume in lowering the cost of semiconductors can't be overstated. And today we've quantified something we really haven't seen before: the actual performance improvements we're seeing in processing today are far outstripping anything that came before. Forget Moore's Law being dead; that's irrelevant. The original finding is being blown away this decade, and who knows what the future holds with quantum computing. This is a fundamental enabler of AI applications.

As is most often the case, the innovation is coming from consumer use cases first. Apple continues to lead the way, and we think Apple's integrated hardware and software model is increasingly going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing that deep integration; you see this with Oracle too, which is really a good example of the iPhone for the enterprise, if you will. It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips, as we discussed last week with Arm's announcement, especially given the diversity of edge use cases. And it's the direction Pat Gelsinger is taking Intel, trying to provide more flexibility. One aside: Gelsinger may face the massive challenges we laid out a couple of posts ago in our Intel breaking analysis, but he is right on, in our view, that semiconductor demand is increasing with no end in sight. We don't think we're going to see the ebbs and flows, the boom and bust cycles, that the semiconductor business has seen in the past. We just think prices are coming down, the market is elastic, and demand for fab capacity is absolutely exploding.

Now, if you're an enterprise, you should not stress about trying to invent AI. Rather, you should focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying AI, not building it, and you're going to be applying it. Data, as John Furrier has said in the past, is becoming the new development kit. He said that 10 years ago, and he seems to have been right.

Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon and AI throughout your product portfolio, in storage and networking, and you'll increasingly bring compute to the data. That data will mostly stay where it's created. Again, systems, storage, and networking stacks are all being completely re-imagined. If you're a software developer, you now have incredible processing capabilities in the palm of your hand, and you're going to be writing new applications to take advantage of them and use AI to change the world, literally.
You'll have to figure out how to get access to the most relevant data. You'll have to figure out how to secure your platforms and innovate. And if you're a services company, your opportunities to help customers that are trying not to get disrupted are many. You have the deep industry expertise and the horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah, well, that's a whole other topic. I think for now we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something we're concerned about, and we should protect it. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that.

Okay, that's it for today. Thanks to David Floyer, who helped me with this segment again and did a lot of the charts and the data behind it; he's done some great work there. Remember, these episodes are all available as podcasts wherever you listen; just search "Breaking Analysis podcast," and please subscribe to the series. We'd appreciate that. Check out ETR's website at ETR.plus. We also publish a full report with more detail every week on Wikibon.com and siliconangle.com, so check that out. You can get in touch with me at dave.vellante@siliconangle.com, DM me on Twitter @dvellante, or comment on our LinkedIn posts; I always appreciate that. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well, and we'll see you next time. (bright music)

Published Date : Apr 10 2021


Eric Herzog & Mark Godard | IBM Interconnect 2017


 

>> Narrator: Live from Las Vegas, it's theCUBE, covering Interconnect 2017. Brought to you by IBM.

>> Hey, welcome back everyone. We're live here in Las Vegas for IBM Interconnect 2017, SiliconANGLE theCUBE's exclusive coverage of the event. I'm John Furrier, with my co-host Dave Vellante. Our next two guests are Eric Herzog, Vice President of Marketing for IBM Storage, and Mark Godard, Manager of Customer Success and Partnerships at SparkCognition, a customer. Guys, welcome to theCUBE. Good to see you again, and welcome for the first time. >> Thank you. >> Thank you. >> Okay, so we're going to talk about some stories we did yesterday, but you've got the customer here. What's the relationship? Why are you guys here? >> We provide the storage platform; they use our flash technology. Spark is a professional software company. It's not a custom house; they are a software company. >> And Spark, not related to Spark open source. Just the name Spark: SparkCognition. Make sure to get that out of the way. Go ahead, continue. >> So they're a hot startup. They have a number of different use cases, including cybersecurity, real-time IoT, predictive analytics, and a whole bunch of other things. They deliver either through a service model or on premises. When it's in their service model, they use our flash and our Power servers. When it's on premises, they recommend the hardware the customer should use to optimize the software. They offer it both ways. But part of the reason we thought it would be interesting is that they're a professional software company. A lot of the people here, as you know, are regular developers, in-house developers. In this case, these guys are a well-funded VC startup that delivers software to the end user base. >> Tell us more about SparkCognition. Give us the highlights. >> Yeah, appreciate it. SparkCognition, we're a cognitive algorithms company. We do data science, machine learning, natural language processing, kind of the whole gamut there. We have three products: SparkPredict is our predictive analytics, predictive maintenance product; SparkSecure is our network log security product; and DeepArmor is a machine learning endpoint protection product. In that you kind of hear that we're in the industrial internet of things, the IIoT. In cybersecurity we've done other machine learning use cases as well, but predictive maintenance and cybersecurity are our two most advanced use cases and industrial areas. We've been around about three years, we have around 100 people, and, as Eric mentioned, we're well financed and our success really is budding thus far. We're happy to be here. >> John: Where are you guys located? >> We're based out of Austin, Texas. >> John: Another Austin company. >> Yeah, Austin, Texas. >> Dominant with Austin. >> It's always good to have financing. You can't go out of business if you don't run out of money. Talk about the industrial aspect. One of the things that is hot here, even if it's not mainstream (blockchain is the big announcement, but IoT is the big one), is industrial IoT. It's interesting because the digitization of business is now a big factor, and that transition is going to throw off massive amounts of data as analog processes become digital. So, analog to digital: what's going on there? What are you guys doing to help, and where does the storage fit in? >> Yeah, I appreciate that.
So IIoT, industrial IoT: there are obviously big clients there, and there's value in this information. For us, predictive maintenance is the big play. A study I read the other day by Boston Consulting Group talks about services and applications in the industrial internet of things being an $80 billion market over the next five years, with predictive maintenance leading the way as the most mature application. So we're happy to be riding the front of that wave, really pushing the state of the art. Predictive maintenance is valuable to clients because the idea is to predict failures and optimize resources: get more energy out of your wind farm, get more gas out of the ground, you name it. The whole goal is to reduce maintenance costs and extend the useful life of assets, so having software that can deliver those solutions to clients efficiently, without a lot of startup cost for each new iteration, is important. That's what SparkPredict, our product, has been laboring to do. We have a successful deployment across 1,100 turbines with Invenergy, the largest wind production company in the United States, and we're doing work with Duke, NextEra, and several other large electrical production companies, as well as oil and gas companies. In Austin we're near Houston, so we have a lot of energy production opportunity there. Predictive maintenance for us is a big play. >> So you guys did a session this week. You hosted a panel, is that right? I mean no offense, but what we're talking about now is really even more interesting than storage. It was a storage panel you were hosting, right? What was the conversation like? >> We had three software companies on the panel, SparkCognition and two others, plus a federal integrator, and all of them are doing cloud delivery. For example, one of the other software companies, Medicat, delivers medical record keeping as a service to hospitals. The others are doing predictive analytics and predictive maintenance, and also some cybersecurity. So there were three professional software companies and an integrator. In each case the issues were, A, we need to be up and running all the time, and the user doesn't know what storage we're using, but we can never fail because we're real time. In fact, one of the customers is the IRS: the federal integrator's IRS cloud runs on IBM storage. The entire IRS runs on their private cloud, which the integrator put together, on our storage. The idea was, if you've got a cloud deployment, there are two key things your storage has to do. A, it needs to be resilient as heck, because for these guys and the other two companies on the software side, if they cannot serve it as a service, then no one's going to buy the software, right? The software is the service, so it's critical that their own infrastructure be resilient. And the second thing: it needs to be fast. You've got to meet the SLAs. When you think about the systems integrator at the IRS, what do you think those SLAs are? And they've got something like 14 petabytes of all-flash. >> You forgot dirt cheap. You've got resilient as heck, lightning fast, and it's got to be dirt cheap, too. >> Well, of course. >> They want all three, right? >> You had this panel. So, what were Ginni's three?
Industrial ready, cloud based, and cognitive to the core. So, you guys, I'm on your website, and it's cognitive this, cognitive that; you're cognitive to the core. Presumably you're using industrial-ready infrastructure, and it's all cloud based, right? Talk about that a little bit, then I've got a follow-up. >> To tie into what Eric was saying about the premium hardware and the cloud opportunity: to do AI software, to build machine learning models, these are very intensive applications that require massive amounts of CPU, I/O, and fast storage. Getting the value out of that data quickly, so that it's useful and actionable, takes that premium hardware. That's why we've done testing with FlashSystem with our cybersecurity product. One of the most innovative things we did in the previous year was to move from a traditional architecture using x86-64, where we had a cluster of eight servers, down to one FlashSystem array, and we were able to get up to 20 times the performance doing things like analyzing, sorting, and ingesting data with our cybersecurity platform. So in that regard we're tied very closely to the FlashSystem product. That was a very successful use case; we offered a white paper on it, and if anyone wants to read more, it's available on the IBM website. >> Where do you find that, search it? >> Yeah, it's on IBM.com, and it's basically how they used it to deliver software as a service. >> What do I search? >> If you search "Sparkcognition IBM" you'll find it on Google. >> My other question, my follow-up, is about these IoT apps, which are distributed by their very nature. Can we talk about the data flow? What are you seeing in terms of where the data flows? Everybody wants to instrument the windmill; you've got to connect it, then you've got to instrument it. Where's the data going? You're doing analytics locally, you're sending data back. What are you seeing in the client base? >> Yeah, that's a great question. For the in-the-field use cases, the wind turbines for example, most of our clients already have a data storage solution; we're not a data storage provider. Someone asked me yesterday, in a different conversation, why wind turbines are so ripe for the picking. It's because they're relatively modern assets: they were built with the sensors on board, and they've been collecting this data since the invention of the modern wind turbine. Generally it's sent in from the field at 10-minute intervals and stored in some sort of large data center. For our purposes, though, we collect a feed off that data of the important information, run our models, and store a small data set, a few months' worth, whatever we think we need to train that machine learning model and to retrain and balance it. That's an example of doing the analysis in a data center, or in the cloud, away from the field. The other approach is an edge analytics approach; you might have heard that term. That's usually for smaller devices, where the value of the asset doesn't justify the infrastructure to relay the information and deploy a large-scale solution. So we're actually developing an edge analytics version of our product as well, working with a company called Flowserve, the world's largest pump manufacturing company.
The idea is to ask how we can add some intelligence to these pumps that may operate near a pipeline or out in the oil field, and make those machines smarter even though they don't necessarily justify the robust IT infrastructure of a full wind turbine fleet. >> Is there a best practice that you guys see in terms of the storage? You bring up the edge and the network, and that's a great point: there's a lot of diversity at the edge now, from industrial to people. But the data's got to be stored somewhere. Is there a best practice, a pattern you're seeing in how people approach the data problem and apply algorithms to it? Do I move the data? Do I push the compute to the data? What are you seeing in terms of best practices? >> One of the other companies that was on the panel also does predictive modeling. They take 600 different feeds in real time and munge them, mostly for industrial markets, mostly for goods: the raw goods needed to make a machine, or a table, or the paper that's used behind us, or the lights that are used here. They look at all those commodities and then feed the analysis out to the companies that build these products. For them it has to be real time, so they need storage that's incredibly fast. They're running on super powerful CPUs loaded with DRAM, but you can only put so much DRAM in a server, so they're building giant clusters to analyze all this data, and everything else sits on flash. Then they push that out to their customers. It's a slightly different model from what SparkCognition does, but similar: they're taking 600 constant data sources in real time, 24 by seven, 365, and feeding the results back out to manufacturing companies looking to buy all these commodities. >> You have "software defined" in your title. That was kind of the big buzzword a few years ago; everybody wanted to replicate the public cloud on prem. We think of it as programmable infrastructure, right? Set it up, and then you can start making API calls and setting SLAs and thresholds, etc. Where are we at with software defined? Does it resonate with you, or is it just an industry buzzword? I'll start with Eric. >> For us, we're the largest provider of software defined storage in the world: hundreds and hundreds of millions of dollars every year. We don't sell any infrastructure; we just sell the raw software, and customers use commodity infrastructure, whatever they want (hard drives, flash drives, CPUs, anything they buy from their local reseller), and then create high-performance arrays using that software. So they build it on their own. Everything is built around automation, so we can automatically replicate data, snapshot data, migrate data around from box to box, and move it from on-premises to a cloud through what we call transparent cloud tiering. All of that in the software defined storage is based on automation. The software defined storage allows them, if you will, to build their own version of our FlashSystem by buying just the raw software and buying flash from someone else, which is okay with us, because the real value is in the software, obviously, as you know. That lets them create infrastructure of their own with the right kind of software: they're not home-brewing the software, and it's all built around automation.
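As a purely illustrative aside, the automation Eric describes can be pictured as policy code deciding where data should live. This is a hedged sketch of the idea behind transparent cloud tiering (placement by access recency), not IBM's actual API; the seven-day window is an invented example:

```python
from datetime import datetime, timedelta

HOT_WINDOW = timedelta(days=7)  # hypothetical policy: recently touched data stays on flash

def placement(last_access: datetime, now: datetime) -> str:
    """Return the tier a piece of data should live on under this toy policy."""
    return "flash" if now - last_access < HOT_WINDOW else "cloud-object-tier"

now = datetime.utcnow()
print(placement(now - timedelta(days=2), now))    # flash
print(placement(now - timedelta(days=30), now))   # cloud-object-tier
```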
That's what we're seeing in the software defined space across a number of different industries, whether it be cloud providers or banks. We have all kinds of banks that use our software defined storage and don't buy the actual underlying storage from us, just the storage software. >> You may not have visibility into this, but getting kind of geeky on it: do you guys adopt that sort of software defined mentality in your approach? >> Yeah, for us, software defined storage is something we've deployed for our proof-of-concept evaluations. The nature of our work is that the solution is innovative to the point where everyone needs some sort of proof point for themselves before the client will invest at large scale. Software defined storage, and embracing that perspective, has allowed us to deploy small-scale implementations without having our own dedicated hardware at different clients. It's enabled us to spin up an instance quickly, provision that small-scale deployment, and prove out results at a low cost to our client. That's where we really leverage that approach. We've also used a similar approach in the cloud, where we've used multi-tenant environments to support our cybersecurity product, SparkSecure, in a multi-tenant cloud-hosted environment, which brings down delivery costs as well; it allows us to slice up that data and deliver it at a low cost. As for our large-scale physical deployments for asset monitoring and such, we generally end up with a FlashSystem or flash storage, bare-metal deployment, because that speed is critical, whether the client wants instant monitoring of a critical asset or has a financial services use case where we're looking for anomalies, or we're looking for threats in the cybersecurity landscape. Having that real-time model building and model scoring is very critical, so a bare-metal FlashSystem type installation is kind of our preferred route. The only other thing I would say is that you asked earlier about our approach: for us, the security of the data is very important. Most of our assets are what are called critical assets, so clients are very sensitive to the security of the data, and some are still uncomfortable with a cloud deployment. That's another reason we have an affinity for the hardware deployment with IBM. >> Why IBM? >> Our company has really deep roots with IBM. My founder, Amir Husain, was actually on the board of directors of the original IBM Watson project, and Manoj Saxena was the original GM of the IBM Watson program. We have a long relationship with IBM and a lot of mutual interest and respect for the company. We've also found that the products are superior in many ways. We are hardware agnostic, and we're an independent advisor to our clients when it comes to how to deliver our solutions, but our professional opinion, based on the testing we've done, is that IBM is a top-tier option. So we continue to prescribe it to our clients, and when they feel it's appropriate, they make that purchase through IBM. >> Great testimonial. Eric, excited to hear that nice testimonial for you guys? Congratulations. >> He's done several panels with us, and again, part of the reason for being here was, A, it's all about IoT, which they're all into, and all about cognitive, which they're all into. And to show that you can do a software-as-a-service model even in-house.
They happen to be a professional software company, but if you're a giant global enterprise, you may actually deliver software as a service to your remote branch offices, which is very similar to what these guys do for other companies. This gives in-house developers an example (the other two software companies did the same) showing that if you're going to have a private cloud rather than go public, you can deliver software as a service internally to your own company through the dev model. Or you can use someone like SparkCognition or Medicat or the other companies we showed, like Z-Power, all of which use us to deliver their software as a service with IBM flash technology. >> Dave: And you're using Watson or Watson Analytics? >> Yes, we have done integrations with Watson for our cybersecurity product. We've also done integrations with Watson rank-and-retrieve, using the NLP capabilities to advise analysts both in the Predict space and in the Secure space: sort of an advisor, so that a client user who sees something happening on a turbine can ask what it means, using a Watson corpus. I was going to add one thing, since we were talking about "why IBM?": IBM really has been a leader in the space of cognitive computing, and they've invested in nurturing small companies and bringing up entrepreneurs in that space to build it out. We appreciate that, and I think it's important to mention. >> All right, Mark, thanks so much for joining in; great testimonial, great insight. Good luck with your business, and congratulations on the successful startup taking names and kicking butt. Eric, great to see you again; thanks for the insight, and congratulations on great, happy customers. See you again. Okay, we're watching theCUBE live here at Interconnect 2017. More great coverage, stay with us. There will be more after this short break. (upbeat instrumental music)
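To make the predictive-maintenance flow Mark described earlier a bit more concrete (10-minute sensor readings, a model trained on a few months of history, then scoring of fresh data), here is a minimal hedged sketch. The channel names and the use of scikit-learn's IsolationForest are illustrative assumptions, not SparkPredict's actual implementation:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical turbine channels taken off the client's 10-minute data feed.
CHANNELS = ["gearbox_temp", "bearing_vibration", "rotor_rpm", "power_kw"]

def train_baseline(history: pd.DataFrame) -> IsolationForest:
    """Fit an anomaly model on a few months of healthy 10-minute readings."""
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(history[CHANNELS])
    return model

def score_latest(model: IsolationForest, latest: pd.DataFrame) -> pd.Series:
    """Lower scores are more anomalous; flag candidates for maintenance review."""
    return pd.Series(model.decision_function(latest[CHANNELS]), index=latest.index)
```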

Published Date : Mar 22 2017
