Brian Biles, Datrium & Benjamin Craig, Northrim Bank - #VMworld - #theCUBE


 

>> Live from the Mandalay Bay Convention Center in Las Vegas, it's theCUBE, covering VMworld 2016, brought to you by VMware and its ecosystem sponsors. Now here's your host, Stu Miniman. >> Welcome back to theCUBE. I'm Stu Miniman, here with my co-host for this segment, Mark Farley, and we're at VMworld 2016 here in Las Vegas. It's been five years since we've been in Vegas, and a lot has changed in five years. The keynote this morning was talking about five years from now: they expect that to be about the crossover point where public cloud becomes the majority. From our research, we think that flash capacities will really outstrip traditional hard disk drives within five years from now. So I have two guests for this segment. Brian Biles is the CEO of Datrium. It's been a year since we had you on, when you came out of stealth, and I'm really excited because you brought your customer along. We love having customers on, down from Alaska, you know, within sight of Russia, maybe. And Ben Craig, who's the CIO of Northrim Bank. Thank you so much for coming. All right, so we want to talk a lot to you, Ben, but real quick, Brian, why don't you give us the update on the company? What's happened in the last year, where are you with the product and customer deployments? >> Sure. Last year when we talked, Datrium was just coming out of stealth mode, so we were introducing the notion of what we're doing. Starting in about mid-Q1 of this year, we started shipping and deploying. Thankfully, one of our first customers was Ben. And our model of, sort of, convergence is different from anything else that you'll see at VMworld. I think hearing Ben tell about his experience, his deployment philosophy, and what changed for him is probably the best way to understand what we do. >> All right, Ben, that's a great lead-in. Let's start with, first, can you tell us a little bit about Northrim Bank?
How many locations do you have, what's your role there, how long have you been there? A quick synopsis. >> Sure. We're a growing bank, one of three publicly traded, publicly held companies in the state of Alaska. We recently acquired Residential Mortgage, after acquiring Alaska Pacific Bank. And so we have locations all the way from Fairbanks, Alaska, where it gets down to negative 50, negative 60 below Fahrenheit, down to Bellevue, Washington. And to be perfectly candid, what's helped propel some of that growth has been our virtual infrastructure and our virtual desktop infrastructure, which is predicated on us being able to grow our storage, which ties directly into what we've got going on with Datrium. >> That's great. Can you talk to what you were using before, and what led you to Datrium? You know, going with a startup is a little risky, right? I thought CIOs don't buy on risk. >> Well, as a very conservative bank that serves a commercial market, risk is not something that we buy into a lot. But it's also what propels some of our best customers to grow with us. And in this case, we had a lot of faith in the people that joined the company from an early start. I personally knew a lot of the team, from sales, from engineering, from leadership, and that got us interested. Once we got the hook, we learned about the technology and found out that it was really, I dare say, the unicorn of storage that we'd been looking for. And the reason is because we came from array-based systems, and we had the same evolution that a lot of customers did. We started out with a nice, cozy EqualLogic system. We evolved into a Nimble solution, the hybrid era of arrays, if you will. And we found that as we grew, we ran into scalability problems. As soon as we started tackling VDI, we found that we immediately needed to segregate our workloads.
Obviously, because servers and production VDI have completely different read/write profiles. As we started looking at some of the limitations as we grew our VDI infrastructure, we had to consider upgrading all of our processors, all of our solid state drives, all of the things that helped make that hybrid array support our VDI infrastructure, and it's costly. And so we did that once, and then we grew again, because VDI was so darn popular within our organization. At that time, we caught wind of what was going on with Datrium, and it totally turned the paradigm on its head for what we were looking for. >> How did it? Let me tee that up: how did the Datrium solution impact the, you talked about the read/write balance, what was it about the Datrium solution that solved the read/write balance for you? >> When we ran out of capacity with our EqualLogic, we had to go out and buy a whole new member. When we ran out of capacity with our Nimble, we had to go out and buy a whole new controller. When we run out of capacity with the Datrium solution, we can literally go out and get commoditized solid state drives, add one more into our local storage, and literally improve our performance by a multiplier. That's huge. So the big difference between Datrium and, these are my words, I'm probably going to screw this up, Brian, so feel free to jump in, but in my opinion Datrium starts out with a really good storage area network appliance, and then they basically take away all of your interfaces to it and stick it out on the network for durable writes. Then they move all of the logic, all of the compression, all of the deduplication, even the RAID calculations, onto software that I call a hyperdriver that runs at the hypervisor level on each host.
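The split Ben describes, host-side data reduction and caching in front of a shared appliance that only has to make writes durable, can be sketched as a toy model. This is purely illustrative, not Datrium's actual implementation: the class names, the SHA-256 fingerprinting, and the zlib compression are all stand-ins for the general idea.

```python
import hashlib
import zlib

class DurableRepository:
    """Stands in for the shared network appliance where all writes land."""
    def __init__(self):
        self.blocks = {}                     # fingerprint -> compressed block

    def persist(self, fingerprint, compressed):
        # Content-addressed storage: identical blocks are kept only once,
        # which is where the deduplication savings come from.
        self.blocks.setdefault(fingerprint, compressed)

class HostDriver:
    """Stands in for the per-host 'hyperdriver' layer Ben describes."""
    def __init__(self, repo):
        self.repo = repo
        self.cache = {}                      # address -> data, on host flash

    def write(self, address, data):
        # Reduction (fingerprint + compress) is done by the host's own CPUs,
        # and the write is persisted off-host before it is acknowledged.
        fp = hashlib.sha256(data).hexdigest()
        self.repo.persist(fp, zlib.compress(data))
        self.cache[address] = data           # keep a local copy for reads

    def read(self, address):
        # Reads are served from host-local flash on a hit and never cross
        # the network; a miss would fall back to the repository.
        return self.cache[address]

repo = DurableRepository()
host = HostDriver(repo)
host.write(0, b"hello" * 100)
host.write(1, b"hello" * 100)                # duplicate data...
assert host.read(0) == b"hello" * 100
assert len(repo.blocks) == 1                 # ...stored once in the repository
```

The design point the model captures is that the controller is no longer in the read path at all, which is why adding host flash scales read performance without touching the appliance.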
So instead of being bound by the controller doing all the heavy lifting, you now have it being done by a few extra processors and a few extra gigs of memory out on the servers. That puts the data as close as humanly possible, which is what hyperconvergence does. But it also has this very durable back end that ensures that your writes are protected. So instead of having to span my storage across all of my hosts, I still have all the best parts of a durable SAN and all the best parts of high performance, by bringing the data closer to the host. That's why Datrium enabled us to grow our VDI infrastructure literally overnight. Whenever we ran out of performance, we just pop in another drive and go, and the performance is insane. We just finished writing a 72-page white paper for VMware, where we did our own benchmarking using Iometer sprayers, using our secondary data center resources, because they were, frankly, somewhat stagnant, and we knew we'd be able to run the most level test possible. And we found that we were getting insane amounts of performance, insane amounts of compression. And I can quantify that: we're getting 132,000 IOPS at a little bit over a gig a second, running at 2.94 milliseconds of latency. That's huge. And the things that we always used to compare when it came to performance were IOPS and throughput. Whenever we talked to any storage vendor, that's what they were always comparing. But we never talked about latency, because latency was really network-bound, and the storage vendor couldn't do anything about that. By bringing the brains closer to the hosts, it solves that problem. And so now our latency, which was about 25 milliseconds using a completely unused Nimble storage SAN, is 2.94 milliseconds. What that translated into was about a 3x performance increase. So when we went from EqualLogic to Nimble, we saw a multiplier. Then we went from Nimble to Datrium.
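The benchmark figures quoted above can be sanity-checked with quick arithmetic. Taking the throughput as roughly 1 GB/s ("a little bit over a gig a sec"), the implied average I/O size comes out near the common ~8 KB block size, and the quoted latency drop is roughly 8.5x:

```python
# Sanity-check the quoted benchmark numbers; throughput is an assumption
# (~1 GB/s), so the implied I/O size is approximate.
iops = 132_000
throughput = 1e9                      # bytes per second, assumed
io_size = throughput / iops           # average bytes per I/O
print(f"implied I/O size: ~{io_size / 1024:.1f} KiB")     # ~7.4 KiB

old_latency_ms = 25.0                 # Nimble array, as quoted
new_latency_ms = 2.94                 # Datrium, as quoted
print(f"latency improvement: ~{old_latency_ms / new_latency_ms:.1f}x")
```

The ~8.5x latency figure is larger than the "about 3x" overall speedup Ben cites, which is consistent with real workloads being only partly latency-bound.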
We saw a 3x multiplier, and that translated directly into me being able to send our night processors home earlier, which means less FTE, larger maintenance windows, faster performance for all of our branches. So, I went on for a little bit there, but that's what Datrium's done for us. >> Right, and just to amplify that: part of the approach Datrium is taking is to assume that host memory of some kind or another, flash for now, is going to become so big and so cheap that at some point reads will just never leave the host. And we're trying to make that point today. So we've increased our host density since last year, for example, to 16 terabytes of flash per host, raw; with inline dedupe and compression, that could be 50 to 100 terabytes. So we have customers doing fairly big data warehouse operations where the reads never leave the host. It's all host-flash latency, and they can go from an eight-hour job to a one-hour job. And in our model, we sell a system that includes a protected repository where the writes go. That's on a 10-gig network. You buy hosts that have flash that you provision from your server vendor. We don't charge extra for the software that we load on the host that does all the heavy lifting: it does the RAID, compression, dedupe, cloning, what have you, and it does all the local caching. So we encourage people to put as much flash in as many hosts as possible against that repository, and we make it financially attractive to do that. >> So how is the storage provisioned? Is it LUNs, or how? >> So, it all shows up, and this is one of the other big parts that is awesome for us, as one gigantic NFS datastore. Now, it doesn't actually use NFS, it just presents that way to VMware. But previously we had about 34 different volumes.
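Brian's host-density claim is easy to check with arithmetic: landing 16 TB of raw flash at 50 to 100 TB effective implies a combined dedupe-plus-compression ratio of roughly 3x to 6x.

```python
# Back out the data-reduction ratio implied by the quoted host density.
raw_tb = 16
for effective_tb in (50, 100):
    ratio = effective_tb / raw_tb
    print(f"{raw_tb} TB raw -> {effective_tb} TB effective: {ratio:.2f}x reduction")
```

A 3x to 6x range is the kind of reduction typically claimed for mixed virtualization workloads, and it is the lever that makes "put as much flash in the host as possible" financially attractive.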
And like everybody else on the planet who thin-provisions, we had to leave a buffer zone, because we'd have developers that would put a VMware snapshot on something for patches, then forget about it, fill up the volume, the volume goes offline, panic ensues. So imagine that 30 to 40% of buffer space times each one of those different volumes. Now we have one gigantic volume, and each VM has its performance and all of its protection managed individually, at the VM level. And that's huge, because no longer do you have to set protection and performance at the volume level; you can set it right on the VM. >> So you don't even see storage. >> You don't ever have to log into the appliance at all. >> So serverless, storage-less, rather, this is what we're hearing. >> It's all through vSphere. >> And because all the writes go off-host, the writes don't interrupt each other, and the hosts don't interrupt each other. We actually go to a lot of lengths to make sure that happens. So there's isolation, host to host. That means if you want to provision a particular host for a particular set of demands, you can. You could have VDI next door to a data warehouse, and the level of intensity doesn't matter to each other. So it's very specifically enforceable, by host configuration or by managing the VM itself, just as you would do with VMware. >> It gives a lot more flexibility than we would typically get with a hyperconverged solution that has very static growth and performance requirements. >> So when you talk about hyperconvergence, the number one, number two, and number three things that we usually talk about are simplicity. So you're a pretty technical guy, you obviously understand this well. Can you speak to, beyond the EqualLogic and Nimble and how you scaled those, how's the day-to-day experience? How's the ongoing operation, how much do you have to test and tweak and adjust things, and how much does it just work?
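The thin-provisioning headroom Ben describes compounds quickly. The numbers below are hypothetical, the interview quotes 34 volumes and a 30 to 40% per-volume buffer but not volume sizes, so 1 TB per volume and a 10% pooled buffer are assumptions for illustration:

```python
# Hypothetical sizes: 34 volumes of ~1 TB each (the interview gives no sizes).
volumes = 34
volume_tb = 1.0
per_volume_headroom = 0.35   # midpoint of the quoted 30-40% buffer
pooled_headroom = 0.10       # assumed headroom for one shared datastore

stranded_per_volume = volumes * volume_tb * per_volume_headroom
stranded_pooled = volumes * volume_tb * pooled_headroom
print(f"headroom reserved across {volumes} volumes: {stranded_per_volume:.1f} TB")
print(f"headroom reserved in one pooled datastore: {stranded_pooled:.1f} TB")
```

The point is the pooling effect: one big datastore shares a single safety margin, whereas per-volume buffers strand free space in every volume that never gets full.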
>> Well, this is one of the reasons that we went with Datrium as well. When it comes down to it, with a hyperconverged solution you're spanning all of your storage across your hosts, right? We're trying to make use of those resources, but we just recently had one of our servers down, because it had a problem with its BIOS, for a little over 10 days of troubleshooting; it just didn't want to stay up. If we were in a full hyperconverged infrastructure and that was part of the cluster, that means our data would have had to be migrated off of that host as well, which is kind of a big deal. I love the idea of having a rock solid, purpose-built, highly available device that makes sure that my writes are there for me, but allows me to have the elastic configuration that I need on my hosts, to be able to grow them as I see fit, and also to work directly with my vendors to get the price points that I need for each of my resources. So for our Oracle servers, Exchange servers, SQL servers, we can put in some NVMe drives and it'll scream like a scalded dog. And for all of our file and print servers, IT monitoring servers, we can go with some Samsung 850 EVO drives, pop them in a couple of empty bays, and we're still able to crank out the number of IOPS that we need between those, at a very low cost point, but with a maximum amount of protection on that data. So that was a big selling point. >> Are you using both NVMe and block? >> We're actually going through a server refresh right now; it's all part of the white paper that we just did. We decided to start with internal NVMe drives, two 2-terabyte internal PCIe cards, and then we have 2.5-inch NVMe-ready bays on the front load. But we also plumbed it to be able to use solid state drives, so that we have the flexibility in the future to use those servers as we see fit.
So again, a very elastic architecture, and it allows us to be in control of what performance is assigned to each individual host. >> So what apps beyond VDI do you expect to use this for? Are you already deploying it further? >> VDI is our biggest consumer of resources; our users have come to expect that instant access to all of their applications. Eventually we have the ability to move the entire data center onto the Datrium, and so one of the things that we're currently completing this year is the rollout of VDI to the remaining 40% of our branches; 60% of them are already running VDI. And then after that, we're probably going to end up taking our core servers and migrating them over, and, kind of through attrition, using some of our older array-based technology for test and dev. >> All right, so I can't let you go without asking you a bit about your relationship with VMware. How's VMware meeting your needs? Is there anything from VMware or the storage ecosystem around them that would make your job easier? >> Yes. If they got rid of the vSphere Web Client, that would be great. I am not a fan of the vSphere Web Client at all, and I wish they'd bring back the C# client. I'd like to get that on the record, because I try to every single chance I get. No, the truth is, the integration between Datrium and VMware is super tight. It's something I don't have to think about. It makes it easy for me to do my job, and at the end of the day, that's what we're looking for. So I think the biggest thing that a lot of the constituents of the Anchorage VMware User Group, I'm the leader of said group, are looking for is stability in product releases, and making sure that there's more attention given to QA on some of the recent updates to the hypervisor and the Web Client. >> Brian, I'll give you the final word: takeaways that you want people to know about your company, your customers, coming out
>> of VMworld. We're thrilled to be here for the second year, thrilled to be here with Ben. It's a great, exciting period for us. As a vendor, we're just moving into sort of nationwide deployment. So check us out here at the show; if you're not here, check us out on the web. There's a lot of exciting things happening in convergence in general, and Datrium is leading the way in a couple of interesting ways. >> All right, Brian and Ben, thank you so much for joining us. You know, I don't think we've done a CUBE segment in Alaska yet, so maybe we'll have to talk to you off camera about that. >> Recommended. >> All right, we'll be back with lots more coverage here from VMworld 2016. Thanks for watching theCUBE. >> You're good at this. >> Oh, you're good.

Published Date : Aug 30 2016

