Tech Roots » Daniel Sands

Controlling Costs in a Cloudy Environment

From an engineering and development standpoint, one of the most important aspects of cloud infrastructure is the concept of unlimited resources. The idea of being able to get a new server to experiment with, or to spin up more servers on the fly to handle a traffic spike, is a foundational benefit of cloud architectures. This is handled in a variety of ways by different cloud providers, but there is one thing they all have in common:

Capacity costs money. The more capacity you use, the more it costs.

So how do we provide unlimited resources to our development and operations groups without it costing us an arm and a leg? The answer is remarkably simple: visibility is the key to controlling costs on cloud platforms. Team leads and managers with visibility into how much their cloud-based resources are costing them can make intelligent decisions about their own budgets. Without decent visibility into the costs involved in a project, overruns are inevitable.

This kind of cost tracking and analysis has been the bane of accounting groups for years, but several projects have cropped up to tackle the problem. Projects like Netflix ICE provide open source tools for tracking costs in public cloud environments. Private cloud architectures are starting to catch up to public clouds with projects like Ceilometer in OpenStack, though determining accurate costs there can be trickier because of the variables involved in a custom internal architecture.
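
To make that concrete, here is a minimal sketch of the kind of per-team rollup these tools produce. It assumes a detailed billing export with a "user:team" tag column and a "Cost" column; the file and column names are illustrative, not a real ICE or Ceilometer format.

require 'csv'

# Sum the cost of every billing line item by its team tag.
costs = Hash.new(0.0)
CSV.foreach('detailed_billing.csv', headers: true) do |row|
  team = row['user:team'] || 'untagged'
  costs[team] += row['Cost'].to_f
end

# Print the most expensive teams first.
costs.sort_by { |_, cost| -cost }.each do |team, cost|
  puts format('%-20s $%.2f', team, cost)
end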

The most important thing in managing costs of any nature is to realistically know what the costs are. Without this vital information, effectively managing the costs associated with infrastructure overhead can be nearly impossible.

Dealing with Your Team’s Bell Curve

I recently came across this article on the Intuit QuickBase blog and was intrigued by the premise. It asserts that inside any team or organization you will have a bell curve of talent and intelligence, a premise most would agree with. It’s not a bad thing; it just happens. Regardless of how well staffed you are or how many experts you recruit, there will always be someone who stands out above the rest and someone who lags behind. Lagging behind is, in this case, a relative matter; the so-called lagging individual may in fact be producing brilliant work. This curve seems to exist naturally.

While the article discusses how groups respond to their weakest members, my interest was instead piqued by another thought: how do we each perceive ourselves within the group? From where I am standing, where do I think I am on the bell curve? On my own team, I know of individuals who downplay their own value, verbally expressing that others contribute more, have a better response time, or whatever other criteria you wish to judge on. That perspective can actually be quite dangerous, as someone of great value may view themselves as insufficient. On the other hand, someone who views themselves as a rock star may be all flash and no substance.

More than anything, the concept triggered an awareness of my own team and helped me think a little more about those around me and be more sensitive to issues and circumstances I might not otherwise have considered. All in all, a good read if you have a few minutes.

I’ll echo the author’s question from the end of her article: how has the bell curve on your team affected business culture and team efficacy?

Making Chef Dynamic

We’ve been working with Chef (formerly Opscode) for a couple of years now. I can safely say that I’ve spent more time crawling through forums and reviewing code documentation than I care to admit, and in all that time I’ve never seen anyone clearly document how to use Chef dynamically. I hope I can share some cool ideas in this post, and maybe somebody out there can make some magic happen similar to what we’ve done.

For those of you who may not know, Chef is a framework that makes system configuration ridiculously easy. Its terminology is a little strange because it follows a cooking analogy (hence Chef, get it?). A recipe is essentially a script, or some form of code that yields a result. A cookbook is a collection of recipes. The command-line tool you use to interact with the Chef server is called knife. The analogy only stretches so far, though, and soon you get into more practical terms like node, role, and environment.

I mentioned using Chef dynamically, so here’s the problem. You build a role with some recipes and assign it to a node. The node then takes the role, downloads the recipes, and acts on them. The result is a configured server. It’s very cool, but what happens when you have hundreds of services iterating through hundreds of code versions, each of which configures its servers slightly differently? Managing hundreds of roles under version control can quickly become a nightmare. The trick is to handle all of the code dynamically and in a single maintainable place: you have one master dynamic recipe that is capable of reading in a configuration file and acting on the settings inside. Then you take the configuration file and push it up into a data bag (Chef’s generalized data storage system).

For example:

The data bag:

{
  "id": "my_stack_1_0_0_0",
  "install_apache": true,
  "apache_settings": {
    "setting": "blah",
    "more_settings": "blah"
  }
}

In the recipe:

# Load the data bag item for this stack; the item id is the versioned id
# from the data bag above (e.g. "my_stack_1_0_0_0").
my_data_bag = Chef::DataBagItem.load(my_data_bag_name, my_data_bag_version)

if my_data_bag['install_apache'] == true
  # Do everything you need to install Apache,
  # and use my_data_bag['apache_settings'] to do it.
end

That’s a really rough, really quick example, but it covers the gist of the idea. You can even use off-the-shelf cookbooks by taking the settings specified in the data bag and assigning them to the node as attributes, which causes them to take precedence over cookbook default values.
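
To give a rough, hedged idea of that attribute trick (the apache2 community cookbook and its attribute names here are assumptions for illustration, not our production recipe):

# Copy the data bag settings onto the node as override attributes so an
# off-the-shelf cookbook picks them up instead of its own defaults.
# Assumes the apache2 cookbook is a declared dependency of this cookbook.
if my_data_bag['install_apache'] == true
  my_data_bag['apache_settings'].each do |key, value|
    node.override['apache'][key] = value
  end
  include_recipe 'apache2'
end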

We’ve had the discussion about just using role files instead of data bags a number of times. Using roles is the standard approach for everybody out there, so why don’t we go that way? The answer is version control. Data bags can be versioned. When you’re dealing with individual specifics for each stack, versioning the configuration data allows you to tie configuration directly to a version of code. This allows for dependency matching, as well as a myriad of other niceties you discover once you get into it. The data bag approach also lets you easily roll back a configuration, meaning that if somebody uploads a bad configuration and screws up their stack, we can just roll back to a previous data bag item with no sweat. It adds a layer of complexity to the code, so this isn’t an efficient answer for everything out there, but for configurations that change often, it makes server change control as easy as checking in a file.
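
Here is a hedged sketch of how that versioning plays out in practice; the bag name, node attribute names, and id convention below are assumptions, not our exact implementation:

# The data bag item id ties a configuration to a code version, so rolling
# back a bad configuration is just pointing the node at the previous item.
stack        = node['deployment']['stack']     # e.g. "my_stack"
code_version = node['deployment']['version']   # e.g. "1_0_0_0"

config = Chef::DataBagItem.load('stack_configs', "#{stack}_#{code_version}")
# To roll back, redeploy the previous version; the matching data bag item
# (say "my_stack_0_9_9_0") comes along with it.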

To make it even nicer, build a UI that creates the configuration file for you. This will allow your development teams to just click a couple boxes to indicate their desired settings, click a button, and have a pile of fully configured boxes ready to take live traffic. Just like magic.
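
For a sense of what might sit behind that button, here is a minimal sketch; the bag name, file name, and settings are illustrative assumptions:

require 'json'

# The UI back end writes out the versioned configuration the team selected...
config = {
  'id'              => 'my_stack_1_0_1_0',
  'install_apache'  => true,
  'apache_settings' => { 'setting' => 'blah', 'more_settings' => 'blah' }
}
File.write('my_stack_1_0_1_0.json', JSON.pretty_generate(config))

# ...and pushes it to the Chef server, for example with:
#   knife data bag from file stack_configs my_stack_1_0_1_0.json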

Foundations for a Platform: Infrastructure Worthy of DevOps

In the IT industry today, virtualization is one of the hottest buzzwords around. It seems like everyone uses, supports, provides tools for, and/or sells something that has “virtual” in its name. Virtualization is a fun concept with a lot of interesting ideas floating around it, but at its core it is nothing more than software pretending to be hardware. That software, usually referred to as the hypervisor, can improve server density, lower overall power consumption, make management easier, and decrease total cost of ownership. A bad hypervisor does the opposite and can make you curse, scream, and sometimes even cry as you watch your data center crumble around you.

Here are a few tips to think about when building a virtualized infrastructure:

Start with a clear vision

What are you trying to accomplish? Before anything else, you need to at least attempt to spell out what you’re going for. Several possible aims, a few of which I mentioned above, should be clear before you get started. Anyone with more than an internship’s worth of experience knows that IT projects are typically moving targets. Many of us give up on attempting initial specs at one point or another, but the fact remains that if you don’t know where you want to get to, you’ll never get there.

Consider the payloads you plan on virtualizing

IIS and Apache payloads require very different hypervisor and hardware specs from a database payload. If you’re looking to increase server density, then you need to pick a hypervisor that can handle high densities. Over-subscription can be applied to make better use of the available hardware resources, but over-subscription planning is as much an art as a science. Or, if you’re looking for a fully automated infrastructure run completely through APIs (in my mind, an absolute necessity for DevOps infrastructure), then you need to make sure your hypervisor can handle that from inception.
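
As a rough, back-of-the-envelope illustration of over-subscription planning (the host specs and ratios below are assumptions for illustration, not recommendations):

# CPU capacity planning for one hypervisor host.
physical_cores = 32   # cores on the host
cpu_ratio      = 4    # 4:1 vCPU over-subscription for light web payloads
vcpus_per_vm   = 2

vms_per_host = (physical_cores * cpu_ratio) / vcpus_per_vm
puts "Roughly #{vms_per_host} web VMs per host"   # => 64

# A database payload might only tolerate 1:1, dropping the same host to 16 VMs.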

Test, test, test

One of the toughest lessons to learn the hard way is to always, always test for what you will actually be using the virtualized servers for, and, whatever you do, never take the salesperson’s word for it. If you were to review the marketing material for the top five hypervisors out there, you would be hard-pressed to tell me the differences on paper. To make matters more difficult, even when a feature is supported on paper, some marketing teams have a habit of adding a little asterisk next to key features indicating that it costs extra, or is only available in the next patch (or the next release). The only way to find out is to test the hypervisor exhaustively.

Consider mixed-vendor solutions

It is also worthwhile to try out multiple hypervisors for your environment, especially if your environment has multiple payload types. In some circumstances, a mixed vendor infrastructure can have huge benefits for the user base, but you’ll never know unless you try.

At the end of the day, a well-designed infrastructure can make or break an organization. It doesn’t matter how good your product is, or how beautifully your marketing campaign is working, if your infrastructure can’t handle what you ask of it.  If you design the platform on top of your infrastructure correctly, your user base will only ever see a smoothly operating platform, and never think about the underlying infrastructure.

DevOps: Finding the right place for a new idea

For the last year and a half, we’ve been breaking in a new concept at Ancestry.com called a DevOps engineer. There is a ton of material on the internet about what DevOps means to various groups and how they’ve implemented it. A lot of it revolves around Scrum, Agile processes, and other approaches to increasing productivity within a team, but the underlying premises are significantly different. As the name implies, DevOps is the combination of development (dev) and operations (ops). That statement alone conjures up all kinds of interesting images. Is it a developer who occasionally racks servers? Is it a network engineer who writes hyper-intelligent scripts? Is it an installation tech who manages to automate the entire installation process? Or is it simply someone who can translate why a development team suddenly needs twice the processing power because a new version of a software platform was released with a great new feature set? There are any number of combinations that could result from joining development and operations.

As a DevOps team, our vision was to provide development with an API for operations. We quickly discovered that what mattered wasn’t necessarily how we defined ourselves, but how other groups in the organization viewed us. Long story short, development thought we were operations, and operations thought we were development. At least at first.

From the developers’ perspective, we were an ops team that was there to help developers. At first we received all kinds of interesting requests. It seemed like development hoped that we’d be the avenue to deliver all the things that operations kept promising but never got around to actively delivering. As a newly formed team, we liked the feeling of being useful and productive and tried to facilitate as many requests as possible, but we quickly found ourselves overwhelmed. After all, if an entire ops organization couldn’t manage to accomplish the laundry list of developer requests, why would a single team that was still getting its bearings be able to?

Once we realized that we were getting nowhere fast, we decided to try facilitating the needs of the developers instead of the actual requests. That isn’t to say that the developers didn’t know what they wanted, but rather that they may not have been aware of other avenues that could make life easier for everyone. The number one complaint from developers was that turn-around times for hardware were way too long. Lowering server delivery times was going to be an easy win, as the company had already begun experimenting with virtualization, and it was a short leap to go from experimentation to implementation. In a matter of a few months, we managed to drop the time for server delivery from months to days. Ops was able to monitor the virtual host load to know when more hardware was required and order proactively. Developers got their servers much faster, and productivity increased as a result. Win-win, right? Except that DevOps suddenly became the man in the middle for every server configuration situation imaginable.

That kicked off our journey to create self-service tools for all the manual yet very automatable processes: everything from VM creation and configuration to deploying code and monitoring. These tasks, while individually insignificant, added up to a ton of time that the developers didn’t want to spend on operational concerns. They wanted to develop, and we were more than happy to facilitate. Once they realized that we were working in their best interest, they were more than happy to switch over to our automated processes. That freed them up to focus on development and furthering the business goals.

From the operations perspective, we were initially just another demanding dev team that wanted more of their precious resources, which were often already spread thin. Coordinating with several different operations teams, each with its own field of responsibility, is a significant undertaking for any team. At first the DevOps team got resistance from various ops teams that felt we had no business trying to do their jobs when they were perfectly capable of doing it themselves. But we didn’t want to do their jobs for them. We wanted to give them tools to make their jobs easier. Why manually create 50 Active Directory objects, each with a slew of details, when a script could easily handle it for you? Why bother tracking static IP address allocation by hand when it could all be done in a database with a nice API? It took a while to find common ground, but eventually many of the time-intensive tasks that consumed much of ops’ precious resources were scripted, allowing them to focus on improving infrastructure and making the site run better.
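
To illustrate the second example, here is a minimal sketch of a self-service IP allocation API. Sinatra and SQLite are our choices for illustration, and the address pool, table, and route are assumptions, not what we actually ran:

require 'sinatra'
require 'sqlite3'

DB = SQLite3::Database.new('ipam.db')
DB.execute('CREATE TABLE IF NOT EXISTS allocations (ip TEXT PRIMARY KEY, hostname TEXT NOT NULL)')

# POST /allocate?hostname=web042 returns the next free address in the pool.
post '/allocate' do
  pool = (10..250).map { |n| "10.20.30.#{n}" }
  used = DB.execute('SELECT ip FROM allocations').flatten
  free = (pool - used).first
  halt 409, 'pool exhausted' if free.nil?
  DB.execute('INSERT INTO allocations (ip, hostname) VALUES (?, ?)', [free, params['hostname']])
  free
end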

As I said before, the team has been around for over a year now. Our vision was, and still is, to provide development with an API for operations. It will be pretty cool one day for a developer to be able to poke a single API and get everything they need to take their code from inception to delivery. On the operations side, it would be pretty cool if all the automatable minutiae were handled automatically, so ops could focus completely on improvements. We’re getting there, and quickly.
