For several years now Ancestry has been publishing collections of records from the U.S. that have been “transcribed” using a method we call Entity Extraction. One example is the U.S. City Directory collection. A precursor to modern telephone books, city directories listed all of the inhabitants of a city, along with their address, occupation, and …
At Ancestry we quickly analyze billions of rows of data to deliver insights from our massive database to internal and external audiences. To do this, we need tools that have the right capabilities and can be easily adopted by our tech team. In a recent Q&A video, Bill Yetman, VP of Commerce, Data and Analytics, …
Technological evolution is a natural, recurring process throughout human history. Some people embrace it while others are hesitant, but regardless of attitudes, new technology eventually arrives. That is how technology trends advance, how quality of life advances, and how human civilization advances. As a technology …
The Ancestry science team gives two platform presentations on Friday, October 9th, at the human genetics conference ASHG. With over one million customers who have submitted their DNA, AncestryDNA has one of the largest and fastest-growing collections of human genetic data in the world. That amount of DNA data enables the AncestryDNA science team to perform …
Big Data has been all the rage. Business, marketing, and project managers like it because they can plot trends to inform decisions. To us developers, Big Data is just a bunch of logs. In this blog post, I would like to point out that Big Data (or logs with context) can be leveraged by …
When interpreting historical documents to research your ancestors, you are often presented with less-than-perfect data. Many of the records that form the backbone of family history research are bureaucratic scraps of paper filled out decades ago in some government building. We should hardly be surprised when the data entered is …
We have built out an initial logging framework with Kafka 0.7.2, a messaging system developed at LinkedIn. This blog post will go over some of the lessons we’ve learned building out the framework here at Ancestry.com. Most of our application servers are Windows-based, and we want to capture IIS logs from these servers. However, …
One of the real advantages of a system like Hadoop is that it runs on commodity hardware, which keeps your hardware costs low. But when that hardware fails at an unusually high rate, it can really throw a wrench into your plans. This was the case recently when we set up a new cluster …
Interested in genealogy? Curious about DNA? Fascinated by the world of big data? If so, come check out my talk at the Global Big Data Conference on DNA day this Friday, April 25, at 4pm PT in the Santa Clara Convention Center! I’ll cover Jermline, our massively scalable DNA matching application. I’ll talk about our business, give a run-through …
In my previous posts, I outlined how to import data into Hive tables using Hive scripts and dynamic partitioning. However, we’ve found that this only works for small batch sizes and does not scale to larger jobs. Instead, we found that it is faster and more efficient to partition the data as they are …