How Random Sampling in Hive Works, And How to Use It

Image Courtesy: https://faculty.elgin.edu/dkernler/statistics/ch01/images/srs.gif

Random sampling is a technique in which each member of the population has an equal probability of being chosen. A sample chosen randomly is meant to be an unbiased representation of the total population.

In the big data world, we have an enormous total population: a population that can prove tricky to truly sample randomly. Thankfully, Hive has a few tools for realizing the dream of random sampling in the data lake. Continue reading
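
The full post goes deeper, but as a minimal sketch (assuming a hypothetical `customers` table), here are three common patterns in HiveQL:

```sql
-- Simple but expensive: global sort on a random key, then take N rows.
-- Forces everything through a single reducer on large tables.
SELECT * FROM customers
ORDER BY rand()
LIMIT 10000;

-- Scales better: spread rows randomly across reducers, sort randomly
-- within each reducer, then take N rows.
SELECT * FROM customers
DISTRIBUTE BY rand()
SORT BY rand()
LIMIT 10000;

-- Bucket sampling: hash rows into 100 buckets on a random key and
-- read just one, for roughly a 1% sample with no global sort.
SELECT * FROM customers
TABLESAMPLE (BUCKET 1 OUT OF 100 ON rand()) s;
```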

Apache Crunch Tutorial 9: Reading & Ingesting Orc Files

This post is the ninth in a hopefully substantive and informative series of posts about Apache Crunch, a framework that makes it easier for Java developers to write MapReduce programs for Hadoop.

Continue reading
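
As a taste of what's ahead, here's a minimal sketch of reading ORC files into a Crunch pipeline. It assumes the crunch-hive module (which provides OrcFileSource and the Orcs PType helpers) is on the classpath, and MyRecord is a hypothetical POJO matching the ORC schema:

```java
import org.apache.crunch.PCollection;
import org.apache.crunch.Pipeline;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache.crunch.io.orc.OrcFileSource;
import org.apache.crunch.types.orc.Orcs;
import org.apache.hadoop.fs.Path;

public class OrcIngestExample {

    // Hypothetical POJO whose public fields mirror the ORC file's schema
    public static class MyRecord {
        public String id;
        public long eventTime;
    }

    public static void main(String[] args) {
        // A MapReduce-backed Crunch pipeline
        Pipeline pipeline = new MRPipeline(OrcIngestExample.class);

        // Read ORC records reflectively into MyRecord instances
        PCollection<MyRecord> records = pipeline.read(
            new OrcFileSource<>(new Path("/data/events/orc"),
                                Orcs.reflects(MyRecord.class)));

        // ... transformations on 'records' would go here ...

        pipeline.done();
    }
}
```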

Guide to Apache Falcon #5: Tighter Oozie Integration

This series is designed to be the ultimate guide on Apache Falcon, a data governance and pipeline management framework for Hadoop. Falcon excels at giving you control over workflow scheduling, data retention and replication, and data lineage. This guide will (hopefully) excel at helping you understand and use Falcon effectively. Continue reading
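
To hint at where the Oozie integration shows up, here's a minimal sketch of a Falcon process entity whose workflow element delegates execution to Oozie (the entity name, cluster, dates, and paths are all hypothetical):

```xml
<process name="sample-etl-process" xmlns="uri:falcon:process:0.1">
    <clusters>
        <cluster name="primary-cluster">
            <validity start="2016-01-01T00:00Z" end="2017-01-01T00:00Z"/>
        </cluster>
    </clusters>
    <parallel>1</parallel>
    <order>FIFO</order>
    <frequency>days(1)</frequency>

    <!-- Falcon hands execution of this workflow off to Oozie -->
    <workflow engine="oozie" path="/apps/etl/workflow.xml"/>

    <retry policy="periodic" delay="minutes(10)" attempts="3"/>
</process>
```

When an entity like this is submitted and scheduled, Falcon generates and manages the corresponding Oozie coordinator and workflow on your behalf.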

Preparing for the HDPCD Exam: Data Ingestion

In order to do big data, you need… DATA. No surprise there! Hadoop is a different beast from other environments, and getting data into HDFS can be a bit intimidating if you’re not familiar with it. If only there were good documentation about these tasks…

Luckily there is good documentation! This post will cover the basics involved in ingesting data into a Hadoop cluster using the HDPCD exam study guide. Continue reading
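
As a small taste, the simplest ingestion tasks on the exam come down to HDFS shell commands like these (the paths and username are hypothetical):

```bash
# Make a landing directory in HDFS
hdfs dfs -mkdir -p /user/horton/landing

# Copy a local file into HDFS
hdfs dfs -put /tmp/sales.csv /user/horton/landing/

# Confirm the file arrived, and check its permissions
hdfs dfs -ls /user/horton/landing

# Tighten permissions on the ingested file
hdfs dfs -chmod 640 /user/horton/landing/sales.csv
```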

What is a Hadoop Developer?

The Big Data industry has a problem: what makes a Hadoop Developer? Is it someone who has general knowledge of the many tools in a typical Hadoop ecosystem? Or is it someone who regularly commits to the Apache projects and pushes Hadoop to new levels? I think the answer lies somewhere in between. Continue reading

Moving Data Within or Between Hadoop Clusters with DistCP

Copying chunks of data in and around Hadoop is relatively trivial, but moving larger chunks can be time-consuming or needlessly complicated. Sometimes you even want to move data between Hadoop clusters (if you have two or more). In this article, I’ll show you a great way to handle all of these scenarios. Continue reading
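
The post digs into the details, but the basic DistCp invocations look like this (the NameNode hostnames and paths are hypothetical):

```bash
# Copy within a single cluster
hadoop distcp /data/source /data/target

# Copy between clusters by addressing each NameNode explicitly
hadoop distcp hdfs://nn1.example.com:8020/data/source \
              hdfs://nn2.example.com:8020/data/target

# -update copies only files missing or changed at the target;
# -p preserves file attributes such as permissions and ownership
hadoop distcp -update -p /data/source /data/target
```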

Pulling Data from Teradata to Hadoop with Apache Sqoop

If you have a Hadoop cluster, it’s rare that you don’t have some traditional row-column data you want to query. To query that RDBMS (Relational Database Management System) data alongside the rest of your data, you’ll want to pull it from its source system (perhaps a SQL Server, Oracle Database, or Teradata warehouse) and store it on Hadoop. Continue reading
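
As a preview, a basic Teradata import looks something like this (the host, database, table, and credentials are hypothetical, and the Teradata JDBC driver must be on Sqoop’s classpath):

```bash
sqoop import \
  --connect jdbc:teradata://td.example.com/DATABASE=sales \
  --driver com.teradata.jdbc.TeraDriver \
  --username etl_user -P \
  --table TRANSACTIONS \
  --target-dir /data/sales/transactions \
  --num-mappers 4
```

The -P flag prompts for the password interactively, and --num-mappers controls how many parallel map tasks pull slices of the table.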