If your Hadoop cluster allows you to connect to Hive through the command-line interface (CLI), you can very easily export a Hive table to a CSV file.
It takes only a few lines of code, which I’ve written into a couple of short bash/shell scripts:
Approach One (Hive Insert Overwrite a Directory):
This approach writes the contents of a Hive table to a local (Linux) path in as many files as it needs. It then uses the Linux “cat” command to merge all the part files into a single CSV. Here’s what happens:
- Shebang line (optional).
- A command issued to Hive that selects all records from the table, separates the fields/columns with commas, and writes the output to a local directory (wiping anything previously in that path).
- A “cat” command that merges all the part files in that directory (remember, the output comes from a MapReduce job) into a single .csv file.
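The steps above can be sketched as a short script like the one below. The database, table, and path names (`my_db.my_table`, `/tmp/my_table_export`) are placeholders for illustration, not from the original post:

```shell
#!/bin/bash
# Approach One: export a Hive table to a local directory, then merge the
# part files into one CSV. Names and paths here are hypothetical.

hive -e "
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/my_table_export'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
SELECT * FROM my_db.my_table;
"

# The MapReduce job writes one or more part files; cat merges them.
cat /tmp/my_table_export/* > my_table.csv
```

Note that INSERT OVERWRITE wipes the target directory first, so point it at a scratch path you don’t mind losing.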
Approach Two (Hive CSV Dump Internal Table):
This approach writes a table’s contents to an internal Hive table called csv_dump, delimited by commas and stored in HDFS as usual. It then uses the Hadoop filesystem command “getmerge”, which does the equivalent of the Linux “cat”: it merges all files in a given directory and produces a single file in another given directory (which can even be the same directory).
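A sketch of this approach is below. The source table name and the warehouse path are assumptions — `csv_dump` lands wherever your Hive warehouse directory is configured, which is commonly `/user/hive/warehouse` by default:

```shell
#!/bin/bash
# Approach Two: dump the table into a comma-delimited internal Hive table,
# then getmerge its HDFS files into one local CSV. Table/path names are
# placeholders; the warehouse location depends on your cluster's config.

hive -e "
DROP TABLE IF EXISTS csv_dump;
CREATE TABLE csv_dump
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
AS SELECT * FROM my_db.my_table;
"

# getmerge concatenates every file under the HDFS directory into one
# local file, like a distributed version of cat.
hadoop fs -getmerge /user/hive/warehouse/csv_dump my_table.csv
```

Remember to drop (or ignore) the csv_dump table afterward if you don’t want it lingering in your warehouse.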
In either approach, that .csv file now lives on your local edge node, and it can be placed into HDFS, used in other scripts, or SCP’d to your local desktop. It’s an efficient, easy way to get the contents of a Hive table into a format that’s easily readable by both humans and applications.
For more Hadoop tutorials, subscribe to this blog (button in sidebar).