It's no secret that at bit.ly we enjoy working with large sets of data. We <3 data.
We like to store large amounts of data in JSON format because it's compact yet flexible: you can add and remove data elements in the future without any extra work.
To make working with large files of JSON data easier, I've released json2csv, which extracts specified fields from JSON records into CSV format.
This is useful because once the data is in CSV format, it becomes possible to use the standard Linux toolchain of awk, grep, sort, uniq, and others.
Since it's written in C, json2csv is fast enough to process 50k records a second.
usage: json2csv
    -k fields,to,output
    -i /path/to/input.json (optional; default is stdin)
    -o /path/to/output.csv (optional; default is stdout)
    -v verbose output (to stderr)
    -h this help
json2csv -k login,remote_ip -i input.json -o output.csv

cat input.json | json2csv -k login,remote_ip > output.csv
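Once the fields are out in CSV, the standard toolchain takes over. Here's a sketch of counting which logins appear most often; the field names and data are hypothetical, and printf stands in for json2csv output so the pipeline runs on its own:

```shell
# In a real run, the first stage would be:
#   json2csv -k login,remote_ip -i input.json
# Here, printf emits equivalent CSV so the example is self-contained.
printf 'alice,10.0.0.1\nbob,10.0.0.2\nalice,10.0.0.3\n' |
awk -F, '{print $1}' |   # keep only the login column
sort | uniq -c |         # count occurrences of each login
sort -rn                 # most frequent first
```

The same shape works for any field you extract: pull the column with awk, then let sort and uniq do the aggregation.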