Econometrics at Scale
This GitHub repository contains the source code and data needed to reproduce the parallel computing exercises described in the paper:
Econometrics at Scale: Spark Up Big Data in Economics
Benjamin Bluhm & Jannic Cutura
This paper provides an overview of how to use "big data" for economic research. We investigate the performance and ease of use of different Spark applications running on a distributed file system, which enable the handling and analysis of data sets that were previously unusable due to their size. More specifically, we explain how to use Spark to (i) explore big data sets that exceed the memory of retail-grade computers and (ii) run typical econometric tasks, including microeconometric, panel data, and time series regression models that are prohibitively expensive to evaluate on stand-alone machines. By bridging the gap between the abstract concept of Spark and ready-to-use examples that can easily be altered to suit the researcher's needs, we provide economists, and social scientists more generally, with the theory and practice to handle ever-growing data sets. The ease of reproducing the examples in this paper makes this guide a useful reference for researchers with a limited background in data handling and distributed computing.
This repository contains all code needed to replicate the results of our paper.
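The repository's exercises run regressions with Spark on a distributed file system. The reason such regressions scale is that OLS only needs sufficient statistics, which can be accumulated partition by partition and then combined, so no single machine ever has to hold the full data set. The snippet below is a minimal, stdlib-only sketch of that map-reduce idea (it is not the repository's Spark code; the function names and the chunked data are illustrative assumptions):

```python
# Hypothetical sketch of the map-reduce idea behind distributed OLS:
# each "partition" contributes partial sums (n, sum x, sum y, sum x^2, sum xy),
# which are then combined to solve a simple linear regression y = a + b*x.

def partial_sums(chunk):
    """Map step: sufficient statistics for one partition of (x, y) pairs."""
    n = sx = sy = sxx = sxy = 0.0
    for x, y in chunk:
        n += 1.0
        sx += x
        sy += y
        sxx += x * x
        sxy += x * y
    return n, sx, sy, sxx, sxy

def ols(chunks):
    """Reduce step: combine per-partition sums, then solve for (intercept, slope)."""
    n = sx = sy = sxx = sxy = 0.0
    for stats in map(partial_sums, chunks):
        n, sx, sy, sxx, sxy = [a + b for a, b in zip((n, sx, sy, sxx, sxy), stats)]
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope

# Toy data generated as y = 2 + 3*x, split across three "partitions"
chunks = [[(float(x), 2.0 + 3.0 * x) for x in range(i, i + 5)] for i in (0, 5, 10)]
intercept, slope = ols(chunks)
```

In Spark this same pattern is what `map` followed by `reduce` (or the built-in ML estimators) performs across executors; only the small tuples of sums travel over the network.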
Link to the paper:
| Time series | | See Chapter 4.4 for simulation code |
(1) The original data can be obtained from the FFIEC website.