Blog - Sunset Days

No, this is not about at which time the sun sets. It will set soon enough. No, this is about the world, about economics, politics, software, technology, and everything.

I once saw graffiti on a rust-brown railway bridge, defiantly announcing in large, bright letters: "Sunrise people, sunset days" - a quote from a song by the band Standstill. This was 2016. And while social movements of all times have harboured a gloomy view of their respective present, recent political events added an alarming timeliness to those letters. Sunset days indeed. Let us all be sunrise people (well, not literally perhaps); let us work towards a new dawn.

April 10 2019 03:31:14.

EAEPE 2019 submission deadline extended

The submission deadlines for this year's EAEPE sessions have been extended until April 15, Monday next week. So the chance to submit a contribution is still there, both for our regular Complexity Economics session and for our special session on "Machine learning, AI, and Robotization: Effects on socio-economic systems and opportunities for economic analysis".

Incidentally, Ars Technica is running a special piece on exactly this topic, the social effects of AI and machine learning. Of course, you can't expect too much from a popular science article, but that said, it is a pretty comprehensive introduction. I like that it goes all the way to deep fakes and generative adversarial networks at the end of the article.

March 28 2019 00:33:22.

EAEPE 2019 Sessions: "Machine learning, AI, and Robotization" and "Complexity Economics"

At this year's EAEPE conference, we will have a special session on "Machine learning, AI, and Robotization: Effects on socio-economic systems and opportunities for economic analysis", organized together with Alessandro Caiani, Andrea Roventini, and Magda Fontana. Machine learning, AI, and robotics are very powerful tools that may go a long way in transforming our economy, our society, and our academic system, including economics - for better or for worse. We do in any case live in very exciting times - so many new methods and data sources are becoming available, and so many new concepts of analysis and of intervention are becoming feasible.

All this and more will be discussed in the special session. For details, see the call for papers for the special session on Machine Learning.

In addition, we will also have our regular session in Research Area [Q] - Economic Complexity. This research area was created in 2017 and participated in the conference for the first time that same year in Budapest. This year, it will take place for the third time. We had very inspiring discussions in the past years and are looking forward to again providing a space to discuss the role of complex systems in the economy.

See the call for papers for the sessions of Research Area [Q] - Economic Complexity for details.

The submission deadline in both cases is next Monday, April 1st. Abstracts should be submitted in the online submission system.

September 26 2018 10:30:26.

CCS 2018 Satellite Meeting

This week, we will hold our CCS 2018 satellite meeting on "Trade runner 2049: Complexity, development, and the future of the economy" (again co-organized with Claudius Gräbner).

The title is, of course, a tribute to a recent movie combined with the idea that we are working to understand the future of the economy. Nevertheless, the range of talks will actually be quite diverse, not just about development economics.

I am very excited about our keynote, which will be given by Magda Fontana with a focus on the geographical distribution of novel and interdisciplinary research - a topic that researchers can relate to at a personal level.

Update: The satellite meeting has been shifted to the afternoon session (Wednesday September 26 2018). The programme in detail:

Magda Fontana (University of Turin): Science, novelty and interdisciplinarity: topics, networks and geographical evolution. (See conference website)
Coffee Break
Paolo Barucca, Piero Mazzarisi, Daniele Tantari and Fabrizio Lillo (University College London): A dynamic network model with persistent links and node-specific latent variables, with an application to the interbank market. (See conference website)
Jessica Ribas and Eli Hadad, Leonardo Fernando Cruz Basso, Pedro Schimit and Nizam Omar (Universidade Presbiteriana Mackenzie): A Case Study About Brazilian Inequality Income Using Agent-based Modelling and Simulation. (See conference website)
Sabine Jeschonnek (Ohio State University at Lima): That Syncing Feeling - The Great Recession, US State Gross Domestic Product Fluctuations, and Industry Sectors. (See conference website)
František Kalvas (University of West Bohemia in Pilsen): Experience in the Retirement Fund Problem. (See conference website)

July 22 2018 02:29:28.

ZIP to NUTS code correspondence files for Albania and Serbia

When dealing with economic micro-data, a consistent division into geographical regions is very useful. For much of Europe - the EU, EU candidate countries, and EFTA countries - Eurostat's Nomenclature of Territorial Units for Statistics (NUTS) provides exactly this. Especially useful, if you have data that does not include NUTS codes but, for instance, addresses, are the postcode-to-NUTS correspondence tables provided by Eurostat. Of course, they are not entirely consistent and may require some cleaning. For instance, for Luxembourg and Latvia, they include the country code in the postcode (""); for Montenegro, several ZIP codes are listed twice (a bit silly, especially since all of Montenegro maps into the same NUTS3 region).
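A cleaning step along those lines might look like this in pandas - a minimal sketch, assuming the table has been read into a data frame with columns CODE and NUTS3 (the column names, the function name, and the country-code prefix pattern are assumptions, not Eurostat's actual layout):

```python
import pandas as pd

def clean_correspondence(df):
    """Clean a postcode-to-NUTS correspondence table: strip country-code
    prefixes (like the ones in the Luxembourg and Latvia files) from the
    postcodes and drop ZIP codes that are listed twice (as in the
    Montenegro file)."""
    df = df.copy()
    # "one or two capital letters plus a dash" is an assumption
    # about what the prefixes look like
    df["CODE"] = df["CODE"].str.replace(r"^[A-Z]{1,2}-", "", regex=True)
    # keep only the first occurrence of each ZIP code
    return df.drop_duplicates(subset="CODE")
```

A file could then be cleaned with something like clean_correspondence(pd.read_csv(filename, sep=";", names=["CODE", "NUTS3"])).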

More annoying, however, is that no correspondence files are provided as yet for Albania and for Serbia. For Albania, this is relatively straightforward to compile as both postal codes and NUTS regions are nicely aligned with the country's administrative divisions. Here is a postal code to NUTS code correspondence file for Albania in the same format as the ones provided for other countries by Eurostat. Here is how to build it:

# assume the dataset is a pandas data frame named df with a column ZIPCODE
zipcodes = df.ZIPCODE.unique()
# the first two digits of an Albanian postal code determine the NUTS3 region
zipToNUTS3 = {"10": "AL022",
              "15": "AL012",
              "20": "AL012",
              "25": "AL022",
              "30": "AL021",
              "33": "AL021",
              "34": "AL021",
              "35": "AL021",
              "40": "AL015",
              "43": "AL015",
              "44": "AL015",
              "45": "AL014",
              "46": "AL014",
              "47": "AL014",
              "50": "AL031",
              "53": "AL031",
              "54": "AL031",
              "60": "AL033",
              "63": "AL033",
              "64": "AL033",
              "70": "AL034",
              "73": "AL034",
              "74": "AL034",
              "80": "AL011",
              "83": "AL011",
              "84": "AL011",
              "85": "AL013",
              "86": "AL013",
              "87": "AL013",
              "90": "AL032",
              "93": "AL032",
              "94": "AL035",
              "97": "AL035"}
# write one "postcode;NUTS3" line per ZIP code present in the data
with open("NUTS/pc_al_NUTS-2013.csv", "w") as outputfile:
    for z in zipcodes:
        outputfile.write("{0:s};{1:s}\n".format(z, zipToNUTS3[z[:2]]))
Note that this compiles a short correspondence file with only the ZIP codes you actually have. The file below has all of them.

For Serbia it is a bit more complicated and has to be pieced together from the postal codes (see here on the Serbian language wikipedia) and the NUTS regions for Serbia. The NUTS regions actually are aligned with the oblasts, see here. Building the correspondence file is therefore similar to the code for Albania above, just a bit longer. Oh, and here are general correspondence files for Albania and Serbia.
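A sketch of how the Serbian correspondence could be pieced together, analogous to the Albania snippet above; only the Belgrade prefix is filled in as an illustration (postal codes starting with 11 belong to the City of Belgrade, NUTS3 code RS110), and the function name is made up - the remaining prefixes would have to be filled in from the sources linked above:

```python
# prefix-to-NUTS3 table for Serbia; only "11" (City of Belgrade, RS110)
# is given here as an example -- extend from the Serbian Wikipedia list
# of postal codes and the oblast-to-NUTS correspondence
zipToNUTS3_rs = {"11": "RS110"}

def lookup_nuts3(zipcode, table=zipToNUTS3_rs):
    """Return the NUTS3 code for a Serbian postal code,
    or None if the prefix is not (yet) in the table."""
    return table.get(zipcode[:2])
```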

June 23 2018 21:09:46.

Democracy in Europe

Yesterday, in what could be described as a last-ditch effort to save polio from eradication, Italy's deputy prime minister and far-right populist Matteo Salvini came out as a vaccination denier. He said he believes vaccinations to be useless and harmful.

Salvini is clearly in a great position. He can shoot against the establishment from the second row while completely controlling Italy's politics. Giuseppe Conte, the powerless prime minister, and Luigi Di Maio, who allows himself to be treated mostly like a child by Salvini, will certainly not stop him.

The European far right clearly learned their lessons from Donald Trump's approach to multi-media politics very well: 1. The truth really doesn't matter. 2. Tell everyone what they want to hear. Consistency doesn't matter either. 3. Claim to be some kind of anti-establishment figure. 4. Foment hatred against minorities. 5. Create as much outrage as you can. 6. Seek alliances with anyone who also doesn't care about the truth; vaccination deniers, Neo-Nazis, climate change deniers, young earth creationists, theocons, Putin apologists.

Of course, all this is not exactly new. We have gotten used to the political escapades the Le Pen family blesses us with every few years. Some may still remember Pim Fortuyn. Or maybe the scandal-ridden ÖVP-FPÖ government of Wolfgang Schüssel in Austria, something Sebastian Kurz seems intent on repeating with a bit of personality cult added. Germany, in turn, recently repopularized the term "Lügenpresse" that even made it to Trump's 2016 electoral campaign. Fallen into disuse after the defeat of Nazi Germany in 1945, the term was resurrected first by the local xenophobic protest movement of Lutz Bachmann, a notorious liar and petty criminal, then by various AfD politicians; while the CSU still avoids using the term directly, they certainly took a page out of Lutz Bachmann's and the AfD's playbooks.

Beyond these more recent movements that rode the wave of twitter and social media driven politics, some older and more stoic types of far-right politicians have already succeeded in practically taking over their countries. Now, armed with new ideas of how to make use of social media, they are about to chop away at their countries' democratic institutions. Jarosław Kaczyński and Viktor Orbán are creating far-rightist autocracies in Poland and Hungary. Vladimir Putin has succeeded in building a right-wing authoritarian state in Soviet style with just a pinch of religious conservatism. His approach too, albeit different from Trump's and Salvini's bombastic twitter rants, is succeeding prodigiously. The attitudes of the Russian public are changing; they are becoming progressively more conservative, religious, and anti-gay. Gay people and liberal activists were the first to suffer, but they will not be the last. Meanwhile, with regard to Turkey, nobody really knows what Recep Tayyip Erdoğan has in mind for the country; it appears to be some kind of theocracy. (He would honestly make a much better caliph than al-Baghdadi.) In any case, he appears set to win tomorrow's elections.

While all is not yet lost, and while there are good strategies to counter the surge of the far right, recent developments warrant a look at the map. Ian Goldin and Chris Kutarna famously offered a strangely optimistic view of how democracy is spreading around the world with two maps showing the world's democracies in 1988 and 2015 on pages 32 and 35 of their recent book "Age of Discovery". The message of the comparison seems clear and obvious: everything is fine; we are entering a golden age. Well, here are two different maps.

State of democracies in Europe and surrounding regions in 1998 and in 2018

June 23 2018 18:52:27.

CCS 2018 Satellite Meeting on "Trade Runner 2049: Complexity, Development, and the Future of the Economy"

Claudius Gräbner and I organized two very successful CCS satellite meetings in 2016 and 2017. I am very much looking forward to continuing this at the CCS conference in Thessaloniki in September. This year, we chose to place the focus on development and trade - hence the tongue-in-cheek title "Trade Runner 2049" - as well as on machine learning and its challenges and potential for economics and the economy.

The abstract submission is still open, but the deadline is coming up rather soon - next week Tuesday, June 26. As always we are looking forward to receiving many inspiring and insightful papers and to a great satellite meeting in September. See the satellite meeting website for the call for papers and other details.

June 08 2018 02:02:43.

Economics LaTeX Bibliography (bibtex) styles

There are very many LaTeX bibliography styles out there. Most people use only a handful that are common in their field; some journals compile their own in order to avoid confusion and chaos. The various styles differ significantly. But no matter how unusual a bibliography and citation layout you are looking for, most likely someone needed it before and has compiled a bibtex style file (bst) for it. So far so good.

Now you just have to find the bst that is to your liking. You can have a look around CTAN. If you have one of the more extensive LaTeX distributions installed, it likely comes with a great number of styles, just search your system for .bst files. And then you can try every one of them until you find what you need. Good luck with that; it is very tiresome.
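Searching your system can be done with find, for instance - a small sketch in which the function name findbst and the default texmf location are assumptions (the location varies between distributions):

```shell
# list all bibtex style files below a directory; the default location
# /usr/share/texmf is an assumption and varies between distributions
findbst() {
    find "${1:-/usr/share/texmf}" -name "*.bst" 2>/dev/null
}
```

Called as findbst or findbst /path/to/texmf.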

Luckily there are a few comparisons out there, e.g. the one by Russ Lenth from the University of Iowa. (It is not the most extensive one but I always have trouble finding the others again.)

Sadly, nothing like this exists for economics packages. Until recently. Since I needed it a few months ago, here it comes.

A bit more explanation: A few years ago, Arne Henningsen (arnehe) thankfully collected a number of common bibtex styles used by economics journals. They can be found at CTAN and at Sourceforge, or in your LaTeX distribution (depending on which one you have; texlive includes them). Also see the website of the economtex project. Sadly, the project mostly comprises styles for fairly neoclassical journals and little from the heterodox world. This may be because many heterodox journals are simply too small to maintain their own styles and rely on styles provided by the publishers, or (as is the case for JoIE, for instance) do not allow submission in LaTeX at all.

The pdf document provided here shows in comparison what the styles look like. (Note that some of them require additional packages, such as \usepackage{natbib,har2nat} and \usepackage{ulem}.)
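For completeness, using one of these styles in a document might look as follows - a minimal sketch in which the style name aer and the bibliography file name references are placeholders; substitute whichever bst you picked:

```latex
% minimal usage sketch; "aer" and "references" are placeholders
\documentclass{article}
\usepackage{natbib,har2nat} % required by some of the styles
\usepackage{ulem}           % ... and by some others
\begin{document}
As shown by \citet{smith2000}, \dots
\bibliographystyle{aer}     % the bst file, without extension
\bibliography{references}   % references.bib
\end{document}
```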

October 08 2017 00:14:08.

How to list most disk space consuming directories in UNIX/GNU/Linux

Yes, you can list files with ls -l with any number of options and see large files in one directory. But what if you want a sum of the hard drive space everything in a directory occupies? This would be very helpful when cleaning up your hard disk, wouldn't it? It turns out this can be done (for the entries of the current directory) with du -sh *. But that generates an excessive amount of output, printing every single entry, no matter how small. And du does not sort. With a bit of effort, we can clean that up. Let's write a script, which we call llama (for list large and massive agglomerations):
    llama() {
        du -sh "$@" | grep -E "^[0-9]{3}G"$'\t' | sort -r &&
        du -sh "$@" | grep -E "^[0-9]{2}G"$'\t' | sort -r &&
        du -sh "$@" | grep -E "^[0-9](\.[0-9])?G"$'\t' | sort -r &&
        du -sh "$@" | grep -E "^[0-9]{3}M"$'\t' | sort -r
    }
    llama "$@"
This lists subdirectories and files with contents larger than 100MB for any given directory. We have to set the executable flag

    chmod +x llama
and can now use it to list large amounts of data in the current directory

    ./llama *
or for another directory by preceding * with the directory's path. We can even source the script to have it available as a command in our shell directly

    source ./llama

It can then also be executed by simply stating llama *
Example (for the system root directory): llama /*
will show something like

746G	/home
28G	/mnt
15G	/usr
8.3G	/var
169M	/boot
161M	/opt
It will also give a large number of errors unless you are root ("cannot read directory ... Permission denied"), so you may want to redirect STDERR to /dev/null in order to actually get to see your output in a concise form without thousands of errors: llama /* 2>/dev/null
Note: If you also want to see data sizes between 10MB and 100MB or so, you can simply add another line. However, the script is not very efficient in terms of computation time as it executes du several times. du over large directory trees can take a substantial amount of time to complete.
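If your sort supports the -h flag for human-numeric sorting (GNU coreutils does; this is an assumption about your system), a single-pass variant avoids running du several times - the function name llamah is made up here:

```shell
# single-pass variant: run du once, sort the human-readable sizes,
# and show only the 20 largest entries
llamah() {
    du -sh "$@" 2>/dev/null | sort -rh | head -n 20
}
```

It is called the same way, e.g. llamah /*.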

September 21 2017 09:34:17.

CCS 2017 Satellite Meeting

Today, finally, is the day of our CCS 2017 satellite meeting on "Institutions, Industry Structure, Evolution - Complexity Approaches to Economics" (co-organized with Claudius Gräbner). In spite of a few last minute changes, we now have a great programme with a keynote by Hyejin Youn from Northwestern University, Chicago and many interesting contributions by, among others, Stojan Davidovic, Francesca Lipari, Neave O'Cleary, Ling Feng, and Carlos Perez. See the programme in pdf format here; details can be found on the website.

May 27 2017 20:48:34.

CCS Satellite Meeting on "Institutions, Industry Structure, Evolution - Complexity Approaches to Economics"

We - Claudius Gräbner and I - are again organizing a Satellite Meeting of the Conference on Complex Systems 2017 in Cancun. The broad topic is again complexity economics, but we are putting a focus on aspects of institutionalist economics, industrial organization, and evolutionary approaches. For details, see the call on the website of the satellite. As one of the more important interdisciplinary conferences on complex systems, the gathering, organized annually by the Complex Systems Society, is of immense importance to complexity economics. With a moderate but increasing number of economists attending the CCS every year, we can hope that it contributes greatly to spreading the use of network theory, agent-based modelling, simulation, and data science in general in economics. What is more, complexity economics neatly complements the perspectives of institutional economics, evolutionary economics, and industrial economics (to name a few).

May 21 2017 21:22:43.

R and Stata

Though R is not only open-source but arguably also the more powerful language for statistical computing, many statisticians and especially econometricians continue to use and teach Stata. One reason is that they were first introduced to Stata, giving rise to a classic case of path-dependent development. When I found myself in exactly that situation two years ago, I collected a number of R commands and their Stata equivalents (or vice versa) into a pdf. Maybe it will be of use to some of you.

May 03 2017 06:07:20.

EAEPE Research Area for Complexity Economics established

The EAEPE (European Association for Evolutionary Political Economy) has established a new Research Area [Q] for complexity economics. Together with Magda Fontana from Torino and Wolfram Elsner from Bremen, I will serve as Research Area Coordinator for this branch.

It is our hope to promote complexity economics, to bring more scholars working in this exciting and promising field into the EAEPE, and to encourage cooperation with scholars in existing Research Areas such as simulation (RA [S]), networks (RA [X]), and technological change (RA [D]).

Research Area [Q] will already participate in the 2017 annual conference in Budapest. Please see the research area specific call for papers.

May 02 2017 21:21:19.

Stack conversion for image files

Suppose you need to convert a large number of images and do not want to do it by hand. This can be done with simple shell scripts using ImageMagick's convert. Say you want to convert a stack of files from png to jpg (replace the pattern appropriately):
    for f in *.png
    do
        fnew="${f%.png}.jpg"
        convert "$f" -background white -flatten "$fnew"
    done
... or the other way around (jpg to png). And maybe you also want to replace black areas in those files by transparent areas at the same time (may help make presentations look more fancy):
    for f in *.jpg
    do
        fnew="${f%.jpg}.png"
        convert "$f" -transparent black "$fnew"
    done
Or maybe you just want to rename a stack of files (replace the filename pattern and the text_to_replace and replacement_text patterns appropriately):
    for f in filename_pattern*.pdf
    do
        nf=$(echo "$f" | sed 's/text_to_replace/replacement_text/')
        mv "$f" "$nf"
    done

May 02 2017 04:15:56.

How to join pdf files with a certain filename pattern

This can be done with a simple python script like so. Works with Linux, perhaps also with BSD, Mac, and, less likely, Windows:
    #!/usr/bin/env python
    import glob, sys

    # Import the appropriate pyPDF for python2 or python3 respectively
    try:
        from pyPdf import PdfFileReader, PdfFileWriter
    except ImportError:
        from PyPDF2 import PdfFileReader, PdfFileWriter

    # Attempt to apply pattern supplied as argument and generate list of pdfs
    if len(sys.argv) > 1:
        pdflist = sorted(glob.glob(sys.argv[1]))
    else:
        print("no pattern given, you may give a pattern for pdfs to include,\
                                    keep in mind to escape * etc. like so:")
        print("    python2 \"\*.pdf\"")
        print("    python2 \"pattern-\*-pattern.pdf\"")
        pdflist = sorted(glob.glob("*.pdf"))

    output = PdfFileWriter()
    for pdf in pdflist:
        # Read pdf files one by one and collect all pages into output
        input = PdfFileReader(open(pdf, "rb"))
        for i in range(input.getNumPages()):
            output.addPage(input.getPage(i))

    # Write joined pdf
    outputStream = open("joined.pdf", "wb")
    output.write(outputStream)
    outputStream.close()

May 01 2017 11:56:27.

How to run a shell script in a loop until it succeeds

I admit, there is only a limited number of use cases for this. But it may sometimes happen that you want to run a script in a loop again and again until it succeeds. Say, you are on an unstable network and are trying to obtain an IP address using dhcpcd:
    exitcode=1
    while [[ $exitcode -ne 0 ]]
    do
        dhcpcd wlan0
        exitcode=$?
        echo $exitcode
    done