Blog - Sunset Days

No, this is not about the time at which the sun sets. It will set soon enough. No, this is about the world, about economics, politics, software, technology, and everything.

I once saw a piece of graffiti on a rust-brown railway bridge, defiantly announcing in large, bright letters: "Sunrise people, sunset days", actually a quote from a song by the band Standstill. This was in 2016. And while social movements of all times have harboured a gloomy view of their respective present, recent political events added an alarming topicality to those letters. Sunset days indeed. Let us all be sunrise people (well, not literally perhaps); let us work towards a new dawn.


October 08 2017 00:14:12.

How to list the most disk-space-consuming directories in UNIX/GNU/Linux

Yes, you can list files with ls -l with any number of options and see large files in one directory. But what if you want a sum of the hard drive space everything in a directory occupies? This would be very helpful when cleaning up your hard disk, wouldn't it? It turns out this can be done (for the entries of the current directory) with du -sh *. But that generates an excessive amount of output, printing every single entry, no matter how small. And du does not sort. With a bit of effort, we can clean that up. Let's write a script which we call llama.sh (for list large and massive agglomerations):
    #!/bin/bash
    
    llama() { 
        # Filter du's output into size brackets (100GB+, 10-99GB, 1-9GB,
        # 100-999MB) and sort each bracket in descending order
        du -sh "$@" | grep -E $'^[0-9][0-9][0-9]G\t' | sort -r && \
        du -sh "$@" | grep -E $'^[0-9][0-9]G\t' | sort -r && \
        du -sh "$@" | grep -E $'^[0-9](\\.[0-9])?G\t' | sort -r && \
        du -sh "$@" | grep -E $'^[0-9][0-9][0-9]M\t' | sort -r
    }
    
    # Run llama only when the script is executed; when sourced, just define it
    if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
        llama "$@"
    fi
This lists subdirectories with contents larger than 100MB for any given directory. We have to set the executable flag

    chmod +x llama.sh
and can now use it to list large amounts of data in the current directory

    ./llama.sh *
or for another directory by preceding * with the directory's path. We can even source the script to have the llama function available as a command in our shell directly

    source llama.sh
It can then be executed by simply stating

    llama *
Example:

    llama /*
will show something like

    746G	/home
    28G	/mnt
    15G	/usr
    8.3G	/var
    169M	/boot
    161M	/opt
It will also give a large number of errors unless you are root ("cannot read directory ... Permission denied"), so you may want to redirect STDERR to /dev/null in order to actually get to see your output:

    llama /* 2>/dev/null
Note: If you also want to see sizes between 10MB and 100MB or so, you can simply add another line, as sketched below. However, the script is not very efficient in terms of computation time, as it executes du several times, and du over large directory trees can take a substantial amount of time to complete.
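For instance, a line like the following, inserted before the last du line of the chain (a sketch following the pattern of the existing lines, including the trailing && \ that chains it to the next line), would add the 10MB-99MB bracket:

    du -sh "$@" | grep -E $'^[0-9][0-9]M\t' | sort -r && \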


September 21 2017 09:34:17.

CCS 2017 Satellite Meeting

Today, finally, is the day of our CCS 2017 satellite meeting on "Institutions, Industry Structure, Evolution - Complexity Approaches to Economics" (co-organized with Claudius Gräbner). In spite of a few last-minute changes, we now have a great programme with a keynote by Hyejin Youn from Northwestern University, Chicago, and many interesting contributions by, among others, Stojan Davidovic, Francesca Lipari, Neave O'Cleary, Ling Feng, and Carlos Perez. See the programme in pdf format here; details can be found on the website.


May 27 2017 20:48:34.

CCS Satellite Meeting on "Institutions, Industry Structure, Evolution - Complexity Approaches to Economics"

We, Claudius Gräbner and I, are again organizing a Satellite Meeting of the Conference on Complex Systems 2017 in Cancun. The broad topic is again complexity economics, but this time we are putting a focus on aspects of institutionalist economics, industrial organization, and evolutionary approaches. For details, see the call on the website of the satellite. As one of the more important interdisciplinary conferences on complex systems, the gathering, organized annually by the Complex Systems Society, is of immense importance to complexity economics. With a moderate but increasing number of economists attending the CCS every year, we can hope that it contributes greatly to spreading the use of network theory, agent-based modelling, simulation, and data science in general in economics. What is more, complexity economics neatly complements the perspectives of institutional economics, evolutionary economics, and industrial economics (to name a few).


May 21 2017 21:22:43.

R and Stata

Though R is not only open-source but arguably also the more powerful language for statistical computing, many statisticians and especially econometricians continue to use and teach Stata. One reason is that they were first introduced to Stata, giving rise to a classic case of path-dependent development. When I found myself in exactly that situation two years ago, I collected a number of R commands and their Stata equivalents (or vice versa) in a pdf. Maybe it will be of use to some of you.


May 03 2017 06:07:20.

EAEPE Research Area for Complexity Economics established

The EAEPE (European Association for Evolutionary Political Economy) has established a new Research Area [Q] for complexity economics. Together with Magda Fontana from Torino and Wolfram Elsner from Bremen, I will serve as Research Area Coordinator for this branch.

It is our hope to promote complexity economics, to bring more scholars working in this exciting and promising field into the EAEPE, and to encourage cooperation with scholars in existing Research Areas such as simulation (RA [S]), networks (RA [X]), and technological change (RA [D]).

Research Area [Q] will already participate in the 2017 annual conference in Budapest. Please see the research-area-specific call for papers.


May 02 2017 21:21:19.

Stack conversion for image files

Suppose you need to convert a large number of images and do not want to do it by hand. This can be done with simple shell scripts using ImageMagick's convert. Say you want to convert a stack of files from png to jpg (replace the pattern appropriately):
    #!/bin/bash
    # Convert all png files in the current directory to jpg,
    # flattening any transparency onto a white background
    for f in *.png
    do 
        fnew="${f%.png}.jpg"
        convert "$f" -background white -flatten "$fnew"
    done
... or the other way around (jpg to png). And maybe you also want to replace black areas in those files with transparent areas at the same time (which may help make presentations look fancier):
    #!/bin/bash
    # Convert all jpg files in the current directory to png,
    # turning black areas transparent
    for f in *.jpg
    do 
        fnew="${f%.jpg}.png"
        convert "$f" -transparent black "$fnew"
    done
Or maybe you just want to rename a stack of files (replace the filename_pattern, text_to_replace, and replacement_text placeholders appropriately):
    #!/bin/bash
    # Rename all matching files, replacing part of each filename
    for f in filename_pattern*.pdf
    do 
        nf=$(echo "$f" | sed 's/text_to_replace/replacement_text/')
        mv "$f" "$nf"
    done
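To use one of these, save it under a name of your choice, make it executable, and run it from the directory containing the files; for instance, assuming the first script was saved as png2jpg.sh (a name chosen just for this example):

    # png2jpg.sh is a hypothetical name for the png-to-jpg script above
    chmod +x png2jpg.sh
    ./png2jpg.sh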


May 02 2017 04:15:56.

How to join pdf files with a certain filename pattern

This can be done with a simple Python script like the one below. It works on Linux, perhaps also on BSD and Mac, and, less likely, on Windows:
    #!/usr/bin/env python
    
    import glob, sys

    # Import the appropriate pyPDF for python2 or python3 respectively
    try:
        from pyPdf import PdfFileReader, PdfFileWriter
    except ImportError:
        from PyPDF2 import PdfFileReader, PdfFileWriter
    
    # Attempt to apply pattern supplied as argument and generate list of pdfs
    if len(sys.argv) > 1:
        pdflist = glob.glob(sys.argv[1])
        print(pdflist)
        print(sys.argv[1])
    else:
        pdflist = glob.glob("*.pdf")
        print(pdflist)
        print("no pattern given, you may give a pattern for pdfs to include;")
        print("keep in mind to quote it so that the shell does not expand *:")
        print("    python joinpdf.py \"*.pdf\"")
        print("or:")
        print("    python joinpdf.py \"pattern-*-pattern.pdf\"")
    
    # Join the pdfs in alphabetical order
    pdflist.sort()
    output = PdfFileWriter() 
    
    for pdf in pdflist:
        # Read pdf files one by one and collect all pages into output
        reader = PdfFileReader(open(pdf, "rb")) 
        for i in range(reader.getNumPages()):
            output.addPage(reader.getPage(i))
    
    # Write joined pdf
    outputStream = open("joined.pdf", "wb") 
    output.write(outputStream) 
    outputStream.close()
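Saved as joinpdf.py (the name the script's own usage message assumes), it can then be run with a quoted pattern, for example:

    # collects every pdf in the current directory into joined.pdf
    python joinpdf.py "*.pdf"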


May 01 2017 11:56:27.

How to run a shell script in a loop until it succeeds

I admit, there is only a limited number of use cases for this. But it may sometimes happen that you want to run a script in a loop again and again until it succeeds. Say you are on an unstable network and are trying to obtain an IP address using dhcpcd:
    #!/bin/bash
    
    # Initialise the exit code to a failure so that the loop runs at least once
    false 
    exitcode=$? 
    while [[ $exitcode -ne 0 ]] 
    do
        # Report the last exit code and try again
        echo $exitcode 
        dhcpcd wlan0 
        exitcode=$?
    done
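A more compact equivalent uses bash's until loop. This is a sketch; the one-second pause between attempts is an addition of mine to avoid hammering the network:

    #!/bin/bash
    
    # Retry dhcpcd until it exits with status 0, pausing briefly in between
    until dhcpcd wlan0
    do
        sleep 1
    done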