Using Python’s “timeit” Module to Benchmark Functions Directly (Instead of Passing in a String to be Executed)

All the basic examples for Python's timeit module show strings being executed. This leads to, in my opinion, somewhat convoluted code such as:

#! /usr/bin/env python

import timeit

def f():
    pass

if __name__ == "__main__":
    timer = timeit.Timer("__main__.f()", "import __main__")
    result = timer.repeat(repeat=100, number=100000)
    print("{:8.6f}".format(min(result)))

For some reason, the fact that you can pass a callable directly is (again, in my opinion) only obscurely documented. But doing so makes things so much cleaner:

#! /usr/bin/env python

import timeit

def f():
    pass

if __name__ == "__main__":
    timer = timeit.Timer(f)
    result = timer.repeat(repeat=100, number=100000)
    print("{:8.6f}".format(min(result)))

Much more elegant, right?

One puzzling wrinkle is Heisenbug-ish (i.e., the act of observing affects the outcome): the second version consistently and repeatedly produces faster timings. I can see differences in overall benchmark script execution time, due to differences in how overhead resources are allocated and loaded, but I would hope the actual performance timings would be invariant to this. Maybe with "real" code, instead of a dummy execution body, things will be more consistent? Or is this a real issue?
