Saturday, March 21, 2015

Generating full stack traces for exceptions in Python.

My last few blog posts have been about monkey patching. I will be continuing with more posts on that topic, but I am going to take a slight detour away from that topic in this post to talk about a problem related to capturing stack trace information for exceptions in Python applications.

This does still have some relevance to monkey patching, in as much as one of the reasons you may want to monkey patch a Python application is to add in wrappers which will intercept the details of an exception raised deep within an application. You might want to do this where the exception would otherwise be caught by the application and translated into a different exception, with loss of information about the inner exception, or where, in a web application, it would result in a generic HTTP 500 response with no useful information captured. So monkey patching can be a useful debugging tool where it may not be convenient to modify the original source code of the application.

Tracebacks for exceptions

To get started, let's consider the following Python script:

def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3():
    function2()
def function4():
    function3()
def function5():
    function4()
function5()

If we run this script we will get:

Traceback (most recent call last):
  File "generate-1.py", line 16, in <module>
    function5()
  File "generate-1.py", line 14, in function5
    function4()
  File "generate-1.py", line 11, in function4
    function3()
  File "generate-1.py", line 8, in function3
    function2()
  File "generate-1.py", line 5, in function2
    function1()
  File "generate-1.py", line 2, in function1
    raise RuntimeError('xxx')
RuntimeError: xxx

In this case we have an exception occurring which was never actually caught within the script itself and is propagated all the way up to the top level, causing the script to be terminated and the traceback printed.

When the traceback was printed, it showed all stack frames from the top level all the way down to the point where the exception occurred.

Now consider the Python script:

import traceback
def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3():
    try:
      function2()
    except Exception:
      traceback.print_exc()
def function4():
    function3()
def function5():
    function4()
function5()

In this script we have a 'try/except' block halfway down the sequence of calls. The call to 'function2()' is made within the 'try' block and when the exception is raised, it is handled within the 'except' block. At that point we use the 'traceback.print_exc()' function to output the details of the exception, but then let the script continue on normally to completion.

For this Python script the output is:

Traceback (most recent call last):
  File "generate-2.py", line 11, in function3
    function2()
  File "generate-2.py", line 7, in function2
    function1()
  File "generate-2.py", line 4, in function1
    raise RuntimeError('xxx')
RuntimeError: xxx

What you see here though is that we lose information about the outer stack frames for the sequence of calls that led down to the point where the 'try/except' block existed.

When we want to capture details of an exception for logging purposes so as to later debug an issue, this loss of information can make it harder to debug a problem if the function containing the 'try/except' block could be called from multiple places.

How then can we capture the outer stack frames so we have that additional context?

Capturing the current stack

There are a number of ways of obtaining information about the current stack. If we are just wanting to dump out the current stack to a log then we can use 'traceback.print_stack()'.

import traceback
def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3():
    try:
        function2()
    except Exception:
        traceback.print_stack()
        print '--------------'
        traceback.print_exc()
def function4():
    function3()
def function5():
    function4()
function5()

Run this variant of the Python script and we now get:

  File "generate-3.py", line 23, in <module>
    function5()
  File "generate-3.py", line 21, in function5
    function4()
  File "generate-3.py", line 18, in function4
    function3()
  File "generate-3.py", line 13, in function3
    traceback.print_stack()
--------------
Traceback (most recent call last):
  File "generate-3.py", line 11, in function3
    function2()
  File "generate-3.py", line 7, in function2
    function1()
  File "generate-3.py", line 4, in function1
    raise RuntimeError('xxx')
RuntimeError: xxx

So we now have the inner stack frames corresponding to the exception traceback, as well as those outer stack frames corresponding to the current stack. From this we can presumably now join these two sets of stack frames together and get a complete stack trace for where the exception occurred.

If you look closely though, you may notice something: there is actually an overlap in the stack frames which are shown for each, plus the function we called to print the current stack is also shown.

In the case of the overlap the issue is that in the inner stack frames from the traceback, it shows an execution point in 'function3()' of line 11. This corresponds to the point where 'function2()' was called within the 'try' block and in which the exception occurred.

At the same time, the outer stack frames from the current execution stack show line 13 in 'function3()', which is the point within the 'except' block where we called 'traceback.print_stack()' to display the current stack.

So the top most stack frame from the traceback is actually what we want, and we need to ignore the bottom most two stack frames from the current stack if we were to join these together.

Now although the output of these two functions can be directed to any file-like object, and thus an instance of 'StringIO' could be used to capture the output, we would still need to break apart the formatted text, drop certain parts and rearrange others to get the final desired result.
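For example, capturing the formatted traceback rather than having it written to standard error looks like this (shown with Python 3's 'io.StringIO'; in Python 2 this was the 'StringIO' module):

```python
import io
import traceback

def failing():
    raise RuntimeError('xxx')

try:
    failing()
except Exception:
    # Redirect the formatted traceback into a buffer instead of
    # letting it go to standard error.
    buffer = io.StringIO()
    traceback.print_exc(file=buffer)
    output = buffer.getvalue()
```

The result is one opaque block of text which then needs to be parsed apart again.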

Dealing with such preformatted output could therefore be a pain, especially if what we really wanted was the raw information about the filename, line number, function and potentially the code snippet. What other options therefore exist for getting such raw information?

Using the inspect module

When needing to perform introspection or otherwise derive information about Python objects, the module you want to use is the 'inspect' module. For the case of getting information about the current exception and current stack, the two functions you can use are 'inspect.trace()' and 'inspect.stack()'. Using these we can rewrite our Python script as:

import inspect
def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3():
    try:
        function2()
    except Exception:
        for item in reversed(inspect.stack()):
            print item[1:]
        print '--------------'
        for item in inspect.trace():
            print item[1:]
def function4():
    function3()
def function5():
    function4()
function5()

This time we get:

('generate-4.py', 25, '<module>', ['function5()\n'], 0)
('generate-4.py', 23, 'function5', ['    function4()\n'], 0)
('generate-4.py', 20, 'function4', ['    function3()\n'], 0)
('generate-4.py', 13, 'function3', ['        for item in reversed(inspect.stack()):\n'], 0)
--------------
('generate-4.py', 11, 'function3', ['        function2()\n'], 0)
('generate-4.py', 7, 'function2', ['    function1()\n'], 0)
('generate-4.py', 4, 'function1', ["    raise RuntimeError('xxx')\n"], 0)

So these functions provide us with the raw information rather than preformatted text, thus making it easier to process. For each stack frame we also get a reference to the frame object itself, but since we didn't care about that we skipped it when displaying each frame.

Since we might want to generate such a combined stack trace in multiple places, we should obviously separate this out into a function of its own.

import inspect
def print_full_stack():
    print 'Traceback (most recent call last):'
    for item in reversed(inspect.stack()[2:]):
        print '  File "{1}", line {2}, in {3}\n'.format(*item),
        for line in item[4]:
            print '    ' + line.lstrip(),
    for item in inspect.trace():
        print '  File "{1}", line {2}, in {3}\n'.format(*item),
        for line in item[4]:
            print '    ' + line.lstrip(),
def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3():
    try:
        function2()
    except Exception:
        print_full_stack()
def function4():
    function3()
def function5():
    function4()
function5()

The final result would now be:

Traceback (most recent call last):
  File "generate-5.py", line 32, in <module>
    function5()
  File "generate-5.py", line 30, in function5
    function4()
  File "generate-5.py", line 27, in function4
    function3()
  File "generate-5.py", line 22, in function3
    function2()
  File "generate-5.py", line 18, in function2
    function1()
  File "generate-5.py", line 15, in function1
    raise RuntimeError('xxx')

Using the exception traceback

We are done, right? No.

In this case we have used functions from the 'inspect' module that rely on being called directly from within the 'except' block.

That is, for generating the outer stack frames for the current stack we always assume that we need to drop two stack frames from the result of calling 'inspect.stack()'.

For the inner stack frames from the exception, the 'inspect.trace()' function relies on there being an exception which is currently being handled.
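We can verify this behaviour directly (a Python 3 sketch; in Python 3 the records returned are 'FrameInfo' named tuples exposing a 'function' attribute):

```python
import inspect

# Outside of any 'except' block there is no exception currently
# being handled, so inspect.trace() has nothing to report.
no_exception_trace = inspect.trace()

def demo():
    try:
        raise RuntimeError('xxx')
    except Exception:
        # Inside the handler, inspect.trace() yields the inner
        # frame records for the exception being handled.
        return [record.function for record in inspect.trace()]

handled_functions = demo()
```

With no active exception the first call returns an empty list, while inside the handler the single inner frame for 'demo' is reported.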

This assumption that we should skip two stack frames for the current stack is a little bit fragile. For example, consider the case where we don't actually call 'print_full_stack()' within the 'except' block itself.

def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3a():
    print_full_stack()
def function3():
    try:
        function2()
    except Exception:
        function3a()
def function4():
    function3()
def function5():
    function4()
function5()

The result here is:

Traceback (most recent call last):
  File "generate-6.py", line 35, in <module>
    function5()
  File "generate-6.py", line 33, in function5
    function4()
  File "generate-6.py", line 30, in function4
    function3()
  File "generate-6.py", line 27, in function3
    function3a()
  File "generate-6.py", line 25, in function3
    function2()
  File "generate-6.py", line 18, in function2
    function1()
  File "generate-6.py", line 15, in function1
    raise RuntimeError('xxx')

As can be seen, we actually end up with an additional stack frame being inserted corresponding to 'function3a()' which we called within the 'except' block and which in turn called 'print_full_stack()'.

To ensure we do the right thing here we need to look at what 'inspect.stack()' and 'inspect.trace()' actually do.

def stack(context=1):
    """Return a list of records for the stack above the caller's frame."""
    return getouterframes(sys._getframe(1), context)
def trace(context=1):
    """Return a list of records for the stack below the current exception."""
    return getinnerframes(sys.exc_info()[2], context)

So the problem we have with the extra stack frame is that 'inspect.stack()' uses 'sys._getframe()' to grab the current stack. This is correct and what it is intended to do, but not really what we want. What we instead want is the outer stack frames corresponding to where the exception was caught.

As it turns out, this is available as the 'tb_frame' attribute on the traceback object for the exception. Learning from how these two functions are implemented, we can therefore change our function to print the full stack.

import sys
import inspect
def print_full_stack(tb=None):
    if tb is None:
        tb = sys.exc_info()[2]
    print 'Traceback (most recent call last):'
    for item in reversed(inspect.getouterframes(tb.tb_frame)[1:]):
        print '  File "{1}", line {2}, in {3}\n'.format(*item),
        for line in item[4]:
            print '    ' + line.lstrip(),
    for item in inspect.getinnerframes(tb):
        print '  File "{1}", line {2}, in {3}\n'.format(*item),
        for line in item[4]:
            print '    ' + line.lstrip(),
def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3a():
    print_full_stack()
def function3():
    try:
        function2()
    except Exception:
        function3a()
def function4():
    function3()
def function5():
    function4()
function5()

We are now back to the desired result.

Traceback (most recent call last):
  File "generate-7.py", line 39, in <module>
    function5()
  File "generate-7.py", line 37, in function5
    function4()
  File "generate-7.py", line 34, in function4
    function3()
  File "generate-7.py", line 29, in function3
    function2()
  File "generate-7.py", line 22, in function2
    function1()
  File "generate-7.py", line 19, in function1
    raise RuntimeError('xxx')
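As an aside, the same joining trick can also be sketched using only the standard 'traceback' module (Python 3 syntax; 'format_full_stack' is a hypothetical name): 'format_stack()' accepts a frame to walk outwards from, while 'format_tb()' formats the inner frames recorded on the traceback object.

```python
import sys
import traceback

def format_full_stack(tb=None):
    if tb is None:
        tb = sys.exc_info()[2]
    lines = ['Traceback (most recent call last):\n']
    # Outer frames, starting above the frame where the exception was
    # caught, so that frame isn't duplicated in the output.
    lines.extend(traceback.format_stack(tb.tb_frame.f_back))
    # Inner frames from the traceback itself.
    lines.extend(traceback.format_tb(tb))
    return ''.join(lines)

def inner():
    raise RuntimeError('xxx')

def outer():
    try:
        inner()
    except Exception:
        return format_full_stack()

full_stack = outer()
```

The resulting string lists the outer frames first and the inner frames last, matching the layout of a normal traceback.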

Using a saved traceback

In making this last modification we actually implemented 'print_full_stack()' to optionally accept an existing traceback. If none was supplied then we would instead use the traceback for the current exception being handled.

It is likely a rare situation where it would be required, but this allows one to pass in a traceback object which had been saved away and retained beyond the life of the 'try/except' block which generated it.

Be aware though that doing this can generate some surprising results.

import sys
def function1():
    raise RuntimeError('xxx')
def function2():
    function1()
def function3():
    try:
        function2()
    except Exception:
        return sys.exc_info()[2]
def function4():
    tb = function3()
    print_full_stack(tb)
def function5():
    function4()
function5()

In this case we return the traceback to an outer scope and only within that outer function attempt to print out the full stack for the exception.

Traceback (most recent call last):
  File "generate-8.py", line 37, in <module>
    function5()
  File "generate-8.py", line 35, in function5
    function4()
  File "generate-8.py", line 32, in function4
    print_full_stack(tb)
  File "generate-8.py", line 26, in function3
    function2()
  File "generate-8.py", line 22, in function2
    function1()
  File "generate-8.py", line 19, in function1
    raise RuntimeError('xxx')

The problem here is that in 'function4()' rather than seeing the line where the call to 'function3()' was made, we see where the call to 'print_full_stack()' is made.

The reason for this is that although the traceback contains a snapshot of information from the current stack at the time of the exception, this only extends back as far as the 'try/except' block.

When we are accessing 'tb.tb_frame' and getting the outer frames, it is still accessing potentially active stack frames for any currently executing code.

So what has happened is that in looking at the stack frame for 'function4()', it is picking up the current execution line number at that point in time, which has shifted from when the original exception occurred. That is, control returned back into 'function4()' and execution progressed to the next line, the call to 'print_full_stack()'.

Although the ability to print the full stack trace for the exception is useful, it is only reliable if called within the same function in which the 'try/except' block existed. If you return the traceback object to an outer function and then try to produce the full stack, line number information in the outer stack frames can be wrong, due to the code execution point within those functions shifting if subsequent code in those functions has been executed since the exception occurred.

If anyone knows a way around this, beyond creating a snapshot of the full stack within the same function as where the 'try/except' occurred, I would be interested to hear how. My current understanding is that there isn't any way and it is just a limitation one has to live with in that case.
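For completeness, that snapshot approach can be sketched with the standard 'traceback' module (Python 3 syntax): 'extract_stack()' and 'extract_tb()' copy out frame summaries, so the line numbers recorded are frozen at the moment of capture rather than read from the live frames later.

```python
import sys
import traceback

def inner():
    raise RuntimeError('xxx')

def catcher():
    try:
        inner()
    except Exception:
        tb = sys.exc_info()[2]
        # Snapshot both halves of the stack immediately, while still
        # inside the function containing the try/except block.
        outer_frames = traceback.extract_stack(tb.tb_frame.f_back)
        inner_frames = traceback.extract_tb(tb)
        return list(outer_frames) + list(inner_frames)

def caller():
    snapshot = catcher()
    # Execution has moved on to this line, but the snapshot still
    # records the line numbers as they were at the time of capture.
    return snapshot

frames = caller()
```

Because the snapshot holds plain frame summaries rather than live frame objects, later execution in the outer functions can no longer shift the recorded line numbers.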

Wednesday, March 18, 2015

Ordering issues when monkey patching in Python.

In my recent post about safely applying monkey patches in Python, I mentioned how one of the issues that arises is when a monkey patch is applied. Specifically, if the module you need to monkey patch has already been imported and was being used by other code, that code could have created a local reference to the target function you wish to wrap, in its own namespace. So although your monkey patch would work fine where the original function was used directly from the module, it would not cover where it was used via a local reference.
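The problem is easy to demonstrate without any real module at all (a Python 3 sketch using a synthetic module object; all the names here are hypothetical):

```python
import types

# A stand-in "module" with a function on it.
mod = types.ModuleType('mod')

def original():
    return 'original'

mod.func = original

# Some other code has already done "from mod import func",
# creating a local reference in its own namespace.
local_ref = mod.func

# Now monkey patch the attribute on the module.
def patched():
    return 'patched'

mod.func = patched
```

After the patch, access via 'mod.func' sees the replacement, but the previously created local reference still calls the original function.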

Coincidentally, Ned Batchelder recently posted about using monkey patching to debug an issue where temporary directories were not being cleaned up properly. Ned described this exact issue in relation to wanting to monkey patch the 'mkdtemp()' function from the 'tempfile' module. In that case he was able to find an alternate place within the private implementation for the module to patch so as to avoid the problem. Using some internal function like this may not always be possible however.

What I want to start discussing with this post is mechanisms one can use from wrapt to deal with this issue of ordering. A major part of the solution is what are called post import hooks. This is a mechanism which was described in PEP 369 and although it never made it into the Python core, it is still possible to graft this ability into Python using existing APIs. From this we can then add additional capabilities for discovering monkey patching code and automatically apply it when modules are imported, before other modules get the module and so before they can create a reference to a function in their own namespace.

Post import hook mechanism

In PEP 369, a primary use case presented was illustrated by the example:

import imp

@imp.when_imported('decimal')
def register(decimal):
    Inexact.register(decimal.Decimal)

The basic idea is that when this code was seen, it would cause a callback to be registered within the Python import system such that when the 'decimal' module was imported, the 'register()' function to which the decorator had been applied would be called. The argument to the 'register()' function would be the reference to the module the registration had been against. The function could then perform some action against the module before it was returned to whatever code originally requested the import.

Instead of using the '@imp.when_imported' decorator, one could also explicitly use the 'imp.register_post_import_hook()' function to register a post import hook.

import imp

def register(decimal):
    Inexact.register(decimal.Decimal)

imp.register_post_import_hook(register, 'decimal')

Although PEP 369 was never incorporated into Python, the wrapt module provides implementations for both the decorator and the function, but within the 'wrapt' module rather than 'imp'.
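To give a feel for the mechanism, here is a toy sketch of post import hooks built by wrapping 'builtins.__import__' (Python 3 syntax). This is not how wrapt actually implements it, as wrapt hooks into the import system properly and deals with threading and re-entrancy; it is purely illustrative.

```python
import builtins
import sys

# Registry of callbacks keyed by module name (toy sketch only).
_post_import_hooks = {}
_original_import = builtins.__import__

def register_post_import_hook(hook, name):
    if name in sys.modules:
        # Module already imported; fire the hook immediately.
        hook(sys.modules[name])
    else:
        _post_import_hooks.setdefault(name, []).append(hook)

def _import(name, *args, **kwargs):
    already_imported = name in sys.modules
    module = _original_import(name, *args, **kwargs)
    if not already_imported:
        # First time this module has been imported; fire any hooks.
        for hook in _post_import_hooks.pop(name, []):
            hook(sys.modules[name])
    return module

builtins.__import__ = _import

events = []
register_post_import_hook(lambda module: events.append(module.__name__), 'colorsys')
import colorsys

# Restore the original import function now the demo is done.
builtins.__import__ = _original_import
```

Either way the hook fires exactly once: immediately if 'colorsys' happened to already be imported, otherwise at the point the import occurs.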

Now, neither the decorator nor the function alone really solved the ordering issue. That is, you still had the problem that these could be triggered after the target module had already been imported. In that case the post import hook function would still be called, albeit, for our case, too late to get in before the reference to the function we want to monkey patch had been created in a different namespace.

The simplest solution to this problem is to modify the main Python script for your application and setup all the post import hook registrations you need as the absolute very first thing that is done. That is, before any other modules are imported from your application or even modules from the standard library used to parse any command line arguments.

Even if you are able to do this, because the registration functions require an actual callable, it does mean you are preloading all the code that performs the monkey patches. This could be a problem if that code in turn had to import further modules, as the state of your application may not yet have been set up such that those imports would succeed.

They say though that one level of indirection can solve all problems, and this is an example of where that principle can be applied. That is, rather than import the monkey patching code up front, you can set up a registration which will lazily load the monkey patching code only if the module to be patched is imported, and then execute it.

import sys
from wrapt import register_post_import_hook
def load_and_execute(name):
    def _load_and_execute(target_module):
        __import__(name)
        patch_module = sys.modules[name]
        getattr(patch_module, 'apply_patch')(target_module)
    return _load_and_execute
register_post_import_hook(load_and_execute('patch_tempfile'), 'tempfile')

In the module file 'patch_tempfile.py' we would now have:

from wrapt import wrap_function_wrapper
def _mkdtemp_wrapper(wrapped, instance, args, kwargs):
    print 'calling', wrapped.__name__
    return wrapped(*args, **kwargs)
def apply_patch(module):
    print 'patching', module.__name__
    wrap_function_wrapper(module, 'mkdtemp', _mkdtemp_wrapper)

Running the first script with the interactive interpreter so as to leave us in the interpreter, we can then show what happens when we import the 'tempfile' module and execute the 'mkdtemp()' function.

$ python -i lazyloader.py
>>> import tempfile
patching tempfile
>>> tempfile.mkdtemp()
calling mkdtemp
'/var/folders/0p/4vcv19pj5d72m_bx0h40sw340000gp/T/tmpfB8r20'

In other words, unlike how most monkey patching is done, we aren't forcibly importing a module in order to apply the monkey patches on the basis it might be used. Instead the monkey patching code stays dormant and unused until the target module is later imported. If the target module is never imported, the monkey patch code for that module is itself not even imported.

Discovery of post import hooks

Post import hooks as described provide a slightly better way of setting up monkey patches so they are applied. This is because they are only activated if the target module containing the function to be patched is even imported. This avoids unnecessarily importing modules you may not even use, and which otherwise would increase memory usage of your application.

Ordering is still important and as a result it is important to ensure that any post import hook registrations are setup before any other modules are imported. You also need to modify your application code every time you want to change what monkey patches are applied. This latter point could be inconvenient if only wanting to add monkey patches infrequently for the purposes of debugging issues.

A solution to the latter issue is to separate out monkey patches into separately installed modules and use a registration mechanism to announce their availability. Python applications could then have common boilerplate code executed at the very start which, based on supplied configuration, discovers what monkey patches should be applied. The registration mechanism would then allow the monkey patch modules to be discovered at runtime.

One particular registration mechanism which can be used here is 'setuptools' entry points. Using this we can package up monkey patches so they could be separately installed ready for use. The structure of such a package would be:

setup.py
src/__init__.py
src/tempfile_debugging.py

The 'setup.py' file for this package will be:

from setuptools import setup
NAME = 'wrapt_patches.tempfile_debugging'
def patch_module(module, function=None):
    function = function or 'patch_%s' % module.replace('.', '_')
    return '%s = %s:%s' % (module, NAME, function)
ENTRY_POINTS = [
    patch_module('tempfile'),
]
setup_kwargs = dict(
    name = NAME,
    version = '0.1',
    packages = ['wrapt_patches'],
    package_dir = {'wrapt_patches': 'src'},
    entry_points = { NAME: ENTRY_POINTS },
)
setup(**setup_kwargs)

As a convention so that our monkey patch modules are easily identifiable we use a namespace package. The parent package in this case will be 'wrapt_patches' since we are working with wrapt specifically.

The name for this specific package will be 'wrapt_patches.tempfile_debugging' as the theoretical intent is that we are going to create some monkey patches to help us debug use of the 'tempfile' module, along the lines of what Ned described in his blog post.

The key part of the 'setup.py' file is the definition of the 'entry_points'. This will be set to a dictionary mapping the package name to a list of definitions listing what Python modules this package contains monkey patches for.
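To make that mapping concrete, the 'patch_module()' helper from the 'setup.py' expands each module name into a standard entry point specification of the form 'module = package:function':

```python
NAME = 'wrapt_patches.tempfile_debugging'

def patch_module(module, function=None):
    # Derive the hook function name from the module name when it
    # isn't given explicitly.
    function = function or 'patch_%s' % module.replace('.', '_')
    return '%s = %s:%s' % (module, NAME, function)

entry_point = patch_module('tempfile')
```

So registering a patch for 'tempfile' produces the entry point 'tempfile = wrapt_patches.tempfile_debugging:patch_tempfile', which names the module to be patched on the left and the hook function to call on the right.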

The 'src/__init__.py' file will then contain:

import pkgutil
__path__ = pkgutil.extend_path(__path__, __name__)

as is required when creating a namespace package.

Finally, the monkey patches will actually be contained in 'src/tempfile_debugging.py' and for now is much like what we had before.

from wrapt import wrap_function_wrapper
def _mkdtemp_wrapper(wrapped, instance, args, kwargs):
    print 'calling', wrapped.__name__
    return wrapped(*args, **kwargs)
def patch_tempfile(module):
    print 'patching', module.__name__
    wrap_function_wrapper(module, 'mkdtemp', _mkdtemp_wrapper)

With the package defined we would install it into the Python installation or virtual environment being used.

In place now of the explicit registrations which we previously added at the very start of the Python application main script file, we would instead add:

import os
from wrapt import discover_post_import_hooks
patches = os.environ.get('WRAPT_PATCHES')
if patches:
    for name in patches.split(','):
        name = name.strip()
        if name:
            print 'discover', name
            discover_post_import_hooks(name)

If we were to run the application with no specific configuration to enable the monkey patches then nothing would happen. If however they were enabled, then they would be automatically discovered and applied as necessary.

$ WRAPT_PATCHES=wrapt_patches.tempfile_debugging python -i entrypoints.py
discover wrapt_patches.tempfile_debugging
>>> import tempfile
patching tempfile

Ideally, if PEP 369 ever did make it into the core of Python, a similar bootstrapping mechanism would be incorporated into Python itself so that it was possible to force registration of monkey patches very early during interpreter initialisation. Having this in place, we would have a guaranteed way of addressing the ordering issue when doing monkey patching.

As that doesn't exist right now, what we did in this case was modify our Python application to add the bootstrap code ourselves. This is fine where you control the Python application you want to be able to potentially apply monkey patches to, but what if you wanted to monkey patch a third party application and didn't want to have to modify its code? What are the options in that case?

As it turns out there are some tricks that can be used in that case. I will discuss such options for monkey patching a Python application you can't actually modify in my next blog post on this topic of monkey patching.

Thursday, March 12, 2015

Using wrapt to support testing of software.

When talking about unit testing in Python, one of the more popular packages used to assist in that task is the Mock package. I will no doubt be labelled as a heretic, but when I have tried to use it, it just doesn't seem to sit right with my way of thinking.

It may also just be that what I am trying to apply it to isn't a good fit. In what I want to test it usually isn't so much that I want to mock out lower layers, but more that I simply want to validate data being passed through to the next layer or otherwise modify results. In other words I usually still need the system as a whole to function end to end and possibly over an extended time.

So for the more complex testing I need to do I actually keep falling back on the monkey patching capabilities of wrapt. It may well just be that since I wrote wrapt that I am more familiar with its paradigm, or that I prefer the more explicit way that wrapt requires you to do things. Either way, for me at least wrapt helps me to get the job done quicker.

To explain a bit more about the monkey patching capabilities of wrapt, I am in this blog post going to show how some of the things you can do in Mock you can do with wrapt. Just keep in mind that I am an absolute novice when it comes to Mock and so I could also just be too dumb to understand how to use it properly for what I want to do easily.

Return values and side effects

If one is using Mock and you want to temporarily override the value returned by a method of a class when called, one way is to use:

from mock import Mock, patch
class ProductionClass(object):
    def method(self, a, b, c, key):
        print a, b, c, key
@patch(__name__+'.ProductionClass.method', return_value=3)
def test_method(mock_method):
    real = ProductionClass()
    result = real.method(3, 4, 5, key='value')
    mock_method.assert_called_with(3, 4, 5, key='value')
    assert result == 3

With what I have presented so far of the wrapt package, an equivalent way of doing this would be:

from wrapt import patch_function_wrapper
class ProductionClass(object):
    def method(self, a, b, c, key):
        print a, b, c, key
@patch_function_wrapper(__name__, 'ProductionClass.method')
def wrapper(wrapped, instance, args, kwargs):
    assert args == (3, 4, 5) and kwargs.get('key') == 'value'
    return 3
def test_method():
    real = ProductionClass()
    result = real.method(3, 4, 5, key='value')
    assert result == 3

An issue with this though is that the 'wrapt.patch_function_wrapper()' function I previously described applies a permanent patch. This is okay where it does need to survive for the life of the process, but in the case of testing we usually want to only have a patch apply to the single unit test function being run at that time. So the patch should be removed at the end of that test and before the next function is called.

For that scenario, the wrapt package provides an alternate decorator, '@wrapt.transient_function_wrapper'. This can be used to create a monkey patch which is applied only for the duration of a call to the function the resulting decorator is applied to. We can therefore write the above as:

from wrapt import transient_function_wrapper
class ProductionClass(object):
    def method(self, a, b, c, key):
        print a, b, c, key
@transient_function_wrapper(__name__, 'ProductionClass.method')
def apply_ProductionClass_method_wrapper(wrapped, instance, args, kwargs):
    assert args == (3, 4, 5) and kwargs.get('key') == 'value'
    return 3
@apply_ProductionClass_method_wrapper
def test_method():
    real = ProductionClass()
    result = real.method(3, 4, 5, key='value')
    assert result == 3

Although this example shows how to return a substitute for the method being called, the more typical case is that I still want to call the original wrapped function. Thus, perhaps validating the arguments being passed in or the return value being passed back from the lower layers.

For this blog post when I tried to work out how to do that with Mock the general approach I came up with was the following.

from mock import Mock, patch
class ProductionClass(object):
    def method(self, a, b, c, key):
        print a, b, c, key
def wrapper(wrapped):
    def _wrapper(self, *args, **kwargs):
        assert args == (3, 4, 5) and kwargs.get('key') == 'value'
        return wrapped(self, *args, **kwargs)
    return _wrapper
@patch(__name__+'.ProductionClass.method', autospec=True,
        side_effect=wrapper(ProductionClass.method))
def test_method(mock_method):
    real = ProductionClass()
    result = real.method(3, 4, 5, key='value')

There were two tricks here. The first is the 'autospec=True' argument to '@patch' to have it perform method binding, and the second is the need to capture the original method from 'ProductionClass' before any mock had been applied to it, so that I could in turn call it when the side effect function for the mock was called.

No doubt someone will tell me that I am doing this all wrong and there is a simpler way, but that is the best I could come up with after 10 minutes of reading the Mock documentation.

When using wrapt to do the same thing, what is needed is little different from what was used when mocking the return value. This is because wrapt function wrappers work with both normal functions and methods, so nothing special has to be done when wrapping methods. Further, when the wrapt wrapper function is called, it is always passed the original function which was wrapped, so no magic is needed to stash that away.

from wrapt import transient_function_wrapper
class ProductionClass(object):
    def method(self, a, b, c, key):
        print a, b, c, key
@transient_function_wrapper(__name__, 'ProductionClass.method')
def apply_ProductionClass_method_wrapper(wrapped, instance, args, kwargs):
    assert args == (3, 4, 5) and kwargs.get('key') == 'value'
    return wrapped(*args, **kwargs)
@apply_ProductionClass_method_wrapper
def test_method():
    real = ProductionClass()
    result = real.method(3, 4, 5, key='value')

Using this ability to easily intercept a call to perform validation of data being passed, but still call the original, I can relatively easily create a whole bunch of decorators for performing validation on data as it is passed through different parts of the system. I can then stack up these decorators on any test function that I need to add them to.
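To make the stacking idea concrete, here is a plain-Python sketch of applying two such interception decorators to a single test function. The names here ('patch_temporarily', 'Database', 'Cache' and so on) are made up purely for illustration; with wrapt each decorator would instead be built with '@wrapt.transient_function_wrapper', which also handles the introspection and descriptor protocol details this naive version ignores.

```python
class Database(object):
    def query(self, sql):
        return ['row']

class Cache(object):
    def get(self, key):
        return None

def patch_temporarily(owner, name, make_wrapper):
    # Replace 'owner.name' with a wrapper for the duration of the
    # decorated test function only, restoring the original afterwards.
    def _decorator(test):
        def _test(*args, **kwargs):
            original = getattr(owner, name)
            setattr(owner, name, make_wrapper(original))
            try:
                return test(*args, **kwargs)
            finally:
                setattr(owner, name, original)
        return _test
    return _decorator

def checked_query(wrapped):
    def _wrapper(self, sql):
        assert sql == 'SELECT 1'    # validate data passing through
        return wrapped(self, sql)   # but still call the original
    return _wrapper

def checked_get(wrapped):
    def _wrapper(self, key):
        assert key == 'k'
        return wrapped(self, key)
    return _wrapper

@patch_temporarily(Database, 'query', checked_query)
@patch_temporarily(Cache, 'get', checked_get)
def test_both():
    assert Database().query('SELECT 1') == ['row']
    assert Cache().get('k') is None

test_both()
```

Because each decorator restores the original on the way out, the wrappers are gone again once the test function returns.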

Wrapping of return values

The above recipes cover being able to return a fake return value, returning the original, or some slight modification of the original where it is some primitive data type or collection. In some cases though I actually want to put a wrapper around the return value to modify how subsequent code interacts with it. 

The first example of this is where the wrapped function returns another function which would then be called by something higher up the call chain. Here I may want to put a wrapper around the returned function to allow me to then intercept when it is called.

In the case of using Mock I would do something like:

from mock import Mock, patch
def function():
    pass
class ProductionClass(object):
    def method(self, a, b, c, key):
        return function
def wrapper2(wrapped):
    def _wrapper2(*args, **kwargs):
        return wrapped(*args, **kwargs)
    return _wrapper2
def wrapper1(wrapped):
    def _wrapper1(self, *args, **kwargs):
        func = wrapped(self, *args, **kwargs)
        return Mock(side_effect=wrapper2(func))
    return _wrapper1
@patch(__name__+'.ProductionClass.method', autospec=True,
        side_effect=wrapper1(ProductionClass.method))
def test_method(mock_method):
    real = ProductionClass()
    func = real.method(3, 4, 5, key='value')
    result = func()

And with wrapt I would instead do:

from wrapt import transient_function_wrapper, function_wrapper
def function():
    pass
class ProductionClass(object):
    def method(self, a, b, c, key):
        return function
@function_wrapper
def result_function_wrapper(wrapped, instance, args, kwargs):
    return wrapped(*args, **kwargs)
@transient_function_wrapper(__name__, 'ProductionClass.method')
def apply_ProductionClass_method_wrapper(wrapped, instance, args, kwargs):
    return result_function_wrapper(wrapped(*args, **kwargs))
@apply_ProductionClass_method_wrapper
def test_method():
    real = ProductionClass()
    func = real.method(3, 4, 5, key='value')
    result = func()

In this example I have used a new decorator called '@wrapt.function_wrapper'. I could also have used '@wrapt.decorator' in this example. The '@wrapt.function_wrapper' decorator is actually just a cut down version of '@wrapt.decorator', lacking some of the bells and whistles that one doesn't generally need when doing explicit monkey patching, but otherwise it can be used in the same way.

I can therefore apply a wrapper around a function returned as a result. I could even apply the same principle where a function is being passed in as an argument to some other function.

A different scenario to a function being returned is where an instance of a class is returned. In this case I may want to apply a wrapper around a specific method of just that instance of the class.

With the Mock library it again comes down to using its 'Mock' class and having to apply it in different ways to achieve the result you want. I am going to step back from Mock now though and just focus on how one can do things using wrapt.

So, depending on the requirements there are a couple of ways one could do this with wrapt.

The first approach is to replace the method on the instance directly with a wrapper which encapsulates the original method.

from wrapt import transient_function_wrapper, function_wrapper
class StorageClass(object):
    def run(self):
        pass
storage = StorageClass()
class ProductionClass(object):
    def method(self, a, b, c, key):
        return storage
@function_wrapper
def run_method_wrapper(wrapped, instance, args, kwargs):
    return wrapped(*args, **kwargs)
@transient_function_wrapper(__name__, 'ProductionClass.method')
def apply_ProductionClass_method_wrapper(wrapped, instance, args, kwargs):
    storage = wrapped(*args, **kwargs)
    storage.run = run_method_wrapper(storage.run)
    return storage
@apply_ProductionClass_method_wrapper
def test_method():
    real = ProductionClass()
    data = real.method(3, 4, 5, key='value')
    result = data.run()

This will create the desired result but in this example actually turns out to be a bad way of doing it.

The problem in this case is that the object being returned is one which has a life time beyond the test. That is, we are modifying an object stored at global scope and which might be used for a different test. By simply replacing the method on the instance, we have made a permanent change.

This would be okay if it was a temporary instance of a class created on demand just for that one call, but not where it is persistent like in this case.

We can't therefore modify the instance itself, but need to wrap the instance in some other way to intercept the method call.

To do this we make use of what is called an object proxy. This is a special object type which we can create an instance of to wrap another object. When accessing the proxy object, any attempts to access attributes will actually return the attribute from the wrapped object. Similarly, calling a method on the proxy will call the method on the wrapped object.

Having a distinct proxy object though allows us to change the behaviour on the proxy object and so change how code interacts with the wrapped object. We can therefore avoid needing to change the original object itself.
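The attribute forwarding behaviour can be sketched in a few lines of plain Python using '__getattr__'. This is only an illustration of the idea; the real 'wrapt.ObjectProxy' is far more thorough, also handling special methods, introspection and more.

```python
class SimpleProxy(object):
    # A much simplified sketch of an object proxy. Attribute access
    # which is not satisfied by the proxy itself is forwarded to the
    # wrapped object, including lookup of bound methods.
    def __init__(self, wrapped):
        self.__wrapped__ = wrapped

    def __getattr__(self, name):
        # Only called for attributes not found on the proxy instance.
        return getattr(self.__wrapped__, name)

class Target(object):
    def run(self):
        return 'running'

proxy = SimpleProxy(Target())

# Calling a method on the proxy calls it on the wrapped object.
assert proxy.run() == 'running'
```

The wrapped object itself is never modified, which is exactly the property we need here.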

For this example what we can therefore do is:

from wrapt import transient_function_wrapper, ObjectProxy
class StorageClass(object):
    def run(self):
        pass
storage = StorageClass()
class ProductionClass(object):
    def method(self, a, b, c, key):
        return storage
class StorageClassProxy(ObjectProxy):
    def run(self):
        return self.__wrapped__.run()
@transient_function_wrapper(__name__, 'ProductionClass.method')
def apply_ProductionClass_method_wrapper(wrapped, instance, args, kwargs):
    storage = wrapped(*args, **kwargs)
    return StorageClassProxy(storage)
@apply_ProductionClass_method_wrapper
def test_method():
    real = ProductionClass()
    data = real.method(3, 4, 5, key='value')
    result = data.run()

That is, we define the 'run()' method on the proxy object to intercept the call of the same method on the original object. We can then proceed to return fake values, validate arguments or results, or modify them as necessary.

With the proxy we can even intercept access to an attribute of the original object by adding a property to the proxy object.

from wrapt import transient_function_wrapper, ObjectProxy
class StorageClass(object):
    def __init__(self):
        self.name = 'name'
storage = StorageClass()
class ProductionClass(object):
    def method(self, a, b, c, key):
        return storage
class StorageClassProxy(ObjectProxy):
    @property
    def name(self):
        return self.__wrapped__.name
@transient_function_wrapper(__name__, 'ProductionClass.method')
def apply_ProductionClass_method_wrapper(wrapped, instance, args, kwargs):
    storage = wrapped(*args, **kwargs)
    return StorageClassProxy(storage)
@apply_ProductionClass_method_wrapper
def test_method():
    real = ProductionClass()
    data = real.method(3, 4, 5, key='value')
    assert data.name == 'name'

Building a better Mock

You might be saying at this point that Mock does a lot more than this. You might even want to point out how Mock can save away details about the call which can be checked later at the level of the test harness, rather than having to resort to raising assertion errors down in the wrappers themselves, which can be an issue if code catches the exceptions before you see them.
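As a reminder of what that call recording looks like, here is a minimal sketch using Mock's own API (the 'unittest.mock' module in Python 3, or the standalone 'mock' package in Python 2):

```python
try:
    from unittest.mock import Mock   # Python 3 standard library
except ImportError:
    from mock import Mock            # Python 2 'mock' package

m = Mock(return_value=3)

result = m(3, 4, 5, key='value')

# The call details are recorded on the mock itself, so the checks can
# live up in the test harness rather than as assertions buried inside
# a wrapper where a swallowed exception could hide a failure.
assert result == 3
m.assert_called_once_with(3, 4, 5, key='value')
assert m.call_args == ((3, 4, 5), {'key': 'value'})
```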

This is all true, but the goal at this point for wrapt has been to provide monkey patching mechanisms which do respect introspection, the descriptor protocol and other things besides. That I can use it for the type of testing I do is a bonus. 

You aren't limited to using just the basic building blocks themselves though and personally I think wrapt could be a great base on which to build a better Mock library for testing.

I therefore leave you with one final example to get you thinking about the ways this might be done if you are partial to the way that Mock does things.

from wrapt import transient_function_wrapper
class ProductionClass(object):
    def method(self, a, b, c, key):
        pass
def patch(module, name):
    def _decorator(wrapped):
        class Wrapper(object):
            @transient_function_wrapper(module, name)
            def __call__(self, wrapped, instance, args, kwargs):
                self.args = args
                self.kwargs = kwargs
                return wrapped(*args, **kwargs)
        wrapper = Wrapper()
        @wrapper
        def _wrapper():
            return wrapped(wrapper)
        return _wrapper
    return _decorator
@patch(__name__, 'ProductionClass.method')
def test_method(mock_method):
    real = ProductionClass()
    result = real.method(3, 4, 5, key='value')
    assert real.method.__name__ == 'method'
    assert mock_method.args == (3, 4, 5)
    assert mock_method.kwargs.get('key') == 'value'

So that is a quick run down of the main parts of the functionality provided by wrapt for doing monkey patching. There are a few other things, but that is in the main all you usually require. I use monkey patching to add instrumentation into existing code to support performance monitoring, but I have shown here how the same techniques can be used in writing tests for your code as an alternative to a package like Mock.

As I mentioned in my previous post though, one of the big problems with monkey patching is the order in which modules get imported relative to when the monkey patching is done. I will talk more about that issue in the next post.

Wednesday, March 11, 2015

Safely applying monkey patches in Python.

Monkey patching in Python is often seen as being one of those things you should never do. Some regard it as a useful necessity you can't avoid in order to patch bugs in third party code. Others will argue though that with so much software being Open Source these days, you should simply submit a fix to the upstream package maintainer.

Monkey patching has its uses well beyond just patching bugs though. The two most commonly used forms of monkey patching in Python, which you might not even equate with monkey patching, are decorators and the use of mocking libraries to assist in performing unit testing. Another less common case of monkey patching is to add instrumentation to existing Python code in order to add performance monitoring capabilities.

On the issue of decorators I wrote a quite detailed series of blog posts at the start of last year about where decorators can cause problems. The primary problem there was decorators which aren't implemented in a way which preserve proper introspection capabilities, and which don't preserve the correct semantics of the Python descriptor protocol when applied to methods of classes.

When one starts to talk about monkey patching arbitrary code, rather than simply applying decorators to your own code, both of these issues become even more important as you could quite easily interfere with the behaviour of the existing code you are monkey patching in unexpected ways.

This is especially the case when monkey patching methods of a class. This is because decorators are applied while the class definition is being constructed. When doing monkey patching you are coming in after the class definition already exists, and as a result you have to deal with a number of non-obvious problems.

Now when I went and wrote the blog posts last year on decorators it was effectively the result of what I learnt from implementing the wrapt package. Although that package is known as providing a way for creating well behaved decorators, that wasn't the primary aim in creating the package. The real reason for creating the package was actually to implement robust mechanisms for monkey patching code. It just so happened that the same underlying principles and mechanism required to safely do monkey patching apply to implementing the function wrappers required for decorators.

What I am going to do with this blog post is start to explain the monkey patching capabilities of the wrapt package.

Creating a decorator

Before we jump into monkey patching of arbitrary code we first need to recap how the wrapt package could be used to create a decorator. The primary pattern for this was:

import wrapt
import inspect 
@wrapt.decorator
def universal(wrapped, instance, args, kwargs):
    if instance is None:
        if inspect.isclass(wrapped):
            # Decorator was applied to a class.
            return wrapped(*args, **kwargs)
        else:
            # Decorator was applied to a function or staticmethod.
            return wrapped(*args, **kwargs)
    else:
        if inspect.isclass(instance):
            # Decorator was applied to a classmethod.
            return wrapped(*args, **kwargs)
        else:
            # Decorator was applied to an instancemethod.
            return wrapped(*args, **kwargs)

A special feature of the decorators that could be created by the wrapt package was that within the decorator you could determine the context the decorator was used in. That is, whether the decorator was applied to a class, a function or static method, a class method or an instance method.

For the case where the decorator was applied to an instance method, the instance of the class is provided as a separate argument. For a class method the separate argument is instead a reference to the class itself. In both cases these are separated out from 'args' and 'kwargs' so you do not need to fiddle around with extracting them yourself.

A decorator created using wrapt is therefore what I call a universal decorator. In other words, it is possible to create a single decorator implementation that can be used across functions, methods and classes and you can tell at the time of the call the scenario and adjust the behaviour of the decorator accordingly. You no longer have to create multiple implementations of a decorator and ensure that you are using the correct one in each scenario.

Using this decorator is then no different to any other way that decorators would be used.

class Example(object):

    @universal
    def name(self):
        return 'name'

For those who have used Python long enough though, you would remember that the syntax for applying a decorator in this way hasn't always existed. Before the '@' syntax was allowed you could still create and use decorators, but you had to be more explicit in applying them. That is, you had to write:

class Example(object):

    def name(self):
        return 'name'
    name = universal(name)

This can still be done and when written this way it makes it clearer how decorators are in a way a form of monkey patching. This is because often all they are doing is introducing a wrapper around some existing function which allows the call to the original function to be intercepted. The wrapper function then allows you to perform actions either before or after the call to the original function, or allow you to modify the arguments passed to the wrapped function, or otherwise modify the result in some way, or even substitute the result completely.
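That interception pattern can be sketched with a plain closure. Note that unlike a wrapper created with wrapt, this naive version does not preserve introspection or the descriptor protocol; it is only meant to show the interception points.

```python
def universal(wrapped):
    def _wrapper(*args, **kwargs):
        # Actions before the call could validate or modify 'args'.
        result = wrapped(*args, **kwargs)
        # Actions after the call could modify or substitute 'result'.
        return result
    return _wrapper

def greet(name):
    return 'hello ' + name

greet = universal(greet)   # the explicit pre-'@' way of applying it

assert greet('world') == 'hello world'

# The cost of the naive approach: introspection is broken, which is
# one of the problems the wrapt package exists to solve.
assert greet.__name__ == '_wrapper'
```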

What is an important distinction though with decorators is that the wrapper function is being applied at the time the class containing the method is being defined. In contrast more arbitrary monkey patching involves coming in some time later after the class definition has been created and applying the function wrapper at that point.

In effect you are doing:

class Example(object):
    def name(self):
        return 'name'

Example.name = universal(Example.name)

Although a decorator function created using the wrapt package can be used in this way and will still work as expected, in general I would discourage this pattern for monkey patching an existing method of a class.

This is because it isn't actually equivalent to doing the same thing within the body of the class when it is defined. In particular the access of 'Example.name' actually invokes the descriptor protocol and so is returning an instance method. We can see this by running the code:

class Example(object):
    def name(self):
        return 'name'
    print type(name)
print type(Example.name)

which produces:

<type 'function'>
<type 'instancemethod'>

In general this may not matter, but I have seen some really strange corner cases where the distinction has mattered. To deal with this therefore, the wrapt package provides an alternate way of applying wrapper functions when doing monkey patching after the fact. In the case of adding wrappers to methods of a class, this uses a mechanism which avoids any problems caused by this subtle distinction.

Adding function wrappers

For general monkey patching using the wrapt package, rather than using the decorator factory to create a decorator and then apply that to a function, you instead define just the wrapper function and then use a separate function to apply it to the target function.

The prototype for the wrapper function is the same as before, but we simply do not apply the '@wrapt.decorator' to it.

def wrapper(wrapped, instance, args, kwargs):
    return wrapped(*args, **kwargs)

To add the wrapper function to a target function we now use the 'wrapt.wrap_function_wrapper()' function.

class Example(object):
    def name(self):
        return 'name'
import wrapt

wrapt.wrap_function_wrapper(Example, 'name', wrapper)

In this case we had the class in the same code file, but we could also have done:

import example

import wrapt
wrapt.wrap_function_wrapper(example, 'Example.name', wrapper)

That is, we provide the first argument as the module the target is defined in, with the second argument being the object path to the method we wished to apply the wrapper to.

We could also skip importing the module altogether and just use the name of the module.

import wrapt
wrapt.wrap_function_wrapper('example', 'Example.name', wrapper)

Just to prove that just about anything can be simplified by the use of a decorator, we finally could write the whole thing as:

import wrapt

@wrapt.patch_function_wrapper('example', 'Example.name')
def wrapper(wrapped, instance, args, kwargs):
    return wrapped(*args, **kwargs)

What will happen in this final example is that as soon as the module this is contained in is imported, the specified target function defined in the 'example' module will automatically be monkey patched with the wrapper function.

Delayed patching is bad

Now a very big warning is required at this point. Applying monkey patches after the fact like this will not always work.

The problem is that you are trying to apply a patch after the module has been imported. In this case the 'wrapt.wrap_function_wrapper()' call will ensure the module is imported if it wasn't already, but if the module had already been imported previously by some other part of your code or by a third party package you may have issues.

In particular, if the target function you were trying to monkey patch was a normal global function of the module, some other code could have grabbed a direct reference to it by doing:

from example import function

If you come along later and have:

import wrapt
@wrapt.patch_function_wrapper('example', 'function')
def wrapper(wrapped, instance, args, kwargs):
    return wrapped(*args, **kwargs)

then yes the copy of the function contained in the target module will have the wrapper applied, but the reference to it created by the other code will not have the wrapper.

To ensure that your wrapper is always used in this scenario you would need to patch it not just in the original module, but in any modules where a reference had been stored. This would only be practical in very limited circumstances because in reality you are not going to have any idea where the function might be getting used if it is a common function.
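The stale reference problem can be demonstrated in a self-contained way using a synthetic module object, rather than a real 'example' module on disk:

```python
import types

# Stand-in for the 'example' module; in reality this would be a real
# imported module.
example = types.ModuleType('example')

def _function():
    return 'original'

example.function = _function

# Some other code grabs a direct reference early on, just as
# 'from example import function' would.
early_reference = example.function

# Later, a wrapper is patched into the 'example' module only.
def wrapper(wrapped):
    def _wrapper(*args, **kwargs):
        return 'wrapped:' + wrapped(*args, **kwargs)
    return _wrapper

example.function = wrapper(example.function)

# The copy in the module is wrapped, but the early reference is not.
assert example.function() == 'wrapped:original'
assert early_reference() == 'original'
```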

This exact problem is one of the shortcomings in the way that monkey patching is applied by packages such as gevent or eventlet. Both these packages do delayed patching of functions and so are sensitive to the order in which modules are imported. To get around this problem, at least for modules in the Python standard library, the 'time.sleep()' function which they need to monkey patch has to be patched not only in the 'time' module, but also in the 'threading' module.

There are some techniques one can use to try and avoid such problems but I will defer explaining those to some time down the track.

Instead for my next blog post I want to move onto some examples for where monkey patching could be used by looking at how wrapt can be used as alternative to packages such as the mock package when doing testing.