This is an introduction to how to leverage different binding times when designing decoupled code. What is binding time? Here is a definition from Wikibooks:
> A source file has many names whose properties need to be determined. The meaning of these properties might be determined at different phases of the life cycle of a program. Examples of such properties include the set of values associated with a type; the type of a variable; the memory location of the compiled function; the value stored in a variable, and so forth. Binding is the act of associating properties with names. Binding time is the moment in the program's life cycle when this association occurs.

(Wikibooks)
The general idea behind binding time is that you want to wait as long as possible before binding data or functions to names. The ability to defer binding times is yet another hammer in your toolbox for designing loosely coupled software. To be more exact, binding is the association of named identifiers with data or code. For example, the binding of the function named `main` to its implementation code in a C/C++ executable occurs when the executable is linked.
Two major binding times are static and dynamic. Static bindings are name bindings that occur before a program is run. Dynamic bindings are name bindings that occur as the program executes. In the context of C/C++ programming, static bindings occur during the compile and link phases. Dynamic bindings are the C++ polymorphic bindings that occur at runtime when a virtual method is called. In a perfect world, everyone would be programming in a language that supports runtime polymorphic bindings, and every module or interface would have pure abstract interfaces that decouple it from the rest of the universe. However, polymorphic bindings are not free: they can result in poorer runtime performance, and a type-safe implementation requires an object-oriented programming language.
Again, what I’m trying to convince you to do here is to decouple things. And the later the binding happens, the more decoupled a name is. A name that is decoupled from its implementation (e.g., a pure virtual class definition) has minimal to no dependencies. And, when it comes to unit testing, refactoring, reuse, and so on, the fewer the dependencies, the easier the task is. Deferred bindings can be done with static binding without resorting to runtime polymorphic bindings. Static bindings can be broken down into the following subcategories:
- Source Time
- Compile Time
- Link Time
Source Time Binding
Source time bindings are made when you edit a source code file. They are reflected primarily in your `#include` statements and in the definitions of numeric and string constants. Because source time bindings cannot be changed or undone later without editing the file, you want to minimize them.
So what does this mean in practice? First, never include header files that are not a direct dependency. Second, design your modules and interfaces so that they do not directly rely on any explicit or "magic" constants. The following is an example of defining a default value for a buffer size while still allowing an application to provide a different value at compile time. This construct defers the binding time of the buffer size from source time to compile time.
```c
#ifndef OPTION_FOO_BAR_MAX_BUFFER_SIZE
#define OPTION_FOO_BAR_MAX_BUFFER_SIZE 128
#endif
```
Compile Time Binding
Compile time bindings are bindings that are made during the compilation stage. The primary mechanisms involved with compile time bindings are the specification of preprocessor symbols when the compiler is invoked and the setting of the compiler's header file search paths. I have two patterns – LHeader and LConfig – that leverage the header search path mechanism to provide concrete definitions for preprocessor symbols that were declared without any definition provided (I will go into the details of LHeader and LConfig in my next blog post). For example, here is how you can defer the binding of the mutex data type until compile time:
```c
// This is a project specific header that will resolve the _MAP symbols

/** This symbol defines the structure for a Mutex. The concrete definition
    of the mutex type is deferred to the application's 'platform'.
 */
#define Cpl_System_Mutex_T Cpl_System_Mutex_T_MAP
```
The delayed binding in this example could also have been done using a forward declaration. For example, I could have used the statement `typedef struct Cpl_System_Mutex Cpl_System_Mutex_T;` and then provided the concrete definition for `struct Cpl_System_Mutex` in a platform-specific .c|.cpp file. The compiler allows clients (or consumers) to pass around a pointer to a mutex (e.g., `Cpl_System_Mutex_T*`) without a concrete type definition because all pointers have the same known storage size. The disadvantage of the forward declaration approach is that only platform-specific code can instantiate a mutex instance. This restriction is not inherently bad, but it does raise the issue of how memory will be allocated for the mutex instance. Will it be dynamically allocated from the heap on demand? Is there a statically allocated pool of mutexes? If so, how many instances can be created? And what happens when the heap/memory pool is exhausted?
The advantage of using the LHeader pattern and doing a compile time binding for the mutex type is that it allows the client (i.e., consumer) to take over the memory management for mutex instances. The client code can statically allocate as many mutexes as it needs without having to add runtime checks for possible out-of-memory conditions when creating a mutex. Sidebar: My day job is as an embedded developer, so anything that does not require dynamic memory allocation is a good thing.
Link Time Binding
Link time binding is what most developers typically think of as static binding¹. These bindings are made during the link stage of the build process. The linker binds names to addresses, or binds code to a specific function name. Link time binding allows a developer to declare a function (or set of functions) and then have multiple implementations of those functions. The selection of which implementation to use is made when the build script for the project is written. Taking advantage of link time binding is a very simple mechanism for supporting multiple variants. For example, link time binding is how you can have one implementation of a function `foo()` for Linux and a different `foo()` implementation for Windows. Some of the limitations you will encounter when trying to leverage link time bindings are:
- The organization of the source code. I encourage having each implementation reside in its own file, and not using preprocessor `#ifdef/#else` constructs to separate the different implementations.
- It works best when used with C functions. Link binding can be used with classes (i.e., a single class definition with multiple implementations), but this provides minimal value over using the traditional object-oriented approach of inheritance.
I have been using compile and link time bindings to decouple code from the underlying hardware platforms for years. If you feel the need to add `#ifdef/#else` inside a function to account for differences in target platforms (or at any other time), stop and think about the alternative of using a compile/link time binding strategy instead. The binding time strategy scales; `#ifdef`s only lead to the dark side…
¹ Link time binding describes the process for statically linked images. An image can also be dynamically linked at runtime when the image is loaded. Conceptually, deferring a binding to link time is the same concept as choosing between a statically linked and a dynamically linked image, just with different implementation details.