C/C++ loops and macros

In C/C++ you often break from the middle of a loop instead of from the top/bottom. There are arguments against this — it doesn’t respect block structure and it isn’t a functional style. A break is just a goto. But it’s common, easy to understand, and a very useful pattern.

    // General loop micro-pattern:
    for ( ; ; ) {
        .. do some stuff ..
        if ( .. test .. ) break;
        .. do some stuff ..
        if ( .. test .. ) break;
        .. do some stuff ..
    }

// I've seen programmers define LOOP and LOOP_forever macros.
# define LOOP          for(;;)
# define LOOP_forever  for(;;)

    LOOP {
        if ( .. ) break;
        .. etc ..
    }

You should be careful when defining keyword and structure macros like LOOP — they interfere with editors and other tools and they can hurt readability. But if you like LOOP, you can go further:

// Getting carried away with macros.
# define LOOP             for(;;)
# define LOOP_exit        break
# define LOOP_next        continue
# define LOOP_while( x )  if ( ! (x) ) break
# define LOOP_until( x )  if (   (x) ) break

    LOOP {
        LOOP_until( test );
        .. do something ..
        LOOP_while( test );
        .. do something ..
        if ( .. ) {
            .. do something ..
        }
    }

Finally, let’s imagine taking the loop macro to the limit. The following isn’t real code so don’t try to compile it.

    // Fantasy loop macro.
    loop named top_level {
        int how_many_times = 0;
        int my_sum = 0;
        int my_sum2 = 0;
        std::vector<int> my_vector;
        std::stack<int> my_stack;
        loop {
            for x in { 1, 2, 4, 8 };
            for e in 1..7;
            for y = e * e; /* eval'd each time */
            init g = x * x; /* eval'd once */
            for z = 1 to 10 by 2;
            for a = 10 downto 1;
            for b = 2 inc by 2;
            for c = 2 dec by 2;
            for d = 2 then 6;
            while my_test( );
            until my_test( ) do (is_special = true) exit top_level;
            until my_test( ) exit 2 levels;
            next 2 levels;
            next top_level;
            collect x in my_vector;
            collect x in my_stack push;
            sum x in my_sum;
            reduce x in my_sum2 using plus;
            count my_test( ) in how_many_times;
            if ( ! LOOP_first_time ) {
                std::cout << LOOP_count << std::endl;
            }
            finally {
                std::cout << "leaving loop" << std::endl;
            }
        }
    }

Of course this is no longer C/C++ — you can’t do this with #define; you’d need a more powerful pre-processor, one that can manipulate code blocks and define nested macros in an enclosing parent. The standard #define macros are not even meant to take blocks as parameters.

// I've seen this but it may not be portable and I don't recommend it.
# define STANDARD_CATCH( block_ )           \
    try block_                              \
    catch ( my::error::warning * w ) {      \
        my::error::log( w );                \
    }                                       \
    catch ( ... ) {                         \
        my::error::log_unknown( );          \
        throw;                              \
    }

void some_fn( )
{
    STANDARD_CATCH( {
        .. stuff ..
        // Does this macro expand correctly if a comma appears in the block?
        .. stuff ..
    } )
}

Macros like this are usually a bad idea in C/C++, but in Lisp they are common and a very powerful feature of the language. C/C++ has a lot of syntax that communicates structure and keeps the source compact. Lisp has a very simple syntax, which makes it wordy and full of deeply nested parentheses but also makes it easy to extend. Lisp has a programming model that makes it harder to write tight, efficient, close-to-the-metal code, so C and C++ are probably better choices for realtime and systems engineering. But you don’t have to have Lisp to get something like the Lisp macro facility — you could define a pre-processing facility for C/C++ that worked at the block level and added a lot of power. A simpler C++ syntax would help, but it’s not vital.

But this post is getting too long. I’ll talk about what these macros might look like later.


I ran into this in some of my old code today. It typedefs the different-sized primitive integers (uint64, uint32, uint16, uint8, sint64, sint32, sint16, and sint8) and declares min/max consts like max_uint64 and min_sint16. It’s a lot nicer to write uint64 than “unsigned __int64” or “unsigned long long int”. It assumes 2’s complement and modulo arithmetic, which is why it gets min_sint32 from (max_sint32 + 1).

// Macro to typedef sized sint and uint and declare min/max const values.
# define DECLARE_INTS( N )                                                            \
    typedef unsigned __int ## N  uint ## N ;                                          \
    typedef   signed __int ## N  sint ## N ;                                          \
    static const uint ## N  max_uint ## N  = (uint ## N) ((sint ## N) -1);            \
    static const sint ## N  max_sint ## N  = (sint ## N) ((max_uint ## N >> 1)    );  \
    static const sint ## N  min_sint ## N  = (sint ## N) ((max_uint ## N >> 1) + 1)



Of course I prefer to just #include <stdint.h> or <cstdint> and get typedefs like int8_t, uint64_t, int_least32_t, int_fast32_t, uintptr_t, and intmax_t, along with a bunch of min/max consts.

But Visual Studio 2008 doesn’t provide stdint.h. I can get stdint.h of course — I’ve got it now in cygwin. But I hate reaching over to cygwin from a Visual Studio project, and I hate copying the file into my project.

So why isn’t stdint.h provided by Visual Studio? Is it because stdint.h, while part of standard C, is not part of C++? That seems kind of purist for Microsoft. Or is it because MS really wants you to use typedefs like ULONG and LONGLONG, which are not standard, by the way? In fact “long long” is standard C99 but not C++, although it probably will be next year.

(I’m not sure what they’re going to call 128-bit integers. LONGLONGLONGLONG maybe?)

Although I think uint64_t is easier to read than “unsigned long long int”, the latter is more consistent with the standard literal-const suffixes and format strings.

// The suffixes and format strings are consistent with the types.
    long i1 = 123L;
    long long i2 = 123LL;
    unsigned long u1 = 123uL;
    unsigned long long u2 = 123uLL;

    printf( "%ld %lld %lu %llu", i1, i2, u1, u2);

There are suffixes and format strings consistent with types like uint64_t as well, but I don’t think they are standard C or C++; MS provides them, though.

// These suffixes are consistent too, although only i64 and ui64 are documented.
    __int64 i3 = 123i64;
    __int32 i4 = 123i32;
    __int16 i5 = 123i16;
    __int8  i6 = 123i8 ;
    unsigned __int64 u3 = 123ui64;
    unsigned __int32 u4 = 123ui32;
    unsigned __int16 u5 = 123ui16;
    unsigned __int8  u6 = 123ui8 ;

    // %I32d %I32u etc probably also work, at least with MS.
    printf( "%I64d %I64u", i3, u3);

stdint.h provides an answer to the const-suffix problem with macros like these:

#define INT16_C( x)  x
#define INT32_C( x)  x ## L
#define INT64_C( x)  x ## LL
#define UINT16_C( x) x
#define UINT32_C( x) x ## UL
#define UINT64_C( x) x ## ULL

As for the format directives, in C++ you’re supposed to avoid them anyway. Instead of sprintf(..), a C++ programmer is supposed to do this.

# include <sstream>

    __int64 value = ... something ...

    std::ostringstream value_stream;
    value_stream << value;
    std::string value_string = value_stream.str( );
    const char* value_buf = value_string.c_str( );

Of course sprintf(..) is faster because it doesn't malloc(..). If I wanted slow I'd be writing in Java.

Knol is not Wikipedia

I’ve been reading about Knol from Google. It’s supposed to be like Wikipedia except the articles have a single author who is identified (by their real name). The author controls all edits and can even choose to copyright an article (although it’s still free to read). Wikipedia articles are supposed to be neutral and encyclopedic, while Knol articles can be full of opinion, conjecture, argument, persuasion, and perhaps even libel.

You can see how this might appeal to an author, who gets credit and keeps control. If you post an article in Wikipedia you are setting it free. It’s no longer yours. Sometimes this matters.

If you’re writing about something where you have special knowledge, and you put in a lot of time and get the words just right, then you probably don’t want someone messing it up. Especially since your name is on it. Or if you pen a clever op-ed full of innuendo and double meanings, you don’t want it dumbed down and neutered.

On the other hand, I think almost anything can be improved by a talented editor, and I’m wondering how this will develop. Will Knol set up some way to hook up writers with editors?

Knol also allows comments and has a rating system, so in some ways it is like a huge collective blog or a textual YouTube. Will some authors gain fame and become influential? Probably, just like bloggers do now.

And there’s the question: why bother with Knol? It’s easy to find a place to publish your writings for free, so what does Knol add? Knol sounds like as good a place to publish as any, and better than most. It’s like YouTube: people know it’s the place to go for that kind of content, and there is infrastructure and organization already set up. You can choose your license, and you can try to harvest ad revenue.

From a reader’s perspective Knol seems very different from Wikipedia, which will be obvious once articles about current events and politics appear. Knol started with a bunch of authoritative medical articles, which seems like Wikipedia content, but soon people will be writing about home remedies and personal experiences — the kind of stuff that’s not neutral enough for Wikipedia. You’ll go to Wikipedia to find out about tendinitis; you’ll go to Knol for hangover cures.
