I fucking hate macro hell!



  • Working on some new USB code for a PIC and looking through some of Microchip's sample source code.

    There I find gems like this:

    mInitializeUSBDriver();

    Nothing wrong with that right? Right? It's a function, right? Then why the flying fuck isn't it to be found in any .c file??

    Oh..RIGHT!!! This might be why...

    #define mInitializeUSBDriver()      {UCFG = UCFG_VAL;                       \
                                         usb_device_state = DETACHED_STATE;     \
                                         usb_stat._byte = 0x00;                 \
                                         usb_active_cfg = 0x00;}

    People that write macros like that, in my opinion, need to have their balls cut off and force fed to them at the very minimum. Better yet, just shoot them.

    And of course, UCFG_VAL is defined not just in yet another header file, but in a header file in a completely different directory. Oh, and then UCFG_VAL contains other macros defined in yet another file! *GRRR*
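
    For contrast, here's the same initializer as an honest function - a sketch using only the names already visible in the macro above:

    /* Sketch: the same initialization as a plain function. It lives in one
     * .c file, shows up in the debugger, and its call overhead is paid
     * exactly once at startup. */
    void InitializeUSBDriver(void)
    {
        UCFG = UCFG_VAL;                    /* configure the USB module       */
        usb_device_state = DETACHED_STATE;  /* state machine starts detached  */
        usb_stat._byte = 0x00;              /* clear all status flags         */
        usb_active_cfg = 0x00;              /* no configuration selected yet  */
    }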




  • What's it being compiled with, for, and against? Seems like the macro hell, as bad as it is, might be necessary for really tiny code. Granted, I've never seen a situation where code like the posted didn't deserve its own function, but there "could" be a reason.

    Perhaps.

    If you were in, ya know, MACRO HELL, or something...

    You have my sympathies.



  • You're writing code for a microcontroller that has a small amount of memory. You can't guarantee that a compiler will inline your code even if you ask it nicely. So you write your code in a #define so it's guaranteed to be inlined. Since you're not saving a return address, processor registers, etc., the code will run faster as a bonus.

    I see code like this all the time from various big manufacturers (Microchip, ST, Atmel, ...). It's called modularisation.
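
    And when you do write such macros, the usual discipline is to wrap the body in do { ... } while (0) so the macro expands to a single statement - a sketch of the macro from the first post, rewritten that way:

    /* Sketch: the vendor macro wrapped in the standard do/while (0) idiom,
     * making it safe inside an un-braced if/else. Body unchanged. */
    #define mInitializeUSBDriver()  do {                    \
            UCFG = UCFG_VAL;                                \
            usb_device_state = DETACHED_STATE;              \
            usb_stat._byte = 0x00;                          \
            usb_active_cfg = 0x00;                          \
        } while (0)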



  • @drachenstern said:

    What's it being compiled with, for, and against? Seems like the macro hell, as bad as it is, might be necessary for really tiny code. Granted, I've never seen a situation where code like the posted didn't deserve its own function, but there "could" be a reason.

    Perhaps.

    It's initialization code! It's called once! At startup! Any and all function call overhead would be completely irrelevant. I mean the thing has got 32 KB of flash memory, for crying out loud. You really don't need to save an instruction or two in initialization code.

    The entire codebase is a clusterfuck though, spread out over several .c and .h files, all in different directories. Going line by line, it's a search through file after file to piece together what the hell it actually does. It's like every line calls or uses something defined in a different file.




  • @Kermos said:

    @drachenstern said:
    What's it being compiled with, for, and against? Seems like the macro hell, as bad as it is, might be necessary for really tiny code. Granted, I've never seen a situation where code like the posted didn't deserve its own function, but there "could" be a reason.

    It's initialization code! It's called once! At startup! Any and all function call overhead would be completely irrelevant. I mean the thing has got 32 KB of flash memory, for crying out loud. You really don't need to save an instruction or two in initialization code.

    The entire codebase is a clusterfuck though, spread out over several .c and .h files, all in different directories. Going line by line, it's a search through file after file to piece together what the hell it actually does. It's like every line calls or uses something defined in a different file.

    Well, for starters, I don't know anything about the hardware you're talking about, but as for function calls: if it's an inline function, I thought those got treated pretty much like macros by most compilers. I thought that was the whole point of "inline" in the first place. So, no overhead.

    Next, I would say that "I" would think it would just be in the manual - "Make the first four lines of your code this... for initialization, or it's going to melt in your hands..." Seems like that would be saner at any rate...



  • I feel your pain. A long time ago, I had to write a "C" program to drive a document scanner (this was 28 years ago), drive its OCR s/w to pick the page apart (identify tables, pictures, paragraphs, captions, etc.), and mark it up (SGML).

    I wrote the program using garden variety "C" functions. My boss came along and decided that since it took about 60 seconds to process a single page (Sun 350), we could speed it up by inlining the whole thing, turning each function into a macro.

    What was a nice, straightforward program of about 300 functions in numerous files got converted into a single file consisting of main() and 300 nested macros. While the code still looked like normal "C" functions, it was all macros.

    Debugging was impossible, but it only took 59.999 seconds instead of 60 to process a page!




  •  I have nothing to add, I just wanted to tag this:



  •  @Kermos said:

    It's initialization code! It's called once! At startup! Any and all function call overhead would be completely irrelevant. I mean the thing has got 32 KB of flash memory, for

    You're saying that now; just wait until you're trying to squeeze your code in and desperately need those 4 - 6 bytes of flash :)



  • @drachenstern said:

    What's it being compiled with, for, and against? Seems like the macro hell, as bad as it is, might be necessary for really tiny code.

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.



  • I've seen such macros. I've seen them work quite well.

    For example:

    [code]#define initLibUsb() { .. }[/code]
    is a line from LibMTP to kick LibUSB into action with Hotplug devices. It's used by (Le Gasp) most any LibUSB library. Same with BassLib, as it uses macros to kick Win32 into action.



    Doesn't compare to what I encountered in our codebase recently (built for recent multicore 64-bit servers)

    #define MBC_SAVESTATE(mbc) { MBCHAIN __mbc = *mbc
    #define MBC_RESTORESTATE(mbc) *mbc = __mbc }

    No comments with the definitions (or, for that matter, anywhere it is used).
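
    For anyone squinting at the unbalanced braces: the two macros only make sense as a bracketing pair, so a paired call site expands to one balanced block - a sketch (do_risky_things() is made up):

    MBC_SAVESTATE(mbc);       /* expands to:  { MBCHAIN __mbc = *mbc;    */
    do_risky_things(mbc);     /* hypothetical code that may clobber *mbc */
    MBC_RESTORESTATE(mbc);    /* expands to:  *mbc = __mbc; }            */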



  • @indrora said:

    I've seen such macros. I've seen them work quite well.
    For example:
    #define initLibUsb() { .. } is a line from LibMTP to kick LibUSB into action with Hotplug devices. It's used by (Le Gasp) most any LibUSB library. Same with BassLib, as it uses macros to kick Win32 into action.
     

    You're missing the point. While they might both work quite well: why use macros as functions, when functions cost only a tiny bit of overhead in exchange for a ****load of manageability and debugging advantages?

    Furthermore: there is no excuse for disguising a macro as a function by adding (). There simply isn't any... If you wanted to "make clear" that your macro behaves like a function, you shouldn't be using macros but functions...



  • @zzo38 said:

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.
    Which doesn't guarantee smaller code at all. Not only do you spend 3 months learning all the opcodes of the processor in question; compilers typically know a lot more tricks once they have been out a while. I've lost count of the number of people who have done this - "I'll write it in assembler, then it'll be smaller" - they've done so and found the resulting code was larger and far buggier than the code produced by the compiler from the C code. People like to do one operation at a time when writing code, as it's simpler to follow (obviously), but that's no longer how a lot of processors work - they can do several things at once.

    Assembler is fine for small routines, but not for entire projects. A compiler can keep track of things far more easily than a person. 



  • @Mole said:

    @zzo38 said:

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.
    Which doesn't guarantee smaller code at all. Not only do you spend 3 months learning all the opcodes of the processor in question; compilers typically know a lot more tricks once they have been out a while. I've lost count of the number of people who have done this - "I'll write it in assembler, then it'll be smaller" - they've done so and found the resulting code was larger and far buggier than the code produced by the compiler from the C code. People like to do one operation at a time when writing code, as it's simpler to follow (obviously), but that's no longer how a lot of processors work - they can do several things at once.

    Assembler is fine for small routines, but not for entire projects. A compiler can keep track of things far more easily than a person. 

     

    QFT

     

    For example: a friend of mine and I once each wrote a program for the same purpose, just to measure how speedy we could make it. He wrote the initial program, compiled it, and spent a few hours optimising the assembly. I wrote the program, measured which parts took the most time and separated those out rigorously. I then inserted a few tricks I could think of and compiled those parts with gcc -O3 (the rest with -O2). I then made one final adjustment in the assembly that the compiler couldn't possibly know or think of.

    The result: our original programs ran at the same speed, within the margin of error. His final program did it in 90% of that time; mine took 50% (although mine used 20% more memory).

    Then again: we showed our code to a full-time embedded programmer we know (you know: eats pointers for breakfast, assembler for lunch and custom gcc branches for dinner). He just shook his head, retired into his lair for about half an hour (we swear we heard him chanting black magic and sacrificing old Pentiums) and returned with a program that ran in about 20% of the original time...



  • @dtech said:

    @Mole said:

    @zzo38 said:

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.
    Which doesn't guarantee smaller code at all. Not only do you spend 3 months learning all the opcodes of the processor in question; compilers typically know a lot more tricks once they have been out a while. I've lost count of the number of people who have done this - "I'll write it in assembler, then it'll be smaller" - they've done so and found the resulting code was larger and far buggier than the code produced by the compiler from the C code. People like to do one operation at a time when writing code, as it's simpler to follow (obviously), but that's no longer how a lot of processors work - they can do several things at once.

    Assembler is fine for small routines, but not for entire projects. A compiler can keep track of things far more easily than a person. 

     

    QFT

     

    For example: a friend of mine and I once each wrote a program for the same purpose, just to measure how speedy we could make it. He wrote the initial program, compiled it, and spent a few hours optimising the assembly. I wrote the program, measured which parts took the most time and separated those out rigorously. I then inserted a few tricks I could think of and compiled those parts with gcc -O3 (the rest with -O2). I then made one final adjustment in the assembly that the compiler couldn't possibly know or think of.

    The result: our original programs ran at the same speed, within the margin of error. His final program did it in 90% of that time; mine took 50% (although mine used 20% more memory).

    Then again: we showed our code to a full-time embedded programmer we know (you know: eats pointers for breakfast, assembler for lunch and custom gcc branches for dinner). He just shook his head, retired into his lair for about half an hour (we swear we heard him chanting black magic and sacrificing old Pentiums) and returned with a program that ran in about 20% of the original time...


    Which hardware? Don't forget that GCC is about 4 years behind modern compiler design on SOME targets.



  • @dtech said:

    You're missing the point. While they might both work quite well: why use macros as functions, when functions cost only a tiny bit of overhead in exchange for a ****load of manageability and debugging advantages?

    How about this: for some reason you have ended up in an interrupt function (maybe it's a timer, port interrupt, whatever) and you need to reinitialise the USB subsystem. So you call initUsb(). If initUsb() was a function you could have problems, as the interrupt may have fired whilst that function was executing, so you can't call it again from interrupt code. It is also considered bad practice to call functions from interrupts, as they typically have very small stacks compared to non-interrupt code. We know you can't rely on the inline keyword, so the one thing left is to turn small functions into macros. The code is guaranteed to be duplicated, there will be no re-entrancy problems, etc, etc.

    On a PC running an OS there's really no advantage I can think of, other than the guarantee of an inlined function for small code that's called a lot (which the compiler should sort out anyway if it's half-decent and not written by Microsoft), but for an embedded system there are a lot of other advantages.
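
    A sketch of that last point, with made-up register names - as a macro, the body is physically duplicated at each use, so the ISR never shares one half-executed function with the main flow:

    /* Sketch only: UCON and friends are illustrative, not from a real header. */
    #define INIT_USB()  do { UCON = 0x00; UCFG = UCFG_VAL; } while (0)

    void main_loop(void)     { INIT_USB(); /* the main flow gets its own copy */ }
    void usb_error_isr(void) { INIT_USB(); /* ...and the ISR gets another     */ }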



  • @Mole said:

    How about this: for some reason you have ended up in an interrupt function (maybe it's a timer, port interrupt, whatever) and you need to reinitialise the USB subsystem. So you call initUsb(). If initUsb() was a function you could have problems, as the interrupt may have fired whilst that function was executing, so you can't call it again from interrupt code. It is also considered bad practice to call functions from interrupts, as they typically have very small stacks compared to non-interrupt code.

    I'm no embedded programmer, but on normal OSes isn't the practice to disable interrupts when in interrupt code?



  • @Mole said:

    @zzo38 said:

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.
    Which doesn't guarantee smaller code at all. Not only do you spend 3 months learning all the opcodes of the processor in question; compilers typically know a lot more tricks once they have been out a while. I've lost count of the number of people who have done this - "I'll write it in assembler, then it'll be smaller" - they've done so and found the resulting code was larger and far buggier than the code produced by the compiler from the C code. People like to do one operation at a time when writing code, as it's simpler to follow (obviously), but that's no longer how a lot of processors work - they can do several things at once.

    Assembler is fine for small routines, but not for entire projects. A compiler can keep track of things far more easily than a person. 

    You have to do it on paper first and then write the machine codes into the computer (I have written some programs directly in machine code that would be much longer if they were written in a higher-level programming language). You also have to be good at it (some people don't know programming very well). And it depends what program you wrote. You can also combine assembler/machine codes with C codes. For GameBoy Advance (ARM7) it is much more complicated to write it in the machine codes, but for the old GameBoy it is easier to figure out the machine codes in the program, and I can modify them much more easily. For larger software you would not write it in machine codes; you would use something else. Some people do write smaller and faster programs in machine codes or assembly codes, but some people don't or can't.



  • @morbiuswilters said:

    I'm no embedded programmer, but on normal OSes isn't the practice to disable interrupts when in interrupt code?

    Depends on the application. I'm a full-time software engineer and in the last 10 years I've never used an OS in any of my projects, and to get the most from the hardware, sometimes it's a requirement to have concurrent interrupts.

    Also, just in case you misread what I typed above: I wasn't talking about concurrent interrupts, I was talking about the main program flow being interrupted by an interrupt that needs to run the same routine as the one just interrupted. Not only would you have re-entrancy issues if that code was a function rather than a macro, you could also have premature interrupt termination due to the way the architecture works (I'm thinking Cortex-M3).



  • @zzo38 said:

    @drachenstern said:

    What's it being compiled with, for, and against? Seems like the macro hell, as bad as it is, might be necessary for really tiny code.

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.
    Actually, inline assembly in C would do the trick.



    The wonders of 512 bytes AVR code: http://daid.mine.nu/wiki/?page=WiiExtController/BootloaderSmall



    Also, if your IDE cannot find a macro like that, you should look for a new IDE; even Code::Blocks finds stuff like that - a quick right-click "definition" gives you the macro.



  • @Mole said:

    You're writing code for a microcontroller that has a small amount of memory. You can't guarantee that a compiler will inline your code even if you ask it nicely. So you write your code in a #define so it's guaranteed to be inlined. Since you're not saving a return address, processor registers, etc., the code will run faster as a bonus.

    I see code like this all the time from various big manufacturers (Microchip, ST, Atmel, ...). It's called modularisation.

    If by 'inlined' you mean won't require a function call, nothing prevents a compiler from putting any code it damn well pleases into a function. A compiler can, if it wants to, find code that's common to two functions and make it its own function, whether you use a macro, an inline, or anything else. Second, unless the compiler is so broken as to be useless, it will only not inline something if it's too complex to be inlined or the compiler knows a damn good reason not to inline it.

    The scarce memory argument is nonsense. Which makes more sense -- doing this everywhere or telling the compiler to inline where it saves memory?

    This is most likely cargo cult programming. At one time, compilers may have been hopelessly broken about inlining or some more complex function wasn't inlined in some version of the compiler, hence the macro. But I don't think it's excusable.

    As for this being modularization, it's completely the opposite. The macro ensures that details about this operation will land in every line of code that uses it. That's probably unavoidable in this case, because the function is so tiny there isn't much else to know. But I don't see how this modularizes anything.



  • @joelkatz said:

    If by 'inlined' you mean won't require a function call, nothing prevents a compiler from putting any code it damn well pleases into a function. A compiler can, if it wants to, find code that's common to two functions and make it its own function, whether you use a macro, an inline, or anything else. Second, unless the compiler is so broken as to be useless, it will only not inline something if it's too complex to be inlined or the compiler knows a damn good reason not to inline it.

    The scarce memory argument is nonsense. Which makes more sense -- doing this everywhere or telling the compiler to inline where it saves memory?

    Please read my comments above as to why, typically, compilers for embedded platforms do not do what you suggest, and why it's more practical to use a macro than an inlined function. If the compiler decided to find code that's common to two functions and make it its own function, that could cause some seriously hard-to-find bugs.

    @joelkatz said:

    As for this being modularization, it's completely the opposite. The macro ensures that details about this operation will land in every line of code that uses it. That's probably unavoidable in this case, because the function is so tiny there isn't much else to know. But I don't see how this modularizes anything.

    I think you've got the wrong end of the stick here. I was talking about modularizing, whereas the original poster was complaining about the USB driver being distributed across multiple header and source files.



  • You have a strange definition of "inline function".

    Expanded, an inline function looks EXACTLY like an expanded macro in the resulting asm/binary: no stack frame, no CALL instruction.

    But it's expanded on the compiler level, not preprocessor level, and thus is easier to maintain and type-safe.
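
    A two-line sketch of the "easier to maintain and type-safe" part:

    #define SQR_MACRO(x)  ((x) * (x))                       /* textual: SQR_MACRO(i++) evaluates i++ twice */
    static inline int sqr_inline(int x) { return x * x; }   /* type-checked, argument evaluated once       */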



  • You might want to tell that to some of the big-name compiler vendors. For example, here's a snippet from the documentation of a compiler we use at work (which costs $$$$) and is recommended by various IC manufacturers such as Microchip and ST:

    "The inline directive advises the compiler that the function whose declaration follows immediately after the directive should be inlined—that is, expanded into the body of the calling function. Whether the inlining actually takes place is subject to the compiler's heuristics."

    Since the compiler's heuristics are not documented anywhere, and there is not even a warning when the compiler decides to ignore an inline directive, the only way of guaranteeing that a function gets inlined is to use a macro instead.

    The amusing thing is, we also have a compiler which allows you to force the inlining of a function, but then says in the help file that the forced inline will be ignored without warning or error message on certain optimisation levels. 

    I can appreciate a compiler ignoring an attempted inline if you attempt stuff like varargs, which requires the use of the stack, but at least issue a warning rather than completely ignoring the directive! The compiler we have at the moment will happily ignore an inline directive even for a function signature as simple as "void foo (void)".
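
    For what it's worth, GCC-family compilers at least let you demand inlining rather than hint at it - a sketch with a made-up function; whether a given embedded toolchain supports anything similar is another matter:

    /* GCC-family sketch: always_inline turns the hint into a demand, and the
     * compiler reports an error if it genuinely cannot inline. Not portable. */
    static inline __attribute__((always_inline)) void usb_reinit(void)
    {
        /* ...the register writes you insist on having inlined... */
    }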



  • All true, but also all perfectly logical. Forcing the compiler to inline something is just not logical. If size is more important than speed, tell the compiler. But a keyword that makes the compiler inline even when there are both size and speed penalties is not useful. The 'inline' keyword is much like the 'register' keyword: it was useful once, when compilers were dumb, but now it just tends to make code less portable and worse. Compiler vendors are right to take the inline keyword as a hint.

    "The amusing thing is, we also have a compiler which allows you to force the inlining of a function, but then says in the help file that the forced inline will be ignored without warning or error message on certain optimisation levels." Complaining about this is akin to saying one cannot have an optimization level that disables inline. Since inline is itself an optimization, this is a silly argument.

    What is your use case? Or are you just complaining because you have nothing better to do?



  • Except the function definition has to be provided in the same source file for the compiler to be ABLE to inline it, so you still have to decide which functions can be inlined, because you have to treat them differently - most obviously by providing a definition in every file.

    How does either keyword make code less portable? The compiler is free to completely ignore the register keyword, and it's free to make every call to a function with the inline keyword call the non-inline version, and therefore never actually inline the function. All the register keyword is required to do is enforce the rule that you can't use the address operator on it.
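
    That one rule, for the record - a tiny sketch where the second line is exactly the compile error the rule produces:

    void demo(void)
    {
        register int counter = 0;   /* a hint the compiler is free to ignore         */
        int *p = &counter;          /* error: can't take the address of a 'register' */
    }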



  • @Random832 said:

    Except the function definition has to be provided in the same source file for the compiler to be ABLE to inline it, so you still have to decide which functions can be inlined, because you have to treat them differently - most obviously by providing a definition in every file.

    There are conditions you have to meet for the compiler to be able to inline it. But you can comply with those conditions whether or not you intend to inline the function. In general, inlining is an implementation detail whose final decision should be made by the compiler based on your optimization flag choices.

    How does either keyword make code less portable? The compiler is free to completely ignore the register keyword, and it's free to make every call to a function with the inline keyword call the non-inline version, and therefore never actually inline the function. All the register keyword is required to do is enforce the rule that you can't use the address operator on it.

    Because compilers are designed to do the right thing by default. If you use keywords like 'register' and 'inline', you will influence those defaults. Whether or not the change is appropriate depends on the platform, so the more you try to control inlining and register usage, the less portable your code will become.

    Fortunately, most compilers are smart enough to ignore you -- but that's what was being complained about here. If you want to influence inlining and register usage, use optimization flags, because they're optimizations. Don't crap up your code with platform-specific tuning unless you have no choice. (And most of the time, you do.)

    Ironically, I still see people trying to force inlining even in cases where it's a clear pessimization. It tends to bloat code when forced, increasing cache footprint. (Though this is not usually an issue on small, embedded systems.)



  • @joelkatz said:

    "The amusing thing is, we also have a compiler which allows you to force the inlining of a function, but then says in the help file that the forced inline will be ignored without warning or error message on certain optimisation levels." Complaining about this is akin to saying one cannot have an optimization level that disables inline. Since inline is itself an optimization, this is a silly argument.

    What is your use case? Or are you just complaining because you have nothing better to do?

    On a PC, I agree, but not on an embedded system. Having the compiler ignore your inline directive on an embedded system can be catastrophic; it can be the difference between your code working and not working (or even randomly crashing). If the compiler decides to ignore your directive, then the only way around it is to force inlining another way, and since you typically don't want to copy and paste the code in, a macro is usually the next best thing. Manufacturers know this, and hence how their code is written.


  • @Mole said:

    @joelkatz said:

    "The amusing thing is, we also have a compiler which allows you to force the inlining of a function, but then says in the help file that the forced inline will be ignored without warning or error message on certain optimisation levels." Complaining about this is akin to saying one cannot have an optimization level that disables inline. Since inline is itself an optimization, this is a silly argument.

    What is your use case? Or are you just complaining because you have nothing better to do?

    On a PC, I agree, but not on an embedded system. Having the compiler ignore your inline directive on an embedded system can be catastrophic; it can be the difference between your code working and not working (or even randomly crashing). If the compiler decides to ignore your directive, then the only way around it is to force inlining another way, and since you typically don't want to copy and paste the code in, a macro is usually the next best thing. Manufacturers know this, and hence how their code is written.

    If embedded platforms are so sensitive, why would the compilers ignore directives if that is the wrong thing to do?  I just find it hard to believe that the people writing the compilers are bigger idiots than you.



  • I know it's difficult to believe, but it's true: the people writing the compilers are bigger idiots. This is typically because they license or buy in the compiler parser and throw their own back end onto it, tailored to the embedded platform. They don't think the embedded platform through properly. We have stated this fact to the compiler vendor numerous times, but it takes about 6 months of conversation before they listen and change things, and then they have the nerve to charge you an "upgrade fee" for fixing the problem! Now we just don't bother any more and work around the problem, and it seems most other companies do too.

    It's the same in other places. Sometimes you need precise timing, so you include inline assembler with no-operation instructions for timing - it took us almost 2 months to receive a fix stopping the compiler from optimising them (either removing them completely, or placing them at different places in the code, totally killing the required timing).
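
    For reference, the GCC-style spelling of such timing NOPs - the volatile is what tells the optimiser to keep its hands off; other toolchains spell this differently:

    /* Sketch: volatile asm statements must not be deleted or merged by the
     * optimiser, which is the whole point when the NOPs *are* the timing. */
    __asm__ __volatile__ ("nop");
    __asm__ __volatile__ ("nop");
    __asm__ __volatile__ ("nop");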

    The person writing the code knows what he wants, and the compiler should not ignore his wishes. Sure, if the compiler thinks it knows best then by all means issue a warning or notice stating that, but don't completely ignore the user. If they want a function inlined, it's for a reason, not just for fun.



  • @Mole said:

    I know it's difficult to believe, but it's true: the people writing the compilers are bigger idiots. This is typically because they license or buy in the compiler parser and throw their own back end onto it, tailored to the embedded platform. They don't think the embedded platform through properly. We have stated this fact to the compiler vendor numerous times, but it takes about 6 months of conversation before they listen and change things, and then they have the nerve to charge you an "upgrade fee" for fixing the problem! Now we just don't bother any more and work around the problem, and it seems most other companies do too.

    It's the same in other places. Sometimes you need precise timing, so you include inline assembler with no-operation instructions for timing - it took us almost 2 months to receive a fix stopping the compiler from optimising them (either removing them completely, or placing them at different places in the code, totally killing the required timing).

    The person writing the code knows what he wants, and the compiler should not ignore his wishes. Sure, if the compiler thinks it knows best then by all means issue a warning or notice stating that, but don't completely ignore the user. If they want a function inlined, it's for a reason, not just for fun.

    Your argument seems reasonable.



  • @morbiuswilters said:

     

    Your argument seems reasonable.

    Who are you and what have you done with morbiuswilters?


  • @bstorer said:

    Who are you and what have you done with morbiuswilters?

     

    He may have quit smoking.



  • @Mole said:

    You might want to tell that to some of the big-name compiler vendors. For example, here's a snippet from the documentation of a compiler we use at work (which costs $$$$) and is recommended by various IC manufacturers such as Microchip and ST:

    "The inline directive advises the compiler that the function whose declaration follows immediately after the directive should be inlined—that is, expanded into the body of the calling function. Whether the inlining actually takes place is subject to the compiler's heuristics."

    Since the compiler's heuristics are not documented anywhere, and there is not even a warning when the compiler decides to ignore an inline directive, the only way of guaranteeing that a function gets inlined is to use a macro instead.

    The amusing thing is, we also have a compiler which allows you to force the inlining of a function, but then says in the help file that the forced inline will be ignored without warning or error message on certain optimisation levels. 

    I can appreciate a compiler ignoring an attempted inline if you attempt stuff like varargs, which requires the use of the stack, but at least issue a warning rather than completely ignoring the directive! The compiler we have at the moment will happily ignore an inline directive even for a function signature as simple as "void foo (void)".


    It then goes on:

    This is similar to the C++ keyword inline, but has the advantage of being available in C code. Specifying #pragma inline=forced disables the compiler's heuristics and forces inlining. If the inlining fails for some reason, for example if it cannot be used with the function type in question (like printf), an error message is emitted.

    Note: Because specifying #pragma inline=forced disables the compiler's heuristics, including the inlining heuristics, the function declared immediately after the directive will not be inlined on optimization levels None or Low. No error or warning message will be emitted.




  • Often the compiler is capable of inlining functions. This means that instead of calling a function, the compiler inserts the content of the function at the location where the function was called.

    The result is a faster, but often larger, application. Also, inlining might enable further optimizations. The compiler often inlines small functions declared static. The use of the #pragma inline directive and the C++ keyword inline gives you fine-grained control, and it is the preferred method compared to the traditional way of using preprocessor macros.

    Too much inlining can decrease performance due to the limited number of registers. This feature can be disabled using the --no_inline command line option.
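
    Going by the quoted documentation, usage would presumably look like this - a sketch with a made-up function name:

    /* Sketch based on the docs above: the IAR-style pragma applies to the
     * function declared immediately after it. usb_reinit() is made up. */
    #pragma inline=forced
    static void usb_reinit(void)
    {
        /* ...time-critical register writes... */
    }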





  • @Mole said:

    @zzo38 said:

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.
    Which doesn't guarantee smaller code at all. Not only do you spend 3 months learning all the opcodes of the processor in question; compilers typically know a lot more tricks once they have been out a while. I've lost count of the number of people who have done this - "I'll write it in assembler, then it'll be smaller" - they've done so and found the resulting code was larger and far buggier than the code produced by the compiler from the C code. People like to do one operation at a time when writing code, as it's simpler to follow (obviously), but that's no longer how a lot of processors work - they can do several things at once.

    Assembler is fine for small routines, but not for entire projects. A compiler can keep track of things far more easily than a person. 

     

    Depends honestly on the project. We have certain projects where timings are so critical - in the microsecond range - that you just cannot write this in C. You'd be at the whim of whatever code the compiler decides to produce: while it may work today, tomorrow there's an update that changes compiler behavior and everything goes down the drain. YES, I have seen that happen. Suddenly the compiler optimizes things differently and everything is just off. And no, when you're dealing with timings below 1 millisecond, you can no longer call timer functions and such for timing, as the overhead starts to matter.

    There's actually one such WTF in Microchip's USB code. They literally have a comment in a time-critical piece of code (critical in the sense that a minimum amount of time has to pass) stating that they don't need to worry about the timing because the for-loop overhead exceeds it, even with optimizations enabled. While this *may* be true today, will it be true with the next compiler version tomorrow?




  • @Mole said:

    @joelkatz said:

    "The amusing thing is, we also have a compiler which allows you to force the inlining of a function, but then says in the help file that the forced inline will be ignored without warning or error message on certain optimisation levels." Complaining about this is akin to saying one cannot have an optimization level that disables inline. Since inline is itself an optimization, this is a silly argument.

    What is your use case? Or are you just complaining because you have nothing better to do?

    On a PC, I agree, but not on an embedded system. Having the compiler ignore your inline directive on an embedded system can be catastrophic; it can be the difference between your code working and not working (or even randomly crashing). If the compiler decides to ignore your directive, then the only way around it is to force inlining another way, and since you typically don't want to copy and paste the code in, a macro is usually the next best thing. Manufacturers know this, and hence how their code is written.

    use #pragma inline=forced then



  • @Kermos said:

    @Mole said:

    @zzo38 said:

    If you need tiny codes you shouldn't be writing in C, you should write it directly in machine codes or assembly codes.
    Which doesn't guarantee smaller code at all. Not only do you spend 3 months learning all the opcodes of the processor in question; compilers typically know a lot more tricks once they have been out a while. I've lost count of the number of people who have done this - "I'll write it in assembler, then it'll be smaller" - they've done so and found the resulting code was larger and far buggier than the code produced by the compiler from the C code. People like to do one operation at a time when writing code, as it's simpler to follow (obviously), but that's no longer how a lot of processors work - they can do several things at once.

    Assembler is fine for small routines, but not for entire projects. A compiler can keep track of things far more easily than a person. 

     

    Depends honestly on the project. We have certain projects where timings are so critical - in the microsecond range - that you just cannot write this in C. You'd be at the whim of whatever code the compiler decides to produce: while it may work today, tomorrow there's an update that changes compiler behavior and everything goes down the drain. YES, I have seen that happen. Suddenly the compiler optimizes things differently and everything is just off. And no, when you're dealing with timings below 1 millisecond, you can no longer call timer functions and such for timing, as the overhead starts to matter.


    Write in C - if you want the sensitive part set in stone, you can use an ASM module in your mixed project. Also, ALWAYS check the tools into the source code control system so you can always use the same compiler later, and sometimes it is smart to use an #ifdef to guard-check the build number of the tools (available as __BUILD_NUMBER__). The linker can be just as important too.
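
    Something like this, presumably - a sketch of that guard with a made-up build number:

    /* Sketch: refuse to build with anything but the pinned toolchain build.
     * 4711 is a placeholder; __BUILD_NUMBER__ is the vendor macro above. */
    #if __BUILD_NUMBER__ != 4711
    #error "Wrong compiler build - use the toolchain checked into source control"
    #endif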


    Sheesh!



  • @Kermos said:


    ........when you're dealing with timings below 1 millisecond, you can no longer call timer functions and such for timing as the overhead starts to matter.

    There's actually one such WTF in Microchip's USB code. ........


    And this is why you should not use Microchip - the banked memory structure alone should make you walk away.



  •  @Helix said:

    Write in C - if you want the sensitive part set in stone, you can use an ASM module in your mixed project. Also, ALWAYS check the tools into the source code control system so you can always use the same compiler later, and sometimes it is smart to use an #ifdef to guard-check the build number of the tools (available as __BUILD_NUMBER__). The linker can be just as important too.
    We don't have (and can't use) source control, as our manager doesn't believe in it! (It's not necessary, it's overcomplicated, blah blah.) Yeah, I know that's a MAJOR WTF, but what are you supposed to do?

    Well, actually, we do. The official source code control is called "Right-click the directory, choose Add to archive, and then rename the archive filename to the version number."



  • @dhromed said:

    @bstorer said:

    Who are you and what have you done with morbiuswilters?

     

    He may have quit smoking.

    Doesn't that usually make people pissy?  Maybe I re-started smoking.



  •  @Helix said:

    use #pragma inline=forced then

    Which is both compiler- and compiler-version-dependent. No thanks.




  • @Mole said:

    We don't have (and can't use) source control, as our manager doesn't believe in it! (It's not necessary, it's overcomplicated, blah blah.) Yeah, I know that's a MAJOR WTF, but what are you supposed to do?

    I could not work without source control.  I would refuse to work without it.

     

    Set up a local repo on your box, at the very least. Commit your changes and any files you come across that were modified by co-workers. Sure, it's not company policy to use source control, but are they going to penalize you for keeping a repo on your box? It will help you, and one day you might be the "hero" who recovers some lost revision and makes your boss see the light.



  • @Helix said:

    And this is why you should not use Microchip - the banked memory structure alone should make you walk away.
     

    Believe me, not my choice! The banked memory structure, along with the ridiculously limited instruction set, just drives me nuts. Operations that would be trivial on an ARM can be a royal pain here.




  • @morbiuswilters said:

    I could not work without source control.  I would refuse to work without it.
    Whenever I have a project that's going to take more than an hour to do, I normally use TortoiseSVN, for the simple reason that I can create an empty directory and tell it that's its repo. I love the fact that when I make a breaking change, I can easily check the modifications to find out exactly how I broke it. Everyone else thinks that is too complicated, however. Each to their own, I guess.



  • @Mole said:

     @Helix said:

    use #pragma inline=forced then

    Which is both compiler- and compiler-version-dependent. No thanks.


    True, but to be fair, completely compiler-independent projects don't happen in embedded development. Yes, you can minimise it, but usually only to a couple of tool chains, by developing for both at the same time. Since it sounds like you have more than one tool chain, I guess you are doing this. Not sure why... unless it's code for OEM or resell and you need to support multiple tools. Do you work for an embedded RTOS/middleware vendor?




  • Yes, true, but no embedded project is truly compiler-independent, not even taking into account the rest of the tool chain. At best you can develop for two or three different tool chains by developing for all of them in parallel. Since you indicate that you have a couple of tool chains at your disposal, and independence is obviously key to you, I guess you are planning to do this. Are you writing RTOS, middleware or driver code as a vendor, or code for OEM or resell such as a chip or module vendor... or something else?



  • @Mole said:

     @Helix said:

    Write in C - if you want the sensitive part set in stone, you can use an ASM module in your mixed project. Also, ALWAYS check the tools into the source code control system so you can always use the same compiler later, and sometimes it is smart to use an #ifdef to guard-check the build number of the tools (available as __BUILD_NUMBER__). The linker can be just as important too.
    We don't have (and can't use) source control, as our manager doesn't believe in it! (It's not necessary, it's overcomplicated, blah blah.) Yeah, I know that's a MAJOR WTF, but what are you supposed to do?

    Well, actually, we do. The official source code control is called "Right-click the directory, choose Add to archive, and then rename the archive filename to the version number."


    TRWTF of your project - really, really do set up your own personal SVN (CVS, SSC, whatever), and for god's sake put the tool chain in as well.



  • It's more along the lines of: we have several big customers for some of our projects, some requiring much more functionality than others, so they have different hardware and different MCUs, and we use different compilers by different vendors (even though one vendor actually supports both platforms and would make things a lot easier as a result; we have to work with what we've got - maybe it's a cost issue, considering it's $$$$ per platform). The code is 99% identical across all versions, with the compiler detected and the appropriate compiler extensions used.

    Even if it was the same compiler, though, it's honestly still daft that the compiler treats "inline = forced" as optional. If we compiled with low optimisation for debugging purposes, the resulting code would randomly crash, as some functions absolutely must be inlined no matter what. It's like the compiler thinks the only benefit of inlining is optimisation, when that clearly isn't the case. So for us there's still no alternative to macros that I've found yet.



  • BTW, how do you put a tool chain into source code control when said tool chain only works when properly installed (i.e., by running the setup.exe app)? You can't just copy the files and expect them to work like GCC. I've tried that before and the first thing it complains about is licensing, which I assume keeps options in the registry (I've always hated Microsoft for implementing that).

