STL in embedded environment

0 votes
asked Oct 4, 2010 by alok-save

I am a C++ programmer and over the years I have repeatedly heard the notion that the STL is not good for use in embedded environments, and hence is usually prohibited in embedded projects. I believe libraries like the STL and Boost are far more powerful and provide a much faster and less error-prone means of development (of course the syntax is a little intimidating, but once past that I think it's a real treasure). Also, I find the claim that the STL is heavy and increases the final code footprint absurd: since it is templatized, you only get compiled code for what you actually asked for, not the entire STL.

My question is: what are the reasons behind this popular notion (at least most people around me think so) that the STL is not for embedded environments?

I do see a question of a similar nature, but here I am expecting help in pointing out the pros and cons of the STL in embedded environments in general.

Edit: I will add up the points here as the replies come in:
1. Portability issues
2. Coping with large dynamic allocations by STL containers
3. The STL is hard to debug
4. Deep function calls in the STL result in low performance on compilers weak at inlining (the power of functors becomes useless!)
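Regarding point 2, one common mitigation is to reserve a container's worst-case capacity once at startup, so no reallocation can occur in steady state. This is a minimal sketch under that assumption; the function names and the 128-element bound are made up for illustration:

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch: bound a container's heap usage by reserving its
// worst-case capacity once at startup, so no surprise reallocation can
// happen later.
std::vector<int> samples;

void init_samples() {
    samples.reserve(128);               // one allocation, worst-case size
}

bool push_sample(int v) {
    if (samples.size() == samples.capacity())
        return false;                   // refuse instead of reallocating
    samples.push_back(v);
    return true;
}
```

This keeps the convenience of std::vector while making the allocation pattern as predictable as a static array.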

11 Answers

0 votes
answered Jan 4, 2010 by luka-rahne

For me, the only good reason not to use a library is if it does not fit within your constraints, or if its size may become a problem later. If that is not a problem for you, go for it. In any case, you are unlikely to do better by hand.

0 votes
answered Jan 4, 2010 by expategghead

I haven't experienced any downside to using the STL in embedded systems and I plan to use it in my current project. Boost as well.

0 votes
answered Oct 4, 2010 by ofir

Many think that (for many reasons, such as portability) C++ isn't a good fit for an embedded environment. There are many types of embedded environments and STL is certainly OK for some of them.

In general, 'more powerful' is always a phrase to fear when you need to choose anything for a resource constrained environment, as you often want something less powerful and more controllable. Especially if 'more powerful' means the developer (or whoever maintains the code later) would have less understanding of the underlying implementation.

0 votes
answered Oct 4, 2010 by nikko

I think the choice depends on your target platform(s). If you have a conforming C++ compiler and do not mind the dynamic memory allocation that containers perform, I don't see any problem.

0 votes
answered Oct 4, 2010 by singlenegationelimin

That depends on what you mean by embedded. On Atmel 8-bit systems, there's very little RAM: so little that you can't really have a reasonable malloc. In that case you want to manage memory very explicitly, probably with static arrays of the types you need. If you've done that, you basically have no need for most of the STL.
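The static-array approach can be sketched as a tiny fixed-capacity container. This is illustrative only and not any particular library's API:

```cpp
#include <cstddef>

// Hypothetical fixed-capacity buffer with inline storage -- the kind of
// structure one might use on a tiny MCU instead of std::vector.
template <typename T, std::size_t N>
class StaticVector {
    T data_[N];
    std::size_t size_ = 0;
public:
    bool push_back(const T& v) {
        if (size_ == N) return false;   // full: fail instead of malloc
        data_[size_++] = v;
        return true;
    }
    std::size_t size() const { return size_; }
    T& operator[](std::size_t i) { return data_[i]; }
};
```

The capacity is fixed at compile time, so memory use is fully known up front and there is no heap at all.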

On ARM systems, you've got plenty of RAM. Use the STL!

0 votes
answered Oct 4, 2010 by necrolis

The STL has quite a few problems (as documented by EASTL); on an embedded or small-scale system, the main problem is generally the way in which it manages its memory. A good example of this was the PSP port of Aquaria.

My advice, though, is to test first before following assumptions. If the tests show you are using too much space or too many processor cycles, then maybe an optimization or two can push it back into the realm of 'usable'.

Finally, Boost is template-based, so if you are looking at the size of generated template code, it will suffer the same as the STL.

Edit/Update:

To clear up my last statement (which was just referring to Boost vs. the STL): in C, you can (ab)use the same code to do the same job on different structures sharing the same header (or layout), but with templates, each type may get its own copy (I have never tested whether any compilers are smart enough to merge them when 'optimize for size' is enabled), even when it is exactly the same (at the machine/assembly level) as one that has already been generated. Boost has the advantage of being a lot cleaner to read and having far more things crammed into it, but that can lead to long compile times due to a copious amount of (sometimes huge) headers. The STL gains because you can pass your project around without requiring a download of Boost to accompany it.
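The C-style reuse versus per-type template instantiation can be sketched like this (illustrative names; the 16-byte element limit is an assumption of the sketch, not a rule):

```cpp
#include <cstddef>
#include <cstring>

// C-style: one compiled body serves any element type via void*/memcpy,
// at the cost of type safety.
void swap_any(void* a, void* b, std::size_t size) {
    unsigned char tmp[16];              // assumes elements <= 16 bytes
    std::memcpy(tmp, a, size);
    std::memcpy(a, b, size);
    std::memcpy(b, tmp, size);
}

// Template style: the compiler stamps out a separate instantiation per T,
// which may duplicate identical machine code across types.
template <typename T>
void swap_typed(T& a, T& b) {
    T tmp = a;
    a = b;
    b = tmp;
}
```

Both do the same job, but the template version trades potential code duplication for type safety and inlining opportunities.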

0 votes
answered Oct 4, 2010 by kartheek

I came across this presentation: Standard C++ for Embedded Systems Programming

The bulk of the complexity with templates lies in the compiler rather than in the runtime system, and that is partly where the problem lies: we don't know for sure how much optimization the compiler is able to accomplish. In fact, C++ code based on the STL is supposed to be more compact and faster than C++ code not using templates, and even than C code!
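The usual concrete example behind the "faster than C" claim is std::sort versus qsort: the template receives the comparison statically and can inline it, while qsort must call through a function pointer on every comparison. A sketch:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdlib>

// C-style comparator, called through a function pointer by qsort.
int cmp_int(const void* a, const void* b) {
    return *static_cast<const int*>(a) - *static_cast<const int*>(b);
}

void sort_c(int* p, std::size_t n) {
    std::qsort(p, n, sizeof(int), cmp_int);   // indirect call per compare
}

void sort_cpp(int* p, std::size_t n) {
    std::sort(p, p + n);                      // comparison can be inlined
}
```

Both produce the same result; the difference is that std::sort's comparison is visible to the optimizer at the call site.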

0 votes
answered Oct 4, 2010 by jerry-coffin

There is some logic behind the notion that templates lead to larger code. The basic idea is pretty simple: each instantiation of a template produces essentially separate code. This was particularly problematic with early compilers -- since templates (typically) have to be put into headers, all the functions in a template are inline. That means if you have (for example) vector<int> instantiated in 10 different files, you (theoretically) have 10 separate copies of each member function you use, one for each file in which you use it.

Any reasonably recent compiler (less than, say, 10 years old) will have some logic in the linker to merge these back together, so instantiating vector<int> across 10 files will only result in one copy of each member function you used going into the final executable. For better or worse, however, once it became "known" that templates produce bloated code, a lot of people haven't looked again to see whether it remained true.

Another point (that remains true) is that templates can make it easy to create some pretty complex code. If you're writing things on your own in C, you're generally strongly motivated to use the simplest algorithm, collection, etc. that can do the job -- sufficiently motivated that you're likely to check into details like the maximum number of items you might encounter to see if you can get away with something really simple. A template can make it so easy to use a general purpose collection that you don't bother checking things like that, so (for example) you end up with all the code to build and maintain a balanced tree, even though you're only storing (say) 10 items at most so a simple array with linear searches would save memory and usually run faster as well.
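The trade-off described above, a plain array with linear search instead of a balanced tree for a handful of items, can be sketched like this (names are illustrative):

```cpp
#include <cstddef>

// For ~10 keys, a flat table with linear search avoids pulling in
// red-black-tree code (as std::map would) and is often faster too.
struct Entry {
    int key;
    int value;
};

const int* find_linear(const Entry* table, std::size_t n, int key) {
    for (std::size_t i = 0; i < n; ++i)
        if (table[i].key == key)
            return &table[i].value;
    return nullptr;                     // not found
}
```

The table can even live in ROM as a const static array, costing zero RAM.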

0 votes
answered Oct 4, 2010 by sbass

As people have said there is a wide range of "embedded" systems. I'll give my perspective, which focuses on safety critical and hard real time systems.

Most guidelines for safety critical systems simply forbid the use of dynamic memory allocations. It is simply much easier and safer to design the program if you never have to worry about a malloc/new call failing. And for long running systems where heap fragmentation can occur, you can't easily prove that the memory allocation won't fail, even on a chip / system with large amounts of memory (especially when the device must run for years without restarting).
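One way the "no dynamic allocation after startup" rule is sometimes enforced is to funnel heap use through a single function that trips an assertion once initialization is complete. This is an illustrative sketch only; the names are made up and not from any guideline document:

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical allocate-only-during-init guard.
static bool g_init_done = false;

void mark_init_done() { g_init_done = true; }

void* sys_alloc(std::size_t size) {
    // Trip an assertion on any allocation after startup is complete.
    assert(!g_init_done && "dynamic allocation after init is forbidden");
    return std::malloc(size);
}
```

Once all allocations happen during a bounded init phase, a malloc failure can only occur at startup, where it is easy to detect and handle.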

In scenarios where there are tight timing deadlines, the uncertainties involved in dynamic memory allocation and instantiation of complex objects are frequently too large to deal with. This is why many programmers who work in these areas stick with C. You can look at C source and guess how long an operation takes. With C++, it is easier for simple looking code to take longer than it appears to. Those who use C++ in such systems tend to stick to simple plain vanilla code. And code which usually is fast, but occasionally takes a long time to execute is worse than code that is slower but consistent.

What I've done in larger projects is isolate the real-time and critical functions from the rest. The non-critical stuff can be written using standard tools like the STL. That's okay as long as it doesn't get in the way of the critical parts. And if I can't guarantee that there are no such interactions, then I don't use those tools at all.

0 votes
answered Oct 4, 2010 by jared-grubb

I was on an embedded project that used C++ and STL in a very constrained system (memory in a fraction of a megabyte, ARMv4 at low speed). For the most part, STL was great, but there were parts that we had to skip (for example, std::map required 2-4k of code per instantiation [which is a big number relative to our ROM size], and we had our own custom replacement for std::bitset [it was maybe ~1k ROM]). But, std::vector and std::list were very helpful, as was using boost::intrusive_ptr for reference counting (shared_ptr was way too big, about 40 bytes RAM per object!).
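The size advantage of intrusive reference counting comes from storing the count inside the object, so the smart pointer itself is just one raw pointer (versus shared_ptr's pointer plus a separately allocated control block). This is a minimal sketch in the spirit of boost::intrusive_ptr, with made-up names, not its actual API:

```cpp
// Hypothetical intrusive reference counting sketch.
struct RefCounted {
    int refs = 0;   // the count lives in the object itself
};

template <typename T>
class IntrusivePtr {
    T* p_ = nullptr;
public:
    explicit IntrusivePtr(T* p) : p_(p) { if (p_) ++p_->refs; }
    IntrusivePtr(const IntrusivePtr& o) : p_(o.p_) { if (p_) ++p_->refs; }
    ~IntrusivePtr() { if (p_ && --p_->refs == 0) delete p_; }
    IntrusivePtr& operator=(const IntrusivePtr&) = delete;  // kept minimal
    T* get() const { return p_; }
};
```

The pointer object is the size of one machine word, and no per-object control block is ever heap-allocated.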

The one downside to using the STL is that you have no error recovery when exceptions are turned off (which they were for us, as exceptions and RTTI were not cheap on our compiler). For example, if a memory allocation failed somewhere inside this line (my_map being a std::map):

my_map[5] = 66;

you wouldn't see it, and the code would just silently keep moving forward; chances are the object is now in a broken state, and you wouldn't crash until much later on.
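To make that failure mode concrete: with exceptions off there is no bad_alloc to catch, but for explicit allocations you can at least use nothrow new so failure is observable at the call site. A hedged sketch (the function name is made up):

```cpp
#include <cstddef>
#include <new>

// With exceptions disabled, `my_map[5] = 66` cannot report an allocation
// failure. For allocations under your own control, nothrow new returns
// nullptr on failure instead of throwing, so the caller can react.
bool alloc_buffer(int*& buf, std::size_t n) {
    buf = new (std::nothrow) int[n];
    return buf != nullptr;              // explicit, checkable failure
}
```

Container-internal allocations remain unobservable, which is why custom allocators or fixed-capacity containers are the usual workaround.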

That being said, we had great success with C++ and STL. As another poster said, try it out on your system and measure which parts of STL work. As a side note, there's a great technical report on C++ performance in general that is a good read: http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf
