RE: Fastest C++ Compiler?
First, to quote an excessive amount of text (sorry, but I think
pretty much all of it is relevant to my response):
> Which compiler makes the fastest code? This thread came up a year or
> so ago, with people saying either gcc or Watcom if I remember
> correctly.
> I'm using Visual C++ 4.0, and wondered how easy it would be to
> recompile just certain files (i.e. those that do search and analysis)
> in gcc, but leave all the interface-related code (which uses MFC)
> with Visual C++.
> Is it possible to link gcc object files using Visual C++? I'm
> thinking I'll do the development and debugging in Visual C++, and
> then once I've got an algorithm finished I'll recompile it with gcc.
> Does anyone have any advice, or experience doing this?
> Darren
Darren,
Well, first of all, MSVC 5.0 is out and it's a bit faster than MSVC 4.2.
Secondly, Intel makes a drop-in replacement for that compiler (buy
VTune 2.5, then download the upgraded compiler from Intel) that does
profile-based optimizations. GCC is also pretty quick.
The problem is that I've seen code perform in many different ways with
all three of these compilers. My main development environment for C is
MSVC. When I tried out the Intel compiler it sped up some things but
slowed down others. I develop compression engines for a living, and in
particular the Intel C compiler sped up my compressor very marginally
and slowed down my decompressor significantly. I traced this down to
some smarts MSVC has in optimizing sparse "switch" statements into
jump tables, and some bit-shifting smarts.
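To illustrate the kind of decision the compiler is making here, a
minimal sketch (the function names and case values are made up for
illustration, not taken from my compressor):

```cpp
// Dense, consecutive case values let a compiler emit one indexed jump
// (a jump table) instead of a chain of compares.
int decode_dense(int code) {
    switch (code) {
        case 0: return 10;
        case 1: return 20;
        case 2: return 30;
        case 3: return 40;
        default: return -1;
    }
}

// Sparse case values usually force compare-and-branch chains; a compiler
// with "smarts" may still split the range or pack it into a table.
int decode_sparse(int code) {
    switch (code) {
        case 1:    return 10;
        case 100:  return 20;
        case 5000: return 30;
        default:   return -1;
    }
}
```

Whether the sparse version gets a table, a binary search of the cases,
or a plain compare chain is entirely up to the compiler, which is why
the same source can run at very different speeds under MSVC, Intel, and
GCC.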
Now, GCC on the same product sped up the decompressor and slowed down
the compressor.
Note, however, that the compression code is atypical of most
applications. It's data-flow sensitive, and a compiler can easily make
a mistake that affects performance fairly drastically. It's also to
some extent "OBNF" (One Big Nasty Function) for speed's sake, and that
really affects some compilers (dealing with very large functions is the
weak spot of some compilers--in the extremely highly optimized code that
I write, making Big Nasty Functions is sometimes necessary, or I take a
HUGE performance hit).
Now, as far as linking GCC object files, I know it's possible. However,
it's not easy. Using the GNU binutils you can pretty much convert
almost any object file format to another (if you build them with all
object file types supported). I've done it once before, "the other way
around", by taking MSVC code and linking it into gcc code. However, I
used a Linux box to do it, and I can't remember whether there were
calling convention problems. I may have had to write assembly language
"converters" between the two code bases. However, I'm pretty confident
that you should be able to take the COFF output of DJGPP and/or GCC
on a Unix box and just link it in using MSVC's COFF support. However,
I'm not sure...it may be work. You may have to use object conversion
utilities, and you may run into calling convention trouble.
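On the source side, the usual defense against mangling and calling
convention mismatches looks like the sketch below (the function name and
signature are hypothetical, just standing in for one of your search
routines):

```cpp
// Hypothetical search routine that would live in the gcc-compiled
// object file. extern "C" turns off C++ name mangling so both
// compilers agree on the symbol name; on Win32 you would additionally
// pin the calling convention (e.g. __cdecl) in the shared header.
extern "C" int search_position(const int *board, int depth);

// Stand-in implementation so the sketch links on its own; in the real
// setup this body lives in the separately compiled module.
extern "C" int search_position(const int *board, int depth) {
    int score = 0;
    for (int i = 0; i < depth; ++i)
        score += board[i];  // trivial placeholder "evaluation"
    return score;
}
```

Declaring the boundary functions extern "C" in a header included by
both sides is cheap insurance even if the object formats themselves
convert cleanly.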
GCC also has (as I understand it) incomplete support for the current
ANSI draft standard's C++ templates. Since C++ templates are Turing
complete at compile time, and are (in my mind) monstrous constructs,
this is no huge surprise. I imagine they will eventually be fully
supported (if they aren't already). Mayhaps after ANSI C++ becomes a
true standard.
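That Turing completeness is not an exaggeration: the compiler will
evaluate recursion during template instantiation. The standard small
example of this:

```cpp
// Compile-time factorial: instantiating Factorial<N> forces the
// compiler to instantiate Factorial<N - 1>, so the whole computation
// happens during compilation, not at run time.
template <unsigned N>
struct Factorial {
    static const unsigned value = N * Factorial<N - 1>::value;
};

// Explicit specialization terminates the recursion.
template <>
struct Factorial<0> {
    static const unsigned value = 1;
};
```

A compiler has to get recursive instantiation, specialization matching,
and constant folding all exactly right just for this toy, which gives
some idea of why full template conformance is taking vendors a while.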
One last caveat: a compiler is not going to save your day as far as
performance goes. The only way to get true performance gains of the
order of magnitude that you need for Go is to change your algorithm.
You can get anywhere from 2-20 times speedup using assembly (most often
around 2-5, unless you are somehow able to use some trickery in
assembly that's very difficult to represent in C). That's peanuts
compared to algorithmic gains. Compiler differences top out at about
50% more effectiveness, and between the ones mentioned the differences
are going to be less--about 15%. That's much less than peanuts. A much
better time investment may be improving your algorithm.
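A toy illustration of why the algorithm beats the compiler (the
functions here are made up for the example): both versions below give
the same answer, but no optimizer will turn the O(n) loop into the O(1)
closed form for you.

```cpp
// O(n): test every value from 1 to n.
long count_even_loop(long n) {
    long count = 0;
    for (long i = 1; i <= n; ++i)
        if (i % 2 == 0) ++count;
    return count;
}

// O(1): the even numbers in 1..n are exactly the multiples of 2,
// of which there are n/2. An algorithmic insight, not a compiler one.
long count_even_closed(long n) {
    return n / 2;
}
```

A 15% compiler win shrinks the loop's constant factor; the closed form
makes n irrelevant entirely. Go search trees reward the second kind of
thinking.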
If you ever plan to go to assembly for Intel platforms, I highly
recommend the use of VTune. It will spot things you never would've
seen--partial register stalls, address generation interlocks, cache
boundary problems, non-pipelineable code, etc. I also recommend that
you not use DJGPP or GCC variants, because I find GNU AT&T-style
assembly to be horrific (it's made for a compiler to write, not a
human--if you get used to it it's marginally okay, though you may want
to use a macro language on top of it), and it's very hard to port
source in that code base to another assembler. Turbo Assembler in
ideal mode is very nice. MASM (Microsoft's macro assembler) is
bug-ridden. NASM (the Netwide Assembler) is feature-poor--and I won't
consider it seriously till they add some better debugging support.
-Scott Dossey