ARM vs Thumb performance on iPhone 3GS, non-floating-point code

I was wondering if anyone had any hard numbers on ARM vs Thumb code performance on the iPhone 3GS, specifically for code that does not use floating point (VFP or NEON); I'm aware of the issues with floating-point performance in Thumb mode.

Is there a point where the extra code size of the wider ARM instructions becomes a performance hazard? In other words, if my executable code is relatively small compared to available memory, is there any measured performance difference when Thumb mode is turned on?

The reason I ask is that while I can enable ARM for the NEON-specific source files in Xcode using the "-marm" option, this breaks the Simulator build because GCC is building for x86. I was wondering whether I should just turn off "compile as thumb" and be done with it.

Extravasate answered 29/7, 2009 at 5:25 Comment(3)
Ooh Random -1 vote with no explanation. Nice one.Extravasate
Wow another one. Classy effort people - we're all learning a lot.Extravasate
+1 - Seems like a reasonable question to me (only gets you back up to zero though I'm afraid...)Fatma

I don't know about the iPhone, but a blanket statement that Thumb is slower than ARM is not correct at all. Given 32-bit-wide, zero-wait-state memory, Thumb will be a little slower, with numbers like 5% or 10%. If it is Thumb-2, that is a different story: it is said that Thumb-2 can run faster. I don't know what the iPhone has; my guess is that it is not Thumb-2.
If you are not running from zero-wait-state, 32-bit-wide memory, then your results will vary. The memory width is a big factor: if you are running on a 16-bit-wide bus, like the Game Boy Advance family, with some wait states on that memory or ROM, then Thumb can easily outrun ARM in performance, even though it takes more Thumb instructions to perform the same task.
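
A quick way to see the instruction-count vs code-size trade-off for yourself is to compile the same source both ways and compare section sizes (a sketch only; hotloop.c is a hypothetical file and the flags assume an ARM-targeting gcc):

    # same source, same optimizer, two encodings
    gcc -O2 -marm   -c hotloop.c -o hotloop_arm.o
    gcc -O2 -mthumb -c hotloop.c -o hotloop_thumb.o
    # compare the .text sizes; Thumb is usually noticeably smaller in bytes,
    # even though it takes more instructions to do the same work
    size hotloop_arm.o hotloop_thumb.o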

Test your code! It is not hard to invent a test that shows whichever result you are after: it is as easy to show that ARM blows away Thumb as it is to show that Thumb blows away ARM. Who cares what the Dhrystone numbers are; what matters is how fast YOUR code runs TODAY.
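
A minimal sketch of such an A/B test (hypothetical file names, and the flags assume an ARM-targeting gcc; run the binaries on the device itself, since the Simulator is x86):

    gcc -O2 -marm   hotloop.c -o hotloop_arm
    gcc -O2 -mthumb hotloop.c -o hotloop_thumb
    time ./hotloop_arm     # your code, ARM encoding
    time ./hotloop_thumb   # your code, Thumb encoding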

What I have found over the years in testing code performance for ARM is that your code and your compiler are the big factors. Thumb is a few percent slower in theory because it uses a few percent more instructions to perform the same task. But did you know that your favorite compiler could be horrible, and that by simply switching compilers you could run several times faster (gcc falls into that category)? Or try the same compiler with different optimization options. Either way, you can dwarf the ARM/Thumb difference by being smart about using the tools. You probably know this, but you would be surprised how many people think that the one way they know how to compile code is the only way, and that the only way to get better performance is to throw more memory or other hardware at the problem.

If you are on the iPhone, I hear those folks are using LLVM? I like the LLVM concept in many ways and am eager to use it as my daily driver when it matures, but I found it to produce code that was 10-20% (or much more) slower for the particular task I was doing. I was in ARM mode, I did not try Thumb mode, and I had the L1 and L2 caches on. Had I tested without the caches, to truly compare Thumb to ARM, I would probably have seen Thumb a few percent slower. But think about it (which I wasn't interested in doing at the time): you can cache twice as much Thumb code as ARM code, which MIGHT imply that even though there is a few percent more code overall for the task, by caching significantly more of it and reducing the average fetch time, Thumb can be noticeably faster. I may have to go try that.

If you are using llvm, you have the other problem of multiple places to perform optimizations. Going from C to bytecode you can optimize; you can then optimize the bytecode itself; you can then merge all of your bytecode and optimize that as a whole; and then, going from bytecode to assembler, you can optimize. If you had only 3 source files, and assumed only two optimization levels per opportunity (don't optimize or do optimize), with gcc you would have 2^3 = 8 combinations to test. With llvm the number of experiments is far higher: 2^3 per-file choices for C to bytecode, times 2^3 for the per-file bytecode passes, times 2 for the merged bytecode, times 2 for the assembler step is already 256, and it grows from there; more than you can really run, hundreds to thousands. For the one test I was running, NOT optimizing on the C-to-bytecode step, then NOT optimizing the bytecode files while separate, but optimizing after merging the bytecode files into one big(ger) one, and then having llc optimize on the way to ARM, produced the best results.

Bottom line... test, test, test.

EDIT:

I have been using the word bytecode; I think the correct term is bitcode in the LLVM world. The code in the .bc files is what I mean...

If you are going from C to ARM using LLVM, there is bitcode (bc) in the middle. There are command-line options for optimizing on the C-to-bc step. Once you have bc files, you can optimize each one, bc to bc. If you choose, you can merge two or more bc files into bigger bc files, or just turn all the files into one big bc file. Then each of these combined files can also be optimized.

My theory, which only has a couple of test cases behind it so far, is that if you do not do any optimization until you have the entire program/project in one big bc file, the optimizer has the maximum amount of information with which to do its job. So that means: go from C to bc with no optimization; then merge all the bc files into one big bc file; once you have the whole thing as one big bc file, let the optimizer perform its optimization step, maximizing the information and hopefully the quality of the optimization; then go from the optimized bc file to ARM assembler. The default setting for llc is with optimization on; you do want to allow that optimization, as it is the only step that knows how to optimize for the target. The bc-to-bc optimizations are generic and not target-specific (AFAIK). A sketch of this recipe follows.
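
Here is a minimal sketch of that recipe with the LLVM command-line tools (file names are hypothetical, and exact flag spellings vary between LLVM releases, so check your version):

    # 1. C to bitcode, one .bc per source file, no optimization
    llvm-gcc -c -emit-llvm -O0 a.c -o a.bc
    llvm-gcc -c -emit-llvm -O0 b.c -o b.bc
    llvm-gcc -c -emit-llvm -O0 c.c -o c.bc
    # 2. merge everything into one big, still-unoptimized bc file
    llvm-link a.bc b.bc c.bc -o whole.bc
    # 3. run the generic bc-to-bc optimizer once, with whole-program visibility
    opt -std-compile-opts whole.bc -o whole.opt.bc
    # 4. llc, the only target-aware step, optimizes on the way to ARM assembler
    llc -march=arm whole.opt.bc -o whole.s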

You still have to test, test, test. Go ahead and experiment with optimizations between the steps and see if it makes your program run faster or slower.

Skipbomb answered 2/8, 2009 at 6:30 Comment(4)
Can you elaborate on this? "NOT optimizing on the C-to-bytecode step, then NOT optimizing the bytecode files while separate, but optimizing after merging the bytecode files into one big(ger) one, and then having llc optimize on the way to ARM, produced the best results."Heiskell
The iPhone 3GS has a Cortex-A8 which does support Thumb-2. However, I don't know if Xcode will let you use it. Can you target a specific iPhone revision?Desist
As far as I know, Apple hasn't included LLVM for ARM in Xcode yet, IMHO it's not ready for prime time on ARM.Kassie
The "specific" information in these answer is outdated. Xcode by default uses the LLVM compiler for new projects. And, with the default project settings the LLVM compiler produces THUMB ARM assembly.Apteral

See this PDF for ARM/Thumb performance, code size, and power consumption trade-offs:

Profile Guided Selection of ARM and Thumb Instructions
   - Arvind Krishnaswamy and Rajiv Gupta, Department of Computer Science, The University of Arizona

Extravasate answered 29/7, 2009 at 7:5 Comment(2)
A link is not really an answer, but I have updated it with a good link.Lone
It concludes that ARM code is larger and has higher I-cache energy but runs faster; Thumb code is smaller and has lower I-cache energy but runs slower.Diplosis

Thumb code will essentially always be slower than equivalent ARM. The one case where Thumb code can be a big performance win is if it makes the difference between your code fitting into on-chip memory or cache.

It's hard to give exact numbers on performance differences, because it's entirely dependent on what your code actually does.

You can set per-architecture compiler flags in Xcode, which avoids breaking the Simulator build; see the Xcode build settings documentation. One possible shape of this is sketched below.
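
For instance, something along these lines in an .xcconfig file (a sketch, not verified against any particular Xcode version; GCC_THUMB_SUPPORT is the "Compile for Thumb" setting, and the bracketed conditions are Xcode's conditional build-setting syntax):

    // keep Thumb on in general, but turn it off for armv7 device builds;
    // the i386 Simulator build never sees the ARM-only setting
    GCC_THUMB_SUPPORT = YES
    GCC_THUMB_SUPPORT[sdk=iphoneos*][arch=armv7] = NO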

Sprocket answered 29/7, 2009 at 12:35 Comment(0)
