The performance improvement (if any) from hyperthreading is difficult to predict.
Hyperthreading means that when one thread stalls for (almost) any reason, the CPU has a pool of instructions from another thread it can attempt to execute. Even without an actual stall, if the two scheduled threads use different execution resources, instructions from both can execute simultaneously on the same core. So if, for example, the code is heavily dependent on main-memory latency (e.g., unpredictable read patterns with no prefetching), hyperthreading might improve performance substantially.
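Just to illustrate the kind of code where that tends to happen, here's a minimal sketch (not anything from your code, obviously) of a latency-bound gather through a random index array. Each thread spends most of its time stalled on cache misses, which is exactly what a second hardware thread per core can cover for:

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1 << 25)   /* ~32M elements: far larger than any cache */

int main(void)
{
    int    *idx  = malloc(N * sizeof *idx);
    double *data = malloc(N * sizeof *data);
    double sum = 0.0;
    unsigned long long s = 88172645463325252ULL;

    for (long i = 0; i < N; i++) {
        s ^= s << 13; s ^= s >> 7; s ^= s << 17;  /* xorshift64 PRNG */
        idx[i]  = (int)(s % N);                   /* unpredictable read pattern */
        data[i] = (double)i;
    }

    double t0 = omp_get_wtime();
    /* Gather through a random index array: the hardware prefetcher can't
     * help, so each iteration is dominated by main-memory latency.  This
     * is the kind of loop where hyperthreading often shows a real gain. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += data[idx[i]];
    double t1 = omp_get_wtime();

    printf("sum = %g, %.3f s with %d threads\n", sum, t1 - t0, omp_get_max_threads());
    free(idx); free(data);
    return 0;
}
```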
In the other direction, if the code is carefully written to hide latency through good cache use, prefetching, and so on, it may gain little or nothing from hyperthreading. Especially on older OSes whose schedulers don't account for hyperthreading, the extra threads can actually cause extra context switches, slowing overall execution.
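For contrast, a sketch of the opposite case: a streaming loop whose latency is already covered by sequential access (and, just to make the point, explicit prefetch hints -- `__builtin_prefetch` is a GCC/Clang extension). Here both hardware threads of a core mostly compete for the same FP units and memory bandwidth, so hyperthreading usually adds little:

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N    (1 << 25)
#define PAD  64        /* extra elements so the prefetch-ahead stays in bounds */

int main(void)
{
    double *a = malloc((N + PAD) * sizeof *a);
    double *b = malloc((N + PAD) * sizeof *b);
    double dot = 0.0;

    for (long i = 0; i < N + PAD; i++) { a[i] = 1.0; b[i] = 2.0; }

    double t0 = omp_get_wtime();
    /* Sequential access: the hardware prefetcher already hides most of the
     * latency; the explicit prefetch hints just reinforce the point. */
    #pragma omp parallel for reduction(+:dot)
    for (long i = 0; i < N; i++) {
        __builtin_prefetch(&a[i + PAD], 0, 0);
        __builtin_prefetch(&b[i + PAD], 0, 0);
        dot += a[i] * b[i];
    }
    double t1 = omp_get_wtime();

    printf("dot = %g, %.3f s with %d threads\n", dot, t1 - t0, omp_get_max_threads());
    free(a); free(b);
    return 0;
}
```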
Assuming you're starting with completely single-threaded code and adding some OpenMP directives, my own experience is that hyperthreading is typically good for improving performance by something on the order of 10%. If the code makes almost any attempt at prefetching or anything similar, most (if not all) of that advantage evaporates almost immediately.
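If you want to put a rough number on it for your own code, the simplest experiment is to time the same OpenMP loop once with one thread per physical core and once with one thread per logical core. Something along these lines (the loop and the `PHYS_CORES` value are placeholders -- set it to your machine's actual physical core count, e.g. from `lscpu`):

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N          (1 << 25)
#define PHYS_CORES 4            /* assumed physical core count -- adjust */

static void run(int nthreads, const double *x)
{
    double sum = 0.0, t0 = omp_get_wtime();
    #pragma omp parallel for num_threads(nthreads) reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += x[i] * x[i];
    printf("%2d threads: %.3f s (sum = %g)\n", nthreads, omp_get_wtime() - t0, sum);
}

int main(void)
{
    double *x = malloc(N * sizeof *x);
    for (long i = 0; i < N; i++) x[i] = 1.0;

    run(PHYS_CORES, x);              /* one thread per physical core      */
    run(omp_get_max_threads(), x);   /* all logical (hyperthreaded) cores */
    free(x);
    return 0;
}
```

Whatever gap you see between the two runs on your real workload is the hyperthreading contribution; in my experience it's rarely more than that ~10% unless the code looks like the latency-bound gather above.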