There's a range of possibilities. You could be filtering out large numbers of rows with invalid/NULL dates in pub_time, but I doubt you'd fail to notice or mention a significant number of these.
The three things that stick out in my mind are:
1 - You have an index or composite index involving pub_time, and the restriction in your where clause is triggering the use of a different access path.
2 - You had no statistics available to the optimizer when you ran your first query. When you ran the second query, a better access path was selected thanks to information cached during the first run. You can verify this by running the first query a few more times and seeing whether there's a significant performance improvement.
3 - Similar to the first point, the optimizer could simply be selecting a better access path based on the implications of the where clause alone. Perhaps the hint that NULL/invalid values won't have to be handled is enough - your system could be avoiding one or more full table scans to weed out invalid/NULL pub_times.
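If stale or missing statistics (point 2) are the culprit, refreshing them explicitly should make the timings consistent. A minimal sketch, assuming the table holding pub_time is called ARTICLES in your own schema (adjust ownname/tabname to your setup):

```sql
-- Gather fresh optimizer statistics; "ARTICLES" is a placeholder table name.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,       -- current schema
    tabname => 'ARTICLES',
    cascade => TRUE        -- also gather statistics on the table's indexes
  );
END;
/
```

After gathering, re-run both variants of the query and compare; if the gap closes, missing statistics were the likely cause.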
Pinpointing the reasons for behaviour like this quickly becomes an empirical exercise - it's hard to say more without knowing your platform and version. From the tag I take it you're using Oracle, in which case you should be able to use some form of "explain plan" tool to get a better sense of what's going on. For more information on the Oracle optimizer see http://docs.oracle.com/cd/B10500_01/server.920/a96533/optimops.htm (this is for Oracle 9i v9.2, but it has a decent explanation of the version-independent concepts).
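Concretely, you can capture the plan for each variant of your query and compare the access paths. A sketch, with a placeholder ARTICLES table and an illustrative pub_time filter standing in for your actual statement:

```sql
-- Capture the execution plan for the query (substitute your real SELECT).
EXPLAIN PLAN FOR
  SELECT *
    FROM articles
   WHERE pub_time >= DATE '1900-01-01';

-- Display the captured plan from the plan table.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Run this once for each version of the query; a change in the operations shown (e.g. INDEX RANGE SCAN vs. TABLE ACCESS FULL) would confirm that the where clause is steering the optimizer to a different access path.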
[…] pub_time column? – Moderator
null values, or simply entries with faulty time values before 01.01.1900 – Thracian