Most compilers will go even further than reducing division by a power of 2 to a shift: they'll often convert integer division by any constant into a sequence of multiply, shift, and add instructions instead of using the CPU's built-in divide instruction (if there even is one).
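Note that even the power-of-2 case isn't a bare shift for signed types: C division truncates toward zero, while an arithmetic right shift rounds toward negative infinity, so the compiler biases negative dividends first. A minimal sketch of the usual transformation (the function name is mine, and it assumes the common arithmetic-shift behavior for signed values, which is implementation-defined in C):

// What a compiler typically turns signed x / 8 into:
int div8(int x) {
    int bias = (x >> 31) & 7;   // 7 when x is negative, 0 otherwise
    return (x + bias) >> 3;     // now the shift truncates toward zero
}

For non-power-of-2 divisors, the multiply/shift/add trick takes over.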
For example, MSVC converts division by 71 to the following:
// volatile int y = x / 71;
8b 0c 24 mov ecx, DWORD PTR _x$[esp+8] ; load x into ecx
b8 49 b4 c2 e6 mov eax, -423447479 ; magic happens starting here...
f7 e9 imul ecx ; edx:eax = x * 0xe6c2b449
03 d1 add edx, ecx ; edx = x + edx
c1 fa 06 sar edx, 6 ; edx >>= 6 (with sign fill)
8b c2 mov eax, edx ; eax = edx
c1 e8 1f shr eax, 31 ; eax >>= 31 (no sign fill)
03 c2 add eax, edx ; eax += edx
89 04 24 mov DWORD PTR _y$[esp+8], eax ; store result in y
So, you get a divide by 71 with a multiply, a couple of shifts, and a couple of adds.
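Here's the same computation expressed as a C sketch (the name is mine, and the right shifts of negative values again assume the usual arithmetic-shift behavior):

#include <stdint.h>

int32_t div71(int32_t x) {
    int64_t prod = (int64_t)x * -423447479;    // imul: full 64-bit product
    int32_t hi = (int32_t)(prod >> 32);        // edx = high 32 bits of product
    hi += x;                                   // add edx, ecx
    hi >>= 6;                                  // sar edx, 6
    return hi + (int32_t)((uint32_t)hi >> 31); // add 1 if negative, to round toward zero
}

The add of x after the multiply is needed because the magic constant doesn't fit in a signed 32-bit value (interpreted as signed, it's negative); the final shift-and-add pair fixes up rounding so negative dividends truncate toward zero the way C's / does.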
For more details on what's going on, consult Henry Warren's book Hacker's Delight or its companion webpage.
There's an online added chapter that provides additional information about division by constants using multiply/shift/add with magic numbers, and a page with a little JavaScript program that'll calculate the magic numbers you need.
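To give a flavor of what such a calculator does, here's a rough C sketch of one way to derive a magic number for signed 32-bit division by a constant d >= 2 (this is not Warren's exact algorithm - see the book for the real derivation and proofs): take M = ceil(2^(32+s) / d) and grow the shift s until the approximation error is small enough for every possible dividend.

#include <stdint.h>
#include <stdio.h>

static void find_magic(uint32_t d, uint32_t *M, int *s) {
    for (int p = 32; p < 64; p++) {
        uint64_t m = ((1ull << p) + d - 1) / d; // candidate: ceil(2^p / d)
        uint64_t e = m * d - (1ull << p);       // error of the approximation
        // sufficient condition: e * 2^31 <= 2^p, so the error can't
        // change the quotient for any dividend n with |n| <= 2^31
        if (e <= (1ull << (p - 31))) {
            *M = (uint32_t)m;
            *s = p - 32;
            return;
        }
    }
}

int main(void) {
    uint32_t M; int s;
    find_magic(71, &M, &s);
    printf("M = 0x%08x, s = %d\n", (unsigned)M, s); // prints M = 0xe6c2b449, s = 6
    return 0;
}

For d = 71 this yields M = 0xe6c2b449 (-423447479 as a signed value) and s = 6, matching the MSVC output above; since M doesn't fit in 31 bits, the compiler also has to emit the extra add after the multiply.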
The companion site for the book is well worth reading (as is the book) - particularly if you're interested in bit-level micro-optimizations.
Another article I just discovered that discusses this optimization: https://learn.microsoft.com/en-us/archive/blogs/devdev/integer-division-by-constants