Let's start by rewriting your code in C, because C is easier for me to explain with. So let's recall your code, with some comments added:
#include <string.h>

int
counting_sort(int a[], int a_len, int maxVal)
{
    int i, j, outPos = 0;
    int bucket_len = maxVal + 1;
    int bucket[bucket_len];             /* simple bucket structure */
    memset(bucket, 0, sizeof(int) * bucket_len);

    /* one loop of bucket processing */
    for (i = 0; i < a_len; i++)
    {
        bucket[a[i]]++;                 /* simple work with buckets */
    }

    /* write each value back as many times as it was counted */
    for (i = 0; i < bucket_len; i++)
    {
        for (j = 0; j < bucket[i]; j++)
        {
            a[outPos++] = i;
        }
    }
    return 0;
}
Now let's feed this function some realistic data:
[126, 348, 343, 432, 316, 171, 556, 223, 670, 201]
On output we have
[126, 171, 201, 223, 316, 343, 348, 432, 556, 670]
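If you want to reproduce this run, here is a minimal driver sketch (main, the array sizing and the printing loop are my additions, not part of your original code):

#include <stdio.h>

int counting_sort(int a[], int a_len, int maxVal);  /* as above */

int
main(void)
{
    int a[] = {126, 348, 343, 432, 316, 171, 556, 223, 670, 201};
    int a_len = sizeof(a) / sizeof(a[0]);
    int i;

    counting_sort(a, a_len, 670);       /* maxVal = 670, the largest element */
    for (i = 0; i < a_len; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}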
Everything seems to be OK? Not quite. Look at maxVal: it is 670 (!). To sort an array of 10 elements we allocated a bucket array of 671 elements, mostly zeros. Awful. To handle this problem of counting sort, we have two possible ways of generalizing it:
1) First way -- make the sort digit-wise. This is called radix sort. Let's show some code, keeping it as close to the counting-sort code as possible. Again, look at the comments:
int
radix_sort(int a[], int a_len, int ndigits)
{
    int i;
    int b[a_len];                       /* temporary output array */
    int expn = 1;

    /* additional loop for digits: one stable counting sort per digit */
    for (i = 0; i != ndigits; ++i)
    {
        int j;
        int bucket[10] = {0};           /* still simple buckets */

        /* bucket processing becomes tricky */
        for (j = 0; j != a_len; ++j)    /* count occurrences of each digit */
            bucket[a[j] / expn % 10]++;
        for (j = 1; j != 10; ++j)       /* prefix sums: end position for each digit */
            bucket[j] += bucket[j - 1];
        for (j = a_len - 1; j >= 0; --j)    /* place back-to-front to keep stability */
            b[--bucket[a[j] / expn % 10]] = a[j];
        for (j = 0; j != a_len; ++j)    /* copy the pass result back into a[] */
            a[j] = b[j];

        expn *= 10;                     /* move on to the next decimal digit */
    }
    return 0;
}
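To see what the digit passes do on the same data, here is a minimal driver sketch (main and the trace comments are my additions; the intermediate arrays are hand-traced, so treat them as illustrative):

#include <stdio.h>

int radix_sort(int a[], int a_len, int ndigits);    /* as above */

int
main(void)
{
    int a[] = {126, 348, 343, 432, 316, 171, 556, 223, 670, 201};
    int a_len = sizeof(a) / sizeof(a[0]);
    int i;

    /* every element fits in 3 decimal digits, so ndigits = 3:
       after pass 1 (ones):     670 171 201 432 343 223 126 316 556 348
       after pass 2 (tens):     201 316 223 126 432 343 348 556 670 171
       after pass 3 (hundreds): 126 171 201 223 316 343 348 432 556 670 */
    radix_sort(a, a_len, 3);
    for (i = 0; i < a_len; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}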
We are trading the constant multiplier on N for memory. Profit? Maybe. But in some cases that multiplier is very important: a program that runs for a day and one that runs for a week look very different to the user, even if they are 1*O(N) and 7*O(N) respectively. And so we come to the second generalization:
2) Second way -- make the buckets more sophisticated. This is called bucket sort.
Let's again start with some code; I prefer code to philosophical arguments. Keep an eye on the comments, they are essential.
#include <stdlib.h>

int
bucket_sort(int a[], int a_len, int maxVal)
{
    int i, aidx;
    typedef struct tag_list {
        int elem;
        struct tag_list *next;
    } list_t, *list_p;
    list_p bucket[10] = {0};            /* sophisticated buckets */

    /* one loop of simple processing with one more inner loop
       to get sorted buckets (insertion sort on lists, Cormen-style) */
    for (i = 0; i != a_len; ++i)
    {
        /* maxVal + 1 keeps the index in 0..9 even when a[i] == maxVal */
        int bnum = (10 * a[i]) / (maxVal + 1);
        list_p bptr = bucket[bnum];
        list_p belem = malloc(sizeof(list_t));
        belem->elem = a[i];
        if (bptr == 0)                  /* empty bucket: new element becomes the head */
        {
            bucket[bnum] = belem;
            belem->next = 0;
            continue;
        }
        else if (a[i] <= bptr->elem)    /* not larger than the head: prepend */
        {
            belem->next = bptr;
            bucket[bnum] = belem;
            continue;
        }
        else                            /* walk the list to find the insertion point */
        {
            while (bptr != 0)
            {
                if ((bptr->elem <= a[i]) && ((bptr->next == 0) || (bptr->next->elem > a[i])))
                {
                    belem->next = bptr->next;
                    bptr->next = belem;
                    break;
                }
                bptr = bptr->next;
            }
        }
    }

    /* one loop (looks like two) to get everything back */
    aidx = 0;
    for (i = 0; i != 10; ++i)
    {
        list_p bptr = bucket[i];
        while (bptr)
        {
            list_p optr = bptr;
            a[aidx] = bptr->elem;
            aidx += 1;
            bptr = bptr->next;
            free(optr);
        }
    }
    return 0;
}
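And a matching driver sketch (again, main and the printing loop are my additions):

#include <stdio.h>

int bucket_sort(int a[], int a_len, int maxVal);    /* as above */

int
main(void)
{
    int a[] = {126, 348, 343, 432, 316, 171, 556, 223, 670, 201};
    int a_len = sizeof(a) / sizeof(a[0]);
    int i;

    /* with maxVal = 670 each of the 10 buckets covers a range of
       roughly 67 values, so the sample data spreads out nicely */
    bucket_sort(a, a_len, 670);
    for (i = 0; i < a_len; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}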
So what do we have here? We pay with a more sophisticated bucket structure and a requirement for dynamically allocated memory, but we win back static memory and, on average, the multiplier on N.
Now let's recall what we saw in the code:
- Counting sort -- simple buckets, simple processing, memory overhead
- Radix sort -- simple buckets, sophisticated processing, speed overhead (and it still needs additional static memory)
- Bucket sort -- sophisticated buckets, simple processing, requires dynamic memory, good on average
Radix sort and bucket sort are thus two useful generalizations of counting sort. They have a lot in common with counting sort and with each other, but in every case we lose something and win something. Software engineering is about balancing these trade-offs.