How to sort in-place using the merge sort algorithm?

I know the question is not too specific.

All I want is someone to tell me how to convert a normal merge sort into an in-place merge sort (or a merge sort with constant extra space overhead).

All I can find (on the net) is pages saying "it is too complex" or "out of scope of this text".

The only known ways to merge in-place (without any extra space) are too complex to be reduced to practical program. (taken from here)

Even if it is too complex, what is the basic concept of how to make the merge sort in-place?

Kaiserdom asked 3/4, 2010 at 11:4 Comment(5)
Nice question, I asked that myself when reading through a question from yesterday: #2566959Connor
Just for reference, here is a nice implementation of a stable in-place merge sort. Complicated, but not too bad. I ended up implementing both a stable in-place merge sort and a stable in-place quicksort in Java. Please note the complexity is O(n (log n)^2)Mroz
There is a fairly simple method described here: xinok.wordpress.com/2014/08/17/…Bryology
In the usual split-and-merge algorithm, if you define a pointer array as a linked list L(i), where the entry address is the address of the first record in sorted order, the pointer at that address is the address of the 2nd record in sorted order, and so forth, you will find that you CAN merge two linked lists "in place" in O(n). You end up with a separate pointer as the entry point to the linked list and a linked list of n-1 pointers. I set the nth linked-list entry to zero as a STOP indicator, which is useful in merging. You iterate through the results using i = L(i) (see the pointer-based sketch after these comments).Conjecture
Keep in mind that when non-tail recursion is used as in the top-voted solution, the recursion stack could incur a O(log n) space complexity despite the in-place sorting.Mare
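
To illustrate the linked-list merge mentioned in the comment above, here is a minimal pointer-based sketch (my own illustration, not the commenter's index-based L(i) scheme); it merges two sorted singly linked lists in place by relinking next pointers only:

struct Node { int key; struct Node* next; };

/* Merge two sorted lists by relinking pointers; no element is copied
 * and no extra array is allocated (only a few local pointers). */
struct Node* merge_lists(struct Node* a, struct Node* b)
{
    struct Node head;                   /* temporary list head on the stack */
    struct Node* tail = &head;
    while (a != NULL && b != NULL) {
        if (a->key <= b->key) { tail->next = a; a = a->next; }
        else                  { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b;   /* append whichever run remains */
    return head.next;
}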

Knuth left this as an exercise (Vol 3, 5.2.5). There do exist in-place merge sorts. They must be implemented carefully.

First, a naive in-place merge such as the one described here isn't the right solution. It downgrades the performance to O(N²).

The idea is to sort part of the array while using the rest as working area for merging.

Consider, for example, the following merge function.

void wmerge(Key* xs, int i, int m, int j, int n, int w) {
    while (i < m && j < n)
        swap(xs, w++, xs[i] < xs[j] ? i++ : j++);
    while (i < m)
        swap(xs, w++, i++);
    while (j < n)
        swap(xs, w++, j++);
}  

It takes the array xs; the two sorted sub-arrays are represented as the ranges [i, m) and [j, n) respectively, and the working area starts at w. Compared with the standard merge algorithm given in most textbooks, this one exchanges the contents of the sorted sub-arrays with the working area. As a result, the previous working area ends up holding the merged, sorted elements, while the elements previously stored in the working area are moved into the two sub-arrays.

However, there are two constraints that must be satisfied:

  1. The work area should be within the bounds of the array. In other words, it should be big enough to hold the elements exchanged into it without causing any out-of-bounds error.
  2. The work area may overlap with either of the two sorted sub-arrays; however, we must ensure that none of the unmerged elements are overwritten.

With this merging algorithm defined, it's easy to imagine a solution that sorts half of the array. The next question is how to deal with the rest, the unsorted part stored in the work area, as shown below:

... unsorted 1/2 array ... | ... sorted 1/2 array ...

One intuitive idea is to recursively sort the other half of the working area, so that only 1/4 of the elements remain unsorted.

... unsorted 1/4 array ... | sorted 1/4 array B | sorted 1/2 array A ...

The key point at this stage is that we must merge the sorted 1/4 elements B with the sorted 1/2 elements A sooner or later.

Is the working area left, which only holds 1/4 elements, big enough to merge A and B? Unfortunately, it isn't.

However, the second constraint mentioned above gives us a hint: we can arrange the working area to overlap with either sub-array, as long as the merging sequence guarantees that the unmerged elements are never overwritten.

Actually, instead of sorting the second half of the working area, we can sort the first half, and put the working area between the two sorted arrays like this:

... sorted 1/4 array B | unsorted work area | ... sorted 1/2 array A ...

This setup effectively arranges the work area to overlap with sub-array A. This idea is proposed in [Jyrki Katajainen, Tomi Pasanen, Jukka Teuhola, "Practical in-place mergesort", Nordic Journal of Computing, 1996].

So the only thing left is to repeat the above step, which reduces the working area from 1/2 to 1/4, 1/8, … When the working area becomes small enough (for example, only two elements are left), we can switch to a trivial insertion sort to finish the algorithm.

Here is the implementation in ANSI C based on this paper.

void imsort(Key* xs, int l, int u);

void swap(Key* xs, int i, int j) {
    Key tmp = xs[i]; xs[i] = xs[j]; xs[j] = tmp;
}

/* 
 * sort xs[l, u), and put result to working area w. 
 * constraint, len(w) == u - l
 */
void wsort(Key* xs, int l, int u, int w) {
    int m;
    if (u - l > 1) {
        m = l + (u - l) / 2;
        imsort(xs, l, m);
        imsort(xs, m, u);
        wmerge(xs, l, m, m, u, w);
    }
    else
        while (l < u)
            swap(xs, l++, w++);
}

void imsort(Key* xs, int l, int u) {
    int m, n, w;
    if (u - l > 1) {
        m = l + (u - l) / 2;
        w = l + u - m;
        wsort(xs, l, m, w); /* the last half contains sorted elements */
        while (w - l > 2) {
            n = w;
            w = l + (n - l + 1) / 2;
            wsort(xs, w, n, l);  /* the first half of the previous working area contains sorted elements */
            wmerge(xs, l, l + n - w, n, u, w);
        }
        for (n = w; n > l; --n) /*switch to insertion sort*/
            for (m = n; m < u && xs[m] < xs[m-1]; ++m)
                swap(xs, m, m - 1);
    }
}

Here, wmerge is the function defined previously.

The full source code can be found here, and the detailed explanation can be found here.

By the way, this version isn't the fastest merge sort because it needs more swap operations. According to my tests, it's faster than the standard version, which allocates extra space in every recursion, but slower than the optimized version, which doubles the original array in advance and uses it for further merging.
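
For reference, a minimal test driver is sketched below (my addition; it assumes Key is typedef'd to int and that the typedef appears before the functions above):

#include <stdio.h>

typedef int Key;   /* must precede swap/wmerge/wsort/imsort */

int main(void)
{
    Key a[] = { 5, 3, 8, 1, 9, 2, 7, 4, 6, 0 };
    int n = sizeof(a) / sizeof(a[0]);
    imsort(a, 0, n);                    /* sorts the range [0, n) in place */
    for (int i = 0; i < n; ++i)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}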

Maye answered 27/3, 2013 at 10:55 Comment(11)
I removed the link for that chapter. The content can be found in chapter 13 of the book: sites.google.com/site/algoxy/home/elementary-algorithms.pdfMaye
"Knuth left this as an exercise (Vol 3, 5.2.5)" refers to ex. 13: "[40] Implement the internal sorting method suggested [at the close of this section], which sorts random data in O(N) units of time with only O(sqrt(N)) additional memory locations." (40 indicating "Quite a difficult or lengthy problem which is perhaps suitable as a term project in classroom situations.")Complication
I think that the time complexity of the in-place algorithm mentioned on the penguin.ew site is O(log n * n^2), since we have log n merges and each merge is of the order O(n^2). Isn't that right?Wisniewski
Is this algorithm still stable and in n log n time?Amperage
@PaulStelian - it's not stable. Elements in the working area are rearranged according to the ordering operations on elements in the sorted area. This means working area elements with equal values will get rearranged so they are no longer in their original order.Hoodlum
@Hoodlum Okay, so it isn't of that much interest anymore. I am aware of the Block Merge Sort algorithm which is stable though.Amperage
@PaulStelian - Wiki has an article for block merge sort, which as you commented is stable. It works best if there are at least 2 · sqrt(n) unique values, which allows them to be re-ordered to provide working areas of an array and remain stable.Hoodlum
Best case of the in-place merge algorithm in merge sort? When the array is already sorted, can we say O(m), where m is the size of the 1st sorted array that is one of the inputs to the merge algorithm?Theseus
What working area? This is theoretical and does not mean anything. If you are in-place merging, say, 20,000 elements with 20,000 elements, that is 40,000 elements, and a 1/4 of that is 10,000 elements. But those areas are occupied and cannot be unoccupied. The reality is that even without a buffer, at least 1 element is on the stack, and via rotations and shifts (a lot of them) the elements are put in place. All the various truly in-place algorithms use that 1 element to effect an in-place merge.Duration
@Duration Hmm, why are you talking about merging 20,000 with 20,000? This algorithm never does that.Gracia
The problem with the naive in-place merge sort linked in the 2nd paragraph is that it uses an O(n) insertion. This is trivially avoidable for linked lists.Joplin

Including its "big result", this paper describes a couple of variants of in-place merge sort (PDF):

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.5514&rep=rep1&type=pdf

In-place sorting with fewer moves

Jyrki Katajainen, Tomi A. Pasanen

It is shown that an array of n elements can be sorted using O(1) extra space, O(n log n / log log n) element moves, and n log2n + O(n log log n) comparisons. This is the first in-place sorting algorithm requiring o(n log n) moves in the worst case while guaranteeing O(n log n) comparisons, but due to the constant factors involved the algorithm is predominantly of theoretical interest.

I think this is relevant too. I have a printout of it lying around, passed on to me by a colleague, but I haven't read it. It seems to cover basic theory, but I'm not familiar enough with the topic to judge how comprehensively:

http://comjnl.oxfordjournals.org/cgi/content/abstract/38/8/681

Optimal Stable Merging

Antonios Symvonis

This paper shows how to stably merge two sequences A and B of sizes m and n, m ≤ n, respectively, with O(m+n) assignments, O(mlog(n/m+1)) comparisons and using only a constant amount of additional space. This result matches all known lower bounds...

Portable answered 3/4, 2010 at 11:26 Comment(0)

It really isn't easy or efficient, and I suggest you don't do it unless you really have to (and you probably don't have to unless this is homework, since the applications of in-place merging are mostly theoretical). Can't you use quicksort instead? Quicksort will be faster anyway with a few simple optimizations, and its extra memory is O(log N).

Anyway, if you must do it then you must. Here's what I found: one and two. I'm not familiar with in-place merge sort, but it seems like the basic idea is to use rotations to facilitate merging two arrays without using extra memory (see the rotation sketch below).

Note that this is slower even than the classic merge sort that's not in-place.
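
For illustration only (this is my sketch, not code from the linked pages), such a rotation can be done with O(1) extra space using three reversals:

/* Reverse a[i..j] in place. */
static void reverse_range(int a[], int i, int j)
{
    while (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; ++i; --j; }
}

/* Rotate a[l..r] left by k positions (0 < k <= r-l+1) using three
 * reversals; this is the usual O(1)-space building block of
 * rotation-based in-place merging. */
static void rotate_left(int a[], int l, int r, int k)
{
    reverse_range(a, l, l + k - 1);
    reverse_range(a, l + k, r);
    reverse_range(a, l, r);
}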

Freeness answered 3/4, 2010 at 11:23 Comment(7)
Quicksort isn't stable. That really matters for a lot of production code.Monostich
quicksort can be stable, and iirc merge sort is not necessarily stable if in placeChian
Quicksort also has an O(n^2) worst case for specially crafted inputGraybill
In-place merging is practically useful in C++ (at least before C++11): some objects are swappable but not copyable!Brownell
Is Quicksort's extra memory really O(log n)? I thought being an in-place algorithm it would be O(1) extra memory? Oh, being recursive, the stack usage is O(log n).Tonettetoney
Outside web archives, Jason Harrison's In-Place Merge Sort (two) is featured, for one place, in this collection of sort implementations, apparently started by the man himself. Several are not properly converted to HTML while labelled thus (use view source).Complication
@kristianp: Quicksort's extra memory depends on how you manage your recursion stack. If managed cleverly, it can be shown to never exceed O(log n); a naive recursive implementation will likely use O(n) memory in the worst case though.Fleecy

The critical step is getting the merge itself to be in-place. It's not as difficult as those sources make out, but you lose something when you try.

Looking at one step of the merge:

[...list-sorted...|x...list-A...|y...list-B...]

We know that the sorted sequence is less than everything else, that x is less than everything else in A, and that y is less than everything else in B. In the case where x is less than or equal to y, you just advance the pointer at the start of A by one. In the case where y is less than x, you've got to shuffle y past the whole of A into the sorted region (sketched below). That last step is what makes this expensive (except in degenerate cases).
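
A rough sketch of that shuffling merge (my illustration, not the answerer's code) makes the cost visible:

/* Naive in-place merge of the sorted runs a[lo..mid-1] and a[mid..hi-1].
 * When the head of B is smaller, it is shuffled left past all of A,
 * which is exactly the expensive step described above. */
static void naive_inplace_merge(int a[], int lo, int mid, int hi)
{
    int i = lo, j = mid;
    while (i < j && j < hi) {
        if (a[i] <= a[j]) {
            ++i;                          /* head of A is already in place */
        } else {
            int v = a[j];
            for (int k = j; k > i; --k)   /* shift the rest of A right by one */
                a[k] = a[k - 1];
            a[i] = v;                     /* the old head of B lands in front */
            ++i; ++j;
        }
    }
}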

It's generally cheaper (especially when the arrays only actually contain single words per element, e.g., a pointer to a string or structure) to trade off some space for time and have a separate temporary array that you sort back and forth between.

Monostich answered 3/4, 2010 at 11:28 Comment(3)
Your in-place merge has O(m*n) worst-case complexity, where m is the size of A and n is the size of B. This is the case when the first item in A is larger than the last item in B. The complexity can be improved to O(k*log(k)+m+n), where k=min(m,n), by adding a heap between A and B. This heap should contain items from A which are larger than the remaining items in B, but smaller than the remaining items in A. If A is exhausted first, then the heap must be moved to the end of B. Otherwise the heap must be moved to the beginning of A. Then heap items must be popped out in-place and reversed to complete the merge.Tortola
@Tortola Note that when using a heap, the sort is no longer stable. Also, if you use a heap, you can go with heap sort instead of merge sort.Propolis
Just want to note that in-place merge is possible in optimal asymptotic time complexity, see c++ - Is it possible to do an inplace merge without temporary storage? - Stack OverflowAtharvaveda

An example of bufferless mergesort in C.

#define SWAP(type, a, b) \
    do { type t=(a);(a)=(b);(b)=t; } while (0)

static void reverse_(int* a, int* b)
{
    for ( --b; a < b; a++, b-- )
       SWAP(int, *a, *b);
}
static int* rotate_(int* a, int* b, int* c)
/* swap the sequence [a,b) with [b,c). */
{
    if (a != b && b != c)
     {
       reverse_(a, b);
       reverse_(b, c);
       reverse_(a, c);
     }
    return a + (c - b);
}

static int* lower_bound_(int* a, int* b, const int key)
/* find first element not less than @p key in sorted sequence or end of
 * sequence (@p b) if not found. */
{
    int i;
    for ( i = b-a; i != 0; i /= 2 )
     {
       int* mid = a + i/2;
       if (*mid < key)
          a = mid + 1, i--;
     }
    return a;
}
static int* upper_bound_(int* a, int* b, const int key)
/* find first element greater than @p key in sorted sequence or end of
 * sequence (@p b) if not found. */
{
    int i;
    for ( i = b-a; i != 0; i /= 2 )
     {
       int* mid = a + i/2;
       if (*mid <= key)
          a = mid + 1, i--;
     }
    return a;
}

static void ip_merge_(int* a, int* b, int* c)
/* inplace merge. */
{
    int n1 = b - a;
    int n2 = c - b;

    if (n1 == 0 || n2 == 0)
       return;
    if (n1 == 1 && n2 == 1)
     {
       if (*b < *a)
          SWAP(int, *a, *b);
     }
    else
     {
       int* p, * q;

       if (n1 <= n2)
          p = upper_bound_(a, b, *(q = b+n2/2));
       else
          q = lower_bound_(b, c, *(p = a+n1/2));
       b = rotate_(p, b, q);

       ip_merge_(a, p, b);
       ip_merge_(b, q, c);
     }
}

void mergesort(int* v, int n)
{
    if (n > 1)
     {
       int h = n/2;
       mergesort(v, h); mergesort(v+h, n-h);
       ip_merge_(v, v+h, v+n);
     }
}

An example of adaptive mergesort (optimized).

Adds support code and modifications to accelerate the merge when an auxiliary buffer of any size is available (still works without additional memory). Uses forward and backward merging, ring rotation, small sequence merging and sorting, and iterative mergesort.

#include <stdlib.h>
#include <string.h>

static int* copy_(const int* a, const int* b, int* out)
{
    int count = b - a;
    if (a != out)
       memcpy(out, a, count*sizeof(int));
    return out + count;
}
static int* copy_backward_(const int* a, const int* b, int* out)
{
    int count = b - a;
    if (b != out)
       memmove(out - count, a, count*sizeof(int));
    return out - count;
}

static int* merge_(const int* a1, const int* b1, const int* a2,
  const int* b2, int* out)
{
    while ( a1 != b1 && a2 != b2 )
       *out++ = (*a1 <= *a2) ? *a1++ : *a2++;
    return copy_(a2, b2, copy_(a1, b1, out));
}
static int* merge_backward_(const int* a1, const int* b1,
  const int* a2, const int* b2, int* out)
{
    while ( a1 != b1 && a2 != b2 )
       *--out = (*(b1-1) > *(b2-1)) ? *--b1 : *--b2;
    return copy_backward_(a1, b1, copy_backward_(a2, b2, out));
}

static unsigned int gcd_(unsigned int m, unsigned int n)
{
    while ( n != 0 )
     {
       unsigned int t = m % n;
       m = n;
       n = t;
     }
    return m;
}
static void rotate_inner_(const int length, const int stride,
  int* first, int* last)
{
    int* p, * next = first, x = *first;
    while ( 1 )
     {
       p = next;
       if ((next += stride) >= last)
          next -= length;
       if (next == first)
          break;
       *p = *next;
     }
    *p = x;
}
static int* rotate_(int* a, int* b, int* c)
/* swap the sequence [a,b) with [b,c). */
{
    if (a != b && b != c)
     {
       int n1 = c - a;
       int n2 = b - a;

       int* i = a;
       int* j = a + gcd_(n1, n2);

       for ( ; i != j; i++ )
          rotate_inner_(n1, n2, i, c);
     }
    return a + (c - b);
}

static void ip_merge_small_(int* a, int* b, int* c)
/* inplace merge.
 * @note faster for small sequences. */
{
    while ( a != b && b != c )
       if (*a <= *b)
          a++;
       else
        {
          int* p = b+1;
          while ( p != c && *p < *a )
             p++;
          rotate_(a, b, p);
          b = p;
        }
}
static void ip_merge_(int* a, int* b, int* c, int* t, const int ts)
/* inplace merge.
 * @note works with or without additional memory. */
{
    int n1 = b - a;
    int n2 = c - b;

    if (n1 <= n2 && n1 <= ts)
     {
       merge_(t, copy_(a, b, t), b, c, a);
     }
    else if (n2 <= ts)
     {
       merge_backward_(a, b, t, copy_(b, c, t), c);
     }
    /* merge without buffer. */
    else if (n1 + n2 < 48)
     {
       ip_merge_small_(a, b, c);
     }
    else
     {
       int* p, * q;

       if (n1 <= n2)
          p = upper_bound_(a, b, *(q = b+n2/2));
       else
          q = lower_bound_(b, c, *(p = a+n1/2));
       b = rotate_(p, b, q);

       ip_merge_(a, p, b, t, ts);
       ip_merge_(b, q, c, t, ts);
     }
}
static void ip_merge_chunk_(const int cs, int* a, int* b, int* t,
  const int ts)
{
    int* p = a + cs*2;
    for ( ; p <= b; a = p, p += cs*2 )
       ip_merge_(a, a+cs, p, t, ts);
    if (a+cs < b)
       ip_merge_(a, a+cs, b, t, ts);
}

static void smallsort_(int* a, int* b)
/* insertion sort.
 * @note any stable sort with low setup cost will do. */
{
    int* p, * q;
    for ( p = a+1; p < b; p++ )
     {
       int x = *p;
       for ( q = p; a < q && x < *(q-1); q-- )
          *q = *(q-1);
       *q = x;
     }
}
static void smallsort_chunk_(const int cs, int* a, int* b)
{
    int* p = a + cs;
    for ( ; p <= b; a = p, p += cs )
       smallsort_(a, p);
    smallsort_(a, b);
}

static void mergesort_lower_(int* v, int n, int* t, const int ts)
{
    int cs = 16;
    smallsort_chunk_(cs, v, v+n);
    for ( ; cs < n; cs *= 2 )
       ip_merge_chunk_(cs, v, v+n, t, ts);
}

static void* get_buffer_(int size, int* final)
{
    void* p = NULL;
    while ( size != 0 && (p = malloc(size)) == NULL )
       size /= 2;
    *final = size;
    return p;
}
void mergesort(int* v, int n)
{
    /* @note buffer size may be in the range [0,(n+1)/2]. */
    int request = (n+1)/2 * sizeof(int);
    int actual;
    int* t = (int*) get_buffer_(request, &actual);

    /* @note allocation failure okay. */
    int tsize = actual / sizeof(int);
    mergesort_lower_(v, n, t, tsize);
    free(t);
}
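
For reference, a minimal driver is sketched below (my addition; either mergesort above can be linked against it, since both share the same signature):

#include <stdio.h>

int main(void)
{
    int v[] = { 9, 1, 8, 2, 7, 3, 6, 4, 5, 0 };
    int n = sizeof(v) / sizeof(v[0]);
    mergesort(v, n);
    for (int i = 0; i < n; ++i)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}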
Minsk answered 3/4, 2010 at 11:4 Comment(5)
Did you write this? How does it overcome the difficulties expressed in the other answers? What is its running time?Bassarisk
This is adapted from my own custom library, but I didn't create these algorithms if that's what you're asking. Growth is O(n (log n)^2) without auxiliary memory; O(n log n) with full buffer. This tries to be a practical implementation, and is structured to show constituent algorithms.Minsk
Why do you need recursion or extra buffer to merge two sorted lists? I think it can be done by moving the two pointers forward and swapping if left is bigger than right.Generator
@jack: Are you sure the merging can be done in a single, fast loop (O(n) comparisons and O(n) swaps to in-place merge n elements in total), without recursion or extra buffer, while maintaining a stable order? If so, please post an answer, because this sounds like the holy grail of comparison-based sorts.Tenantry
This bufferless mergesort looks very similar to the one libstdc++ (see Philipp Claßen's answer) and the Java implementation based on that. The latter also contains 3 calls to reverse(...), but those are commented out, and a more sophisticated, GCD-based rotation code is provided. libstdc++ does an even more sophisticated GCD-based rotation.Tenantry

This answer has a code example, which implements the algorithm described in the paper Practical In-Place Merging by Bing-Chao Huang and Michael A. Langston. I have to admit that I do not understand the details, but the given complexity of the merge step is O(n).

From a practical perspective, there is evidence that pure in-place implementations do not perform better in real-world scenarios. For example, the C++ standard defines std::inplace_merge, which is, as the name implies, an in-place merge operation.

Assuming that C++ libraries are typically very well optimized, it is interesting to see how it is implemented:

1) libstdc++ (part of the GCC code base): std::inplace_merge

The implementation delegates to __inplace_merge, which dodges the problem by trying to allocate a temporary buffer:

typedef _Temporary_buffer<_BidirectionalIterator, _ValueType> _TmpBuf;
_TmpBuf __buf(__first, __len1 + __len2);

if (__buf.begin() == 0)
  std::__merge_without_buffer
    (__first, __middle, __last, __len1, __len2, __comp);
else
  std::__merge_adaptive
   (__first, __middle, __last, __len1, __len2, __buf.begin(),
     _DistanceType(__buf.size()), __comp);

Otherwise, it falls back to an implementation (__merge_without_buffer), which requires no extra memory, but no longer runs in O(n) time.

2) libc++ (part of the Clang code base): std::inplace_merge

Looks similar. It delegates to a function, which also tries to allocate a buffer. Depending on whether it got enough elements, it will choose the implementation. The constant-memory fallback function is called __buffered_inplace_merge.

Maybe even the fallback is still O(n) time, but the point is that they do not use the implementation if temporary memory is available.


Note that the C++ standard explicitly gives implementations the freedom to choose this approach by lowering the required complexity from O(n) to O(N log N):

Complexity: Exactly N-1 comparisons if enough additional memory is available. If the memory is insufficient, O(N log N) comparisons.

Of course, this cannot be taken as proof that constant-space in-place merges in O(n) time should never be used. On the other hand, if it were faster, the optimized C++ libraries would probably switch to that type of implementation.

Regnant answered 23/12, 2018 at 1:55 Comment(2)
This Java implementation is based on libstdc++ __merge_without_buffer.Tenantry
This Java implementation is based on libstdc++ __merge_adaptive (thus it uses a temporary buffer).Tenantry

This is my C version:

void mergesort(int *a, int len) {
  int temp, listsize, xsize;

  for (listsize = 1; listsize <= len; listsize*=2) {
    for (int i = 0, j = listsize; (j+listsize) <= len; i += (listsize*2), j += (listsize*2)) {
      merge(& a[i], listsize, listsize);
    }
  }

  listsize /= 2;

  xsize = len % listsize;
  if (xsize > 1)
    mergesort(& a[len-xsize], xsize);

  merge(a, listsize, xsize);
}

void merge(int *a, int sizei, int sizej) {
  int temp;
  int ii = 0;
  int ji = sizei;
  int flength = sizei+sizej;

  for (int f = 0; f < (flength-1); f++) {
    if (sizei == 0 || sizej == 0)
      break;

    if (a[ii] < a[ji]) {
      ii++;
      sizei--;
    }
    else {
      temp = a[ji];

      for (int z = (ji-1); z >= ii; z--)
        a[z+1] = a[z];  
      ii++;

      a[f] = temp;

      ji++;
      sizej--;
    }
  }
}
Astounding answered 14/2, 2012 at 2:24 Comment(2)
Note that this implementation takes Θ(n^2 log n) time in the worst case (reversed array).Propolis
Indeed, the hard part of mergesort is the fast, in-place, stable merge. The merge function in this answer is indeed in-place, but not fast: it's O(flength**2) worst case, when the input array is reversed (decreasing); the expectation for fast is O(flength) or O(flength*log(flength)). But is it stable?Tenantry

I know I'm late to the game, but here's a solution I wrote yesterday. I also posted this elsewhere, but this appears to be the most popular merge-in-place thread on SO. I've also not seen this algorithm posted anywhere else, so hopefully this helps some people.

This algorithm is in its simplest form so that it can be understood. It can be significantly tweaked for extra speed. Average time complexity is: O(n.log₂n) for the stable in-place array merge, and O(n.(log₂n)²) for the overall sort.

//                     Stable Merge In Place Sort
//
//
// The following code is written to illustrate the base algorithm. A good
// number of optimizations can be applied to boost its overall speed
// For all its simplicity, it does still perform somewhat decently.
// Average case time complexity appears to be: O(n.(log₂n)²)

#include <stddef.h>
#include <stdio.h>

#define swap(x, y)    (t=(x), (x)=(y), (y)=t)

// Both sorted sub-arrays must be adjacent in 'a'
// Assumes that both 'an' and 'bn' are always non-zero
// 'an' is the length of the first sorted section in 'a', referred to as A
// 'bn' is the length of the second sorted section in 'a', referred to as B
static void
merge_inplace(int A[], size_t an, size_t bn)
{
    int t, *B = &A[an];
    size_t  pa, pb;     // Swap partition pointers within A and B

    // Find the portion to swap.  We're looking for how much from the
    // start of B can swap with the end of A, such that every element
    // in A is less than or equal to any element in B.  This is quite
    // simple when both sub-arrays come at us pre-sorted
    for(pa = an, pb = 0; pa>0 && pb<bn && B[pb] < A[pa-1]; pa--, pb++);

    // Now swap last part of A with first part of B according to the
    // indices we found
    for (size_t index=pa; index < an; index++)
        swap(A[index], B[index-pa]);

    // Now merge the two sub-array pairings.  We need to check that either array
    // didn't wholly swap out the other and cause the remaining portion to be zero
    if (pa>0 && (an-pa)>0)
        merge_inplace(A, pa, an-pa);

    if (pb>0 && (bn-pb)>0)
        merge_inplace(B, pb, bn-pb);
} // merge_inplace

// Implements a recursive merge-sort algorithm with an optional
// insertion sort for when the splits get too small.  'n' must
// ALWAYS be 2 or more.  It enforces this when calling itself
static void
merge_sort(int a[], size_t n)
{
    size_t  m = n/2;

    // Sort first and second halves only if the target 'n' will be > 1
    if (m > 1)
        merge_sort(a, m);

    if ((n-m)>1)
        merge_sort(a+m, n-m);

    // Now merge the two sorted sub-arrays together. We know that since
    // n > 1, then both m and n-m MUST be non-zero, and so we will never
    // violate the condition of not passing in zero length sub-arrays
    merge_inplace(a, m, n-m);
} // merge_sort

// Print an array
static void
print_array(int a[], size_t size)
{
    if (size > 0) {
        printf("%d", a[0]);
        for (size_t i = 1; i < size; i++)
            printf(" %d", a[i]);
    }
    printf("\n");
} // print_array
 
// Test driver
int
main()
{
    int a[] = { 17, 3, 16, 5, 14, 8, 10, 7, 15, 1, 13, 4, 9, 12, 11, 6, 2 };
    size_t  n = sizeof(a) / sizeof(a[0]);
 
    merge_sort(a, n);
 
    print_array(a, n);

    return 0;
} // main
Thither answered 31/7, 2021 at 15:26 Comment(1)
What is the worst-case time complexity?Tenantry

Leveraging C++'s std::inplace_merge, an in-place merge sort can be implemented as follows:

template< class _Type >
inline void merge_sort_inplace(_Type* src, size_t l, size_t r)
{
    if (r <= l) return;

    size_t m = l + ( r - l ) / 2;             // computes the average without overflow

    merge_sort_inplace(src, l,     m);
    merge_sort_inplace(src, m + 1, r);

    std::inplace_merge(src + l, src + m + 1, src + r + 1);
}
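
A brief usage sketch (my addition): it assumes a single file containing these includes first, then the merge_sort_inplace template above, then this main. Note that the right index passed in is inclusive:

#include <algorithm>   // std::inplace_merge, used by merge_sort_inplace
#include <cstddef>
#include <cstdio>

int main()
{
    int a[] = { 5, 1, 4, 2, 8, 3, 7, 6 };
    size_t n = sizeof(a) / sizeof(a[0]);

    merge_sort_inplace(a, 0, n - 1);    // sorts a[0..n-1] in place

    for (size_t i = 0; i < n; ++i)
        std::printf("%d ", a[i]);
    std::printf("\n");
    return 0;
}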

More sorting algorithms, including parallel implementations, are available in the https://github.com/DragonSpit/ParallelAlgorithms repo, which is open source and free.

Sotted answered 24/10, 2021 at 2:6 Comment(0)

I just tried an in-place merge algorithm for merge sort in Java, using the insertion sort algorithm and the following steps:
1) Two sorted arrays are available.
2) Compare the first values of each array and place the smallest value into the first array.
3) Place the larger value into the second array by using insertion sort (traverse from left to right).
4) Then compare the second value of the first array and the first value of the second array, and do the same. But when a swap happens, there is a clue that lets you skip comparing further items and only do the required swapping.

I have made some optimizations here to keep the number of comparisons in the insertion sort low.
The only drawback I found with this solution is that it needs a large amount of swapping of array elements in the second array.

For example:

First Array : 3, 7, 8, 9

Second Array : 1, 2, 4, 5

Then 7, 8, 9 each force the second array to shift (move left by one) all of its elements so that the value can place itself at the end.

So the assumption here is that swapping items is negligible compared to comparing two items.

https://github.com/skanagavelu/algorithams/blob/master/src/sorting/MergeSort.java

package sorting;

import java.util.Arrays;

public class MergeSort {
    public static void main(String[] args) {
    int[] array = { 5, 6, 10, 3, 9, 2, 12, 1, 8, 7 };
    mergeSort(array, 0, array.length -1);
    System.out.println(Arrays.toString(array));

    int[] array1 = {4, 7, 2};
    System.out.println(Arrays.toString(array1));
    mergeSort(array1, 0, array1.length -1);
    System.out.println(Arrays.toString(array1));
    System.out.println("\n\n");

    int[] array2 = {4, 7, 9};
    System.out.println(Arrays.toString(array2));
    mergeSort(array2, 0, array2.length -1);
    System.out.println(Arrays.toString(array2));
    System.out.println("\n\n");

    int[] array3 = {4, 7, 5};
    System.out.println(Arrays.toString(array3));
    mergeSort(array3, 0, array3.length -1);
    System.out.println(Arrays.toString(array3));
    System.out.println("\n\n");

    int[] array4 = {7, 4, 2};
    System.out.println(Arrays.toString(array4));
    mergeSort(array4, 0, array4.length -1);
    System.out.println(Arrays.toString(array4));
    System.out.println("\n\n");

    int[] array5 = {7, 4, 9};
    System.out.println(Arrays.toString(array5));
    mergeSort(array5, 0, array5.length -1);
    System.out.println(Arrays.toString(array5));
    System.out.println("\n\n");

    int[] array6 = {7, 4, 5};
    System.out.println(Arrays.toString(array6));
    mergeSort(array6, 0, array6.length -1);
    System.out.println(Arrays.toString(array6));
    System.out.println("\n\n");

    //Handling array of size two
    int[] array7 = {7, 4};
    System.out.println(Arrays.toString(array7));
    mergeSort(array7, 0, array7.length -1);
    System.out.println(Arrays.toString(array7));
    System.out.println("\n\n");

    int input1[] = {1};
    int input2[] = {4,2};
    int input3[] = {6,2,9};
    int input4[] = {6,-1,10,4,11,14,19,12,18};
    System.out.println(Arrays.toString(input1));
    mergeSort(input1, 0, input1.length-1);
    System.out.println(Arrays.toString(input1));
    System.out.println("\n\n");

    System.out.println(Arrays.toString(input2));
    mergeSort(input2, 0, input2.length-1);
    System.out.println(Arrays.toString(input2));
    System.out.println("\n\n");

    System.out.println(Arrays.toString(input3));
    mergeSort(input3, 0, input3.length-1);
    System.out.println(Arrays.toString(input3));
    System.out.println("\n\n");

    System.out.println(Arrays.toString(input4));
    mergeSort(input4, 0, input4.length-1);
    System.out.println(Arrays.toString(input4));
    System.out.println("\n\n");
}

private static void mergeSort(int[] array, int p, int r) {
    //Both below mid finding is fine.
    int mid = (r - p)/2 + p;
    int mid1 = (r + p)/2;
    if(mid != mid1) {
        System.out.println(" Mid is mismatching:" + mid + "/" + mid1+ "  for p:"+p+"  r:"+r);
    }

    if(p < r) {
        mergeSort(array, p, mid);
        mergeSort(array, mid+1, r);
//      merge(array, p, mid, r);
        inPlaceMerge(array, p, mid, r);
        }
    }

//Regular merge
private static void merge(int[] array, int p, int mid, int r) {
    int lengthOfLeftArray = mid - p + 1; // This is important to add +1.
    int lengthOfRightArray = r - mid;

    int[] left = new int[lengthOfLeftArray];
    int[] right = new int[lengthOfRightArray];

    for(int i = p, j = 0; i <= mid; ){
        left[j++] = array[i++];
    }

    for(int i = mid + 1, j = 0; i <= r; ){
        right[j++] = array[i++];
    }

    int i = 0, j = 0;
    for(; i < left.length && j < right.length; ) {
        if(left[i] < right[j]){
            array[p++] = left[i++];
        } else {
            array[p++] = right[j++];
        }
    }
    while(j < right.length){
        array[p++] = right[j++];
    } 
    while(i < left.length){
        array[p++] = left[i++];
    }
}

//InPlaceMerge no extra array
private static void inPlaceMerge(int[] array, int p, int mid, int r) {
    int secondArrayStart = mid+1;
    int prevPlaced = mid+1;
    int q = mid+1;
    while(p < mid+1 && q <= r){
        boolean swapped = false;
        if(array[p] > array[q]) {
            swap(array, p, q);
            swapped = true;
        }   
        if(q != secondArrayStart && array[p] > array[secondArrayStart]) {
            swap(array, p, secondArrayStart);
            swapped = true;
        }
        //Check swapped value is in right place of second sorted array
        if(swapped && secondArrayStart+1 <= r && array[secondArrayStart+1] < array[secondArrayStart]) {
            prevPlaced = placeInOrder(array, secondArrayStart, prevPlaced);
        }
        p++;
        if(q < r) {     //q+1 <= r) {
            q++;
        }
    }
}

private static int placeInOrder(int[] array, int secondArrayStart, int prevPlaced) {
    int i = secondArrayStart;
    for(; i < array.length; i++) {
        //Simply swap till the prevPlaced position
        if(secondArrayStart < prevPlaced) {
            swap(array, secondArrayStart, secondArrayStart+1);
            secondArrayStart++;
            continue;
        }
        if(array[i] < array[secondArrayStart]) {
            swap(array, i, secondArrayStart);
            secondArrayStart++;
        } else if(i != secondArrayStart && array[i] > array[secondArrayStart]){
            break;
        }
    }
    return secondArrayStart;
}

private static void swap(int[] array, int m, int n){
    int temp = array[m];
    array[m] = array[n];
    array[n] = temp;
}
}
Castrate answered 25/2, 2016 at 6:23 Comment(1)
It's both O(n^2) and also highly unreadable (because of the occasional syntax errors and inconsistent / poor style)Brash
