Given an array with N elements, I am looking for M (M < N) successive sub-arrays with equal lengths, or with lengths that differ by at most 1. For example, if N = 12 and M = 4, all sub-arrays would have equal lengths of N/M = 3. If N = 100 and M = 12, I expect sub-arrays with lengths 8 and 9, and both sizes should be spread uniformly within the original array. This simple task turned out to be a little bit subtle to implement. I came up with an adaptation of Bresenham's line algorithm, which looks like this when coded in C++:
#include <cstddef>
#include <cstdint>
#include <vector>

/// The function suggests how an array with num_data items can be
/// subdivided into successively arranged groups (intervals) with
/// equal or "similar" length. The number of intervals is specified
/// by the parameter num_intervals. The result is stored into an array
/// with (num_intervals + 1) items, each of which indicates the start-index of
/// an interval, the last additional index being a sentinel item which
/// contains the value num_data.
///
/// Example:
///
/// Input:  num_data ........... 14,
///         num_intervals ...... 4
///
/// Result: result_start_idx ... [ 0, 3, 7, 10, 14 ]
///
void create_uniform_intervals( const size_t num_data,
                               const size_t num_intervals,
                               std::vector<size_t>& result_start_idx )
{
    const size_t avg_interval_len = num_data / num_intervals;
    // the remainder equals the number of intervals that must be
    // one element longer than avg_interval_len:
    const size_t remainder = num_data % num_intervals;

    // establish the new size of the result vector
    result_start_idx.resize( num_intervals + 1 );
    // write the sentinel value at the end:
    result_start_idx[ num_intervals ] = num_data;

    size_t offset = 0; // current offset

    // use Bresenham's line algorithm to distribute the `remainder`
    // longer intervals evenly over the num_intervals groups:
    intptr_t error = num_intervals / 2;
    for( size_t i = 0; i < num_intervals; i++ )
    {
        result_start_idx[ i ] = offset;
        offset += avg_interval_len;
        error  -= remainder;
        if( error < 0 )
        {
            offset++; // this interval gets one extra element
            error += num_intervals;
        } // if
    } // for
}
For N = 100 and M = 12, this code calculates the interval lengths 8 9 8 8 9 8 8 9 8 8 9 8.
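For reference, here is a minimal test driver that reproduces those lengths; the main function and the printing loop are my own additions for illustration, assuming the function above is defined in the same file:

#include <iostream>

int main()
{
    std::vector<size_t> starts;
    create_uniform_intervals( 100, 12, starts );

    // each interval length is the difference between
    // two consecutive start indices
    for( size_t i = 0; i + 1 < starts.size(); i++ )
        std::cout << starts[ i + 1 ] - starts[ i ] << ' ';
    std::cout << '\n'; // prints: 8 9 8 8 9 8 8 9 8 8 9 8

    return 0;
}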
The actual issue is that I don't know what this problem is called, so I had difficulty searching for it.
- Are there other algorithms for accomplishing such a task?
- What are they called? Maybe the names would come to me if I knew other areas of application.
I needed the algorithm as part of a bigger algorithm for clustering data. I think it could also be useful for implementing a parallel sort(?).
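For comparison, here is a closed-form sketch of the same kind of even partition (my own formulation and function name, not an established algorithm name I can cite): the i-th interval simply starts at (i * num_data) / num_intervals using integer division. For N = 100 and M = 12 this yields the lengths 8 8 9 8 8 9 8 8 9 8 8 9, so the longer intervals land at slightly different positions than in the Bresenham version above, but the lengths still differ by at most 1 and are spread evenly.

// Closed-form variant (hypothetical name); note that i * num_data
// may overflow size_t for very large inputs.
void create_uniform_intervals_closed_form( const size_t num_data,
                                           const size_t num_intervals,
                                           std::vector<size_t>& result_start_idx )
{
    result_start_idx.resize( num_intervals + 1 );
    for( size_t i = 0; i <= num_intervals; i++ )
        result_start_idx[ i ] = ( i * num_data ) / num_intervals;
}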