Best way to combine two or more byte arrays in C#

I have 3 byte arrays in C# that I need to combine into one. What would be the most efficient method to complete this task?

Actinism answered 6/1, 2009 at 2:54 Comment(3)
What specifically are your requirements? Are you taking the union of the arrays or are you preserving multiple instances of the same value? Do you want the items sorted, or do you want to preserve the ordering in the initial arrays? Are you looking for efficiency in speed or in lines of code?Deandreadeane
If you are able to use LINQ, then you can just use the Concat method: IEnumerable<byte> arrays = array1.Concat(array2).Concat(array3);Bootblack
Please try to be more clear in your questions. This vague question has caused a lot of confusion amongst those people good enough to take the time to answer you.Superincumbent
Score: 373

For primitive types (including bytes), use System.Buffer.BlockCopy instead of System.Array.Copy. It's faster.

I timed each of the suggested methods in a loop executed 1 million times using 3 arrays of 10 bytes each. Here are the results:

  1. New Byte Array using System.Array.Copy - 0.2187556 seconds
  2. New Byte Array using System.Buffer.BlockCopy - 0.1406286 seconds
  3. IEnumerable<byte> using C# yield operator - 0.0781270 seconds
  4. IEnumerable<byte> using LINQ's Concat<> - 0.0781270 seconds

I increased the size of each array to 100 elements and re-ran the test:

  1. New Byte Array using System.Array.Copy - 0.2812554 seconds
  2. New Byte Array using System.Buffer.BlockCopy - 0.2500048 seconds
  3. IEnumerable<byte> using C# yield operator - 0.0625012 seconds
  4. IEnumerable<byte> using LINQ's Concat<> - 0.0781265 seconds

I increased the size of each array to 1000 elements and re-ran the test:

  1. New Byte Array using System.Array.Copy - 1.0781457 seconds
  2. New Byte Array using System.Buffer.BlockCopy - 1.0156445 seconds
  3. IEnumerable<byte> using C# yield operator - 0.0625012 seconds
  4. IEnumerable<byte> using LINQ's Concat<> - 0.0781265 seconds

Finally, I increased the size of each array to 1 million elements and re-ran the test, executing each loop only 4000 times:

  1. New Byte Array using System.Array.Copy - 13.4533833 seconds
  2. New Byte Array using System.Buffer.BlockCopy - 13.1096267 seconds
  3. IEnumerable<byte> using C# yield operator - 0 seconds
  4. IEnumerable<byte> using LINQ's Concat<> - 0 seconds

So, if you need a new byte array, use

byte[] rv = new byte[a1.Length + a2.Length + a3.Length];
System.Buffer.BlockCopy(a1, 0, rv, 0, a1.Length);
System.Buffer.BlockCopy(a2, 0, rv, a1.Length, a2.Length);
System.Buffer.BlockCopy(a3, 0, rv, a1.Length + a2.Length, a3.Length);

But, if you can use an IEnumerable<byte>, DEFINITELY prefer LINQ's Concat<> method. It's only slightly slower than the C# yield operator, but is more concise and more elegant.

IEnumerable<byte> rv = a1.Concat(a2).Concat(a3);

If you have an arbitrary number of arrays and are using .NET 3.5, you can make the System.Buffer.BlockCopy solution more generic like this:

private byte[] Combine(params byte[][] arrays)
{
    byte[] rv = new byte[arrays.Sum(a => a.Length)];
    int offset = 0;
    foreach (byte[] array in arrays) {
        System.Buffer.BlockCopy(array, 0, rv, offset, array.Length);
        offset += array.Length;
    }
    return rv;
}

*Note: The above block requires the following namespace directive at the top of the file for it to work.

using System.Linq;

To Jon Skeet's point regarding iteration of the subsequent data structures (byte array vs. IEnumerable<byte>), I re-ran the last timing test (1 million elements, 4000 iterations), adding a loop that iterates over the full array with each pass:

  1. New Byte Array using System.Array.Copy - 78.20550510 seconds
  2. New Byte Array using System.Buffer.BlockCopy - 77.89261900 seconds
  3. IEnumerable<byte> using C# yield operator - 551.7150161 seconds
  4. IEnumerable<byte> using LINQ's Concat<> - 448.1804799 seconds

The point is, it is VERY important to understand the efficiency of both the creation and the usage of the resulting data structure. Simply focusing on the efficiency of the creation may overlook the inefficiency associated with the usage. Kudos, Jon.
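To see why the un-enumerated IEnumerable<byte> timings above come out near zero, here is a minimal sketch (not the original benchmark code; the array sizes are arbitrary) that times the Concat call separately from the actual enumeration:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

byte[] a1 = new byte[1_000_000];
byte[] a2 = new byte[1_000_000];

var sw = Stopwatch.StartNew();
IEnumerable<byte> lazy = a1.Concat(a2); // builds an iterator object; no bytes are copied yet
sw.Stop();
Console.WriteLine($"Concat call alone: {sw.Elapsed}");

sw.Restart();
byte[] materialized = lazy.ToArray();   // every byte is actually visited and copied here
sw.Stop();
Console.WriteLine($"Full enumeration:  {sw.Elapsed} ({materialized.Length} bytes)");
```

The first timing measures only the construction of the iterator; all of the real work lands in the second.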

Samale answered 6/1, 2009 at 3:53 Comment(21)
So the IEnumerable is always faster than BlockCopy?Anecdotist
Yes. There is no memory being allocated/copied (aside from the iterator anonymous class).Cyanic
But are you actually converting it into an array at the end, as the question requires? If not, of course it's faster - but it's not fulfilling the requirements.Gabler
Isn't (lazy) functional programming grand? ;-)Dunseath
If you argue the IEnumerable<> solution doesn't fulfill the requirement, then I'll argue the requirement is poor because it offers no context. A combined array may not be necessary, in which case it'd be foolish to create one just to satisfy a poor requirement. Better to update the requirement.Samale
Awesome summary of multiple methods with pros and cons. +1. Much appreciated.Dominique
Re: Matt Davis - It doesn't matter whether your "requirements" need the IEnumerable turned into an array - all your requirements need is that the result is actually used in some fashion. The reason your performance tests on IEnumerable are so low is that you are not actually doing anything! LINQ does not perform any of its work until you attempt to use the results. For this reason I find your answer objectively incorrect, and it could lead others to use LINQ when they absolutely should not if they care about performance.Donny
As a general rule if your performance tests show 0 or close to 0 (as your IEnumerable results do), there is probably something wrong with your test.Donny
@csauve, Jon Skeet made the same point over 4 years ago, and I addressed it as part of the edit to my answer. I appreciate the clarification, but perhaps commenting after reading the entire answer would be more beneficial in the future.Samale
I read the entire answer including your update, my comment stands. I know I'm joining the party late, but the answer is grossly misleading and the first half is patently false.Donny
So when you say "But, if you can use an IEnumerable<byte>, DEFINITELY prefer Linq's Concat<> method." that is a false statement. When you make false statements (and make them in bold all caps DEFINITELY no-less) you should correct them inline rather than 20 lines down.Donny
I am also trying to make it very clear that the "result must be a byte array" requirement is of no consequence, which it seems you still cling to.Donny
Once you get the required 2000 rep, feel free to edit the answer to your satisfaction. Alternatively, you can post an answer yourself and provide as much clarification as you wish. Until then, your comments will serve to complement my edited answer.Samale
If the code that generated these results isn't available for public scrutiny, these figures are as good as fiction.Bise
Why is the answer that contains false and misleading information the top-voted answer, and why was it edited to basically completely invalidate its original statement after someone (Jon Skeet) pointed out that it didn't even answer the OP's question?Marinelli
@MattDavis, would you please publish your test code too? I cannot get the same result proportions as yours.Physiography
Misleading answer. Even the edited version isn't answering the question.Hohenlohe
The IEnumerable<byte> usage in your answer shows limited knowledge of LINQ, because you only built the expressions but never materialized the result. If you target extreme performance, you must avoid LINQ. Thanks for pointing out the faster execution of System.Buffer.BlockCopy.Berardo
@Berardo that's not always true. I just did a benchmark on various array "combining" methods and Linq.SelectMany didn't perform noticeably worse than BlockCopy and the other methods here in .NET 7. I'll post a complete write-up on this. I believe they've really optimized many LINQ operations!Anse
@Anse would be nice to have some optimizations, moving soon to .NET 8, thanks for sharingBerardo
@Berardo I've posted my findings in this question. You can read it here: https://mcmap.net/q/99989/-best-way-to-combine-two-or-more-byte-arrays-in-cAnse
Score: 183

Many of the answers seem to me to be ignoring the stated requirements:

  • The result should be a byte array
  • It should be as efficient as possible

These two together rule out a LINQ sequence of bytes - anything with yield is going to make it impossible to get the final size without iterating through the whole sequence.

If those aren't the real requirements of course, LINQ could be a perfectly good solution (or the IList<T> implementation). However, I'll assume that Superdumbell knows what he wants.

(EDIT: I've just had another thought. There's a big semantic difference between making a copy of the arrays and reading them lazily. Consider what happens if you change the data in one of the "source" arrays after calling the Combine (or whatever) method but before using the result - with lazy evaluation, that change will be visible. With an immediate copy, it won't. Different situations will call for different behaviour - just something to be aware of.)
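A minimal sketch of that semantic difference (the variable names are illustrative), mutating a source array after the two kinds of "combining":

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

byte[] src = { 1, 2, 3 };
byte[] other = { 4, 5, 6 };

// Lazy view over the source arrays: nothing is copied yet.
IEnumerable<byte> lazy = src.Concat(other);

// Eager copy, taken immediately.
byte[] copy = new byte[src.Length + other.Length];
Buffer.BlockCopy(src, 0, copy, 0, src.Length);
Buffer.BlockCopy(other, 0, copy, src.Length, other.Length);

src[0] = 99; // mutate a source array AFTER combining

Console.WriteLine(lazy.First()); // 99: the lazy sequence sees the change
Console.WriteLine(copy[0]);      // 1: the eager copy does not
```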

Here are my proposed methods - which are very similar to those contained in some of the other answers, certainly :)

public static byte[] Combine(byte[] first, byte[] second)
{
    byte[] ret = new byte[first.Length + second.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    return ret;
}

public static byte[] Combine(byte[] first, byte[] second, byte[] third)
{
    byte[] ret = new byte[first.Length + second.Length + third.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    Buffer.BlockCopy(third, 0, ret, first.Length + second.Length,
                     third.Length);
    return ret;
}

public static byte[] Combine(params byte[][] arrays)
{
    byte[] ret = new byte[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (byte[] data in arrays)
    {
        Buffer.BlockCopy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}

Of course the "params" version requires creating an array of the byte arrays first, which introduces extra inefficiency.
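One way to avoid that intermediate array (a sketch, not from the original answer) is to accept any materialized collection of arrays. IReadOnlyList is used here because the collection is traversed twice (once to sum the lengths, once to copy), which would be unsafe on a lazy sequence:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Variant that takes an existing collection instead of a params array.
static byte[] Combine(IReadOnlyList<byte[]> arrays)
{
    byte[] ret = new byte[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (byte[] data in arrays)
    {
        Buffer.BlockCopy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}

var combined = Combine(new List<byte[]> { new byte[] { 1, 2 }, new byte[] { 3 } });
Console.WriteLine(string.Join(",", combined)); // 1,2,3
```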

Gabler answered 6/1, 2009 at 8:39 Comment(7)
Jon, I understand precisely what you're saying. My only point is that sometimes questions are asked with a particular implementation already in mind without realizing that other solutions exist. Simply providing an answer without offering alternatives seems like a disservice to me. Thoughts?Samale
@Matt: Yes, offering alternatives is good - but it's worth explaining that they are alternatives rather than passing them off as the answer to the question being asked. (I'm not saying that you did that - your answer is very good.)Gabler
(Although I think your performance benchmark should show the time taken to go through all the results in each case, too, to avoid giving lazy evaluation an unfair advantage.)Gabler
You can also change params byte[][] to IEnumerable<byte[]>, thus avoiding the need for an arrayLeboff
Even without meeting the requirement of "result must be an array", simply meeting a requirement of "result must be used in some fashion" would make LINQ non-optimal. I think the requirement to be able to use the result should be implicit!Donny
@Jon why not: public static T[] Combine<T>(T[] first, T[] second) ?Upcoming
@andleer: Aside from anything else, Buffer.BlockCopy only works with primitive types.Gabler
Score: 67

I took Matt's LINQ example one step further for code cleanliness:

byte[] rv = a1.Concat(a2).Concat(a3).ToArray();

In my case, the arrays are small, so I'm not concerned about performance.

Reno answered 16/10, 2014 at 19:25 Comment(2)
Short and simple solution, a performance test would be great!Purpura
This is definitely clear, readable, requires no external libraries/helpers, and, in terms of development time, is quite efficient. Great when run-time performance is not critical.Corliss
Score: 35

If you simply need a new byte array, then use the following:

byte[] Combine(byte[] a1, byte[] a2, byte[] a3)
{
    byte[] ret = new byte[a1.Length + a2.Length + a3.Length];
    Array.Copy(a1, 0, ret, 0, a1.Length);
    Array.Copy(a2, 0, ret, a1.Length, a2.Length);
    Array.Copy(a3, 0, ret, a1.Length + a2.Length, a3.Length);
    return ret;
}

Alternatively, if you just need a single IEnumerable, consider using the C# 2.0 yield operator:

IEnumerable<byte> Combine(byte[] a1, byte[] a2, byte[] a3)
{
    foreach (byte b in a1)
        yield return b;
    foreach (byte b in a2)
        yield return b;
    foreach (byte b in a3)
        yield return b;
}
Cyanic answered 6/1, 2009 at 3:3 Comment(2)
I've done something similar to your 2nd option to merge large streams, worked like a charm. :)Dexamyl
The second option is great. +1.Mani
Score: 18

I actually ran into some issues using Concat... (with arrays in the 10-million-element range, it actually crashed).

I found the following to be simple and easy; it works well enough without crashing on me, and it works for ANY number of arrays (not just three). It uses LINQ:

public static byte[] ConcatByteArrays(params byte[][] arrays)
{
    return arrays.SelectMany(x => x).ToArray();
}
Chidester answered 13/8, 2015 at 15:30 Comment(1)
This should be the accepted answer due to its simplicity, versatility and pedagogical value >;-)Eberhardt
Score: 7

The MemoryStream class does this job pretty nicely for me. I couldn't get the Buffer class to run as fast as MemoryStream.

using (MemoryStream ms = new MemoryStream())
{
    ms.Write(BitConverter.GetBytes(22), 0, 4);
    ms.Write(BitConverter.GetBytes(44), 0, 4);
    byte[] combined = ms.ToArray(); // capture the concatenated bytes
}
Bayless answered 14/5, 2010 at 12:49 Comment(4)
As qwe stated, I did a test in a loop 10,000,000 times, and MemoryStream came out 290% SLOWER than Buffer.BlockCopyCommonality
In some cases you may be iterating over an enumerable of arrays without any foreknowledge of the individual array lengths. This works well in this scenario. BlockCopy relies on having a destination array precreatedPulse
As @Pulse said, this answer is perfect for me because I have no knowledge of the size of things I have to write and lets me do things very cleanly. It also plays nice with .NET Core 3's [ReadOnly]Span<byte>!Designedly
If you initialize the MemoryStream with the final size, the internal buffer will not be reallocated and it will be faster @esac.Seessel
Score: 6

Almost 15 years ago this was asked... and now I'm adding yet another answer... hopefully bringing it up to date with current .NET developments.

The OP's requirements were (as perfectly stated and addressed by Jon Skeet's answer):

  1. The result should be a byte array
  2. It should be as efficient as possible

Forcing the result to be a byte[] rules out LINQ answers that never materialized it (i.e., never actually performed the "byte array combining") due to LINQ's deferred execution, a common pitfall when measuring performance.

To find the most efficient way we have to measure. And it's not trivial, because we have to measure different scenarios that depend on the amount of data and on the runtime we are targeting. To help with all that, BenchmarkDotNet comes to the rescue!

At the time this question was answered, Buffer.BlockCopy was the fastest solution, but things may have changed since then. The .NET team has been working diligently at optimizing hot code paths in recent .NET releases, including many LINQ optimizations.

The three methods we'll measure are Buffer.BlockCopy, Array.Copy, and LINQ's Enumerable.SelectMany. You may be surprised (I was):

public byte[] CombineBlockCopy(params byte[][] arrays) {
    byte[] combined = new byte[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (byte[] array in arrays) {
        Buffer.BlockCopy(array, 0, combined, offset, array.Length);
        offset += array.Length;
    }
    return combined;
}

public byte[] CombineArrayCopy(params byte[][] arrays) {
    byte[] combined = new byte[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (byte[] array in arrays) {
        Array.Copy(array, 0, combined, offset, array.Length);
        offset += array.Length;
    }
    return combined;
}

public byte[] CombineSelectMany(params byte[][] arrays) 
    => arrays.SelectMany(x => x).ToArray();

Small Arrays

Combining 1,024 arrays of 100 elements:

1024 x 100 plot

1024 x 100 results

We can see that there were HUGE improvements in performance already in .NET 6, with more optimizations still in .NET 7, which makes LINQ totally fine in this scenario, while in .NET Framework LINQ was way too slow for any array-combining job :-)

Note: Since there's little difference between .NET Framework 4.6.2 and 4.8.1, I'll omit the 4.6.2 results from now on.

Medium Arrays

Combining 1,024 arrays of 100,000 elements:

1024 x 100,000 plot

1024 x 100,000 results

With larger arrays the results are even more pronounced. LINQ in .NET Framework should not even be considered for performance-sensitive code, while in .NET 6 and 7 it performs as well as the other methods (and sometimes even outperforms them, although within the benchmark's margin of error).

In .NET 6 all three methods perform the same (within the margin of error). Array.Copy and Buffer.BlockCopy perform as well as they do in .NET Framework 4.8.1.

In .NET 7 all methods are significantly faster than in .NET 6, and the LINQ solution is as fast as the old workhorses Array.Copy and Buffer.BlockCopy. So, if you're targeting .NET 7 or above, the LINQ solution matches Buffer.BlockCopy and Array.Copy while being very succinct and self-describing.

Conclusion

While Array.Copy and Buffer.BlockCopy used to be the kings of this kind of byte crunching, recent runtime optimizations have brought the LINQ-based solution to the same level of performance for larger arrays (and almost the same performance for small arrays).

Despite common wisdom indicating that Buffer.BlockCopy should perform better than Array.Copy, that was not the case in any tested scenario: the difference can go either way and stays within the margin of error. This "wisdom" may have been true for older versions of the Framework. Always measure your code's performance for the target runtime and architecture!

As always, happy benchmarking!

Benchmark Source Code

NOTE: I'll set up a GitHub repository with the full solution/project soon. For now, here is the full source for the benchmarks for small, medium and large arrays.

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

BenchmarkRunner.Run<Bench>();

[MemoryDiagnoser]
[HideColumns("Job", "Error", "StdDev", "Median")]
[SimpleJob(RuntimeMoniker.Net462)]
[SimpleJob(RuntimeMoniker.Net481)]
[SimpleJob(RuntimeMoniker.Net60)]
[SimpleJob(RuntimeMoniker.Net70)]
[RPlotExporter]
public class Bench
{
    static List<byte[]> Build(int size, int length) {
        var list = new List<byte[]>();
        for (int i = 0; i < size; i++)
            list.Add(new byte[length]);
        return list;
    }

    static readonly List<byte[]> smallArrays = Build(1024, 100);
    static readonly List<byte[]> mediumArrays = Build(1024, 100_000);
    static readonly List<byte[]> largeArrays = Build(1024, 1_000_000);

    public IEnumerable<object> Arrays() {
        yield return new Data(smallArrays); 
        yield return new Data(mediumArrays);
        yield return new Data(largeArrays); 
    }

    public class Data {
        public Data(IEnumerable<byte[]> arrays) { Arrays = arrays; }
        public IEnumerable<byte[]> Arrays { get; }
        public override string ToString() => $"[{Arrays.Count()}x{Arrays.First().Length}]";
    }

    [Benchmark, ArgumentsSource(nameof(Arrays))]
    public byte[] Buffer_Block_Copy(Data data) {
        var arrays = data.Arrays;
        byte[] combined = new byte[arrays.Sum(x => x.Length)];
        int offset = 0;
        foreach (byte[] array in arrays) {
            Buffer.BlockCopy(array, 0, combined, offset, array.Length);
            offset += array.Length;
        }
        return combined;
    }
    [Benchmark, ArgumentsSource(nameof(Arrays))]
    public byte[] Array_Copy(Data data) {
        var arrays = data.Arrays;
        byte[] combined = new byte[arrays.Sum(x => x.Length)];
        int offset = 0;
        foreach (byte[] array in arrays) {
            Array.Copy(array, 0, combined, offset, array.Length);
            offset += array.Length;
        }
        return combined;
    }
    [Benchmark, ArgumentsSource(nameof(Arrays))]
    public byte[] Linq_SelectMany(Data data) {
        byte[] combined = data.Arrays.SelectMany(x => x).ToArray();  
        return combined;
    }
}
Anse answered 23/12, 2023 at 6:1 Comment(0)
Score: 3
    public static byte[] Concat(params byte[][] arrays) {
        using (var mem = new MemoryStream(arrays.Sum(a => a.Length))) {
            foreach (var array in arrays) {
                mem.Write(array, 0, array.Length);
            }
            return mem.ToArray();
        }
    }
Natation answered 10/12, 2014 at 17:20 Comment(2)
Your answer could be better if you posted a little explanation of what this code sample does.Reareace
It concatenates an array of byte arrays into one large byte array, like this: [1,2,3] + [4,5] + [6,7] ==> [1,2,3,4,5,6,7]Natation
Score: 2
    public static bool MyConcat<T>(ref T[] base_arr, ref T[] add_arr)
    {
        try
        {
            int base_size = base_arr.Length;
            int size_T = System.Runtime.InteropServices.Marshal.SizeOf(base_arr[0]);
            Array.Resize(ref base_arr, base_size + add_arr.Length);
            Buffer.BlockCopy(add_arr, 0, base_arr, base_size * size_T, add_arr.Length * size_T);
        }
        catch (IndexOutOfRangeException ioor)
        {
            MessageBox.Show(ioor.Message);
            return false;
        }
        return true;
    }
Zigzagger answered 13/4, 2011 at 6:11 Comment(1)
Unfortunately this won't work with all types. Marshal.SizeOf() will be unable to return a size for many types (try using this method with arrays of strings and you'll see an exception: "Type 'System.String' cannot be marshaled as an unmanaged structure; no meaningful size or offset can be computed"). You could try limiting the type parameter to value types only (by adding where T : struct), but - not being an expert in the innards of the CLR - I couldn't say whether you might get exceptions on certain structs as well (e.g. if they contain reference type fields).Chalcidice
Score: 2

You can use generics to combine arrays. The following code can easily be expanded to three arrays, and this way you never need to duplicate the code for different types of arrays. Some of the above answers seem overly complex to me.

private static T[] CombineTwoArrays<T>(T[] a1, T[] a2)
    {
        T[] arrayCombined = new T[a1.Length + a2.Length];
        Array.Copy(a1, 0, arrayCombined, 0, a1.Length);
        Array.Copy(a2, 0, arrayCombined, a1.Length, a2.Length);
        return arrayCombined;
    }
Einstein answered 20/6, 2018 at 20:12 Comment(0)
Score: 1
    /// <summary>
    /// Combine two Arrays with offset and count
    /// </summary>
    /// <param name="src1"></param>
    /// <param name="offset1"></param>
    /// <param name="count1"></param>
    /// <param name="src2"></param>
    /// <param name="offset2"></param>
    /// <param name="count2"></param>
    /// <returns></returns>
    public static T[] Combine<T>(this T[] src1, int offset1, int count1, T[] src2, int offset2, int count2) 
        => Enumerable.Range(0, count1 + count2).Select(a => (a < count1) ? src1[offset1 + a] : src2[offset2 + a - count1]).ToArray();
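For illustration, a hedged usage sketch; the class name ArrayExtensions is my own (extension methods must be declared in a static class, which the snippet above omits):

```csharp
using System;
using System.Linq;

byte[] left = { 0, 1, 2, 3 };
byte[] right = { 4, 5, 6, 7 };

// Take two elements from each array, starting at index 1 in both.
byte[] merged = left.Combine(1, 2, right, 1, 2);
Console.WriteLine(string.Join(",", merged)); // 1,2,5,6

static class ArrayExtensions
{
    // The extension method from the answer above, wrapped in a static class.
    public static T[] Combine<T>(this T[] src1, int offset1, int count1, T[] src2, int offset2, int count2)
        => Enumerable.Range(0, count1 + count2)
                     .Select(a => (a < count1) ? src1[offset1 + a] : src2[offset2 + a - count1])
                     .ToArray();
}
```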
Gerfalcon answered 10/5, 2020 at 3:59 Comment(3)
Thank you for the contribution. Since there are already a number of highly rated answers to this from over a decade ago, it’d be useful to offer an explanation of what distinguishes your approach. Why should someone use this instead of e.g. the accepted answer?Deron
I like using extension methods because they make for clear, understandable code. This code selects from two arrays, each with a start index and count, and concatenates them. And since the method is generic and an extension, it is ready for all array types.Incapacious
That makes good sense to me! Do you mind editing your answer to include that information? I think it would be valuable to future readers to have that up front, so they can quickly distinguish your approach from the existing answers. Thank you!Deron
Score: 0

Here's a generalization of the answer provided by @Jon Skeet. It is basically the same, only it is usable for any type of array, not only bytes:

public static T[] Combine<T>(T[] first, T[] second)
{
    T[] ret = new T[first.Length + second.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    return ret;
}

public static T[] Combine<T>(T[] first, T[] second, T[] third)
{
    T[] ret = new T[first.Length + second.Length + third.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    Buffer.BlockCopy(third, 0, ret, first.Length + second.Length,
                     third.Length);
    return ret;
}

public static T[] Combine<T>(params T[][] arrays)
{
    T[] ret = new T[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (T[] data in arrays)
    {
        Buffer.BlockCopy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}
Hypabyssal answered 29/1, 2014 at 9:25 Comment(3)
DANGER! These methods won't work properly with any array type whose elements are longer than one byte (pretty much everything other than byte arrays). Buffer.BlockCopy() works with quantities of bytes, not numbers of array elements. The reason it can be used easily with a byte array is that every element of the array is a single byte, so the physical length of the array equals the number of elements. To turn Jon's byte[] methods into generic methods you'll need to multiply all the offsets and lengths by the byte-length of a single array element - otherwise you won't copy all of the data.Chalcidice
Normally to make this work you'd compute the size of a single element using sizeof(...) and multiply that by the number of elements you want to copy, but sizeof can't be used with a generic type. It is possible - for some types - to use Marshal.SizeOf(typeof(T)), but you'll get runtime errors with certain types (e.g. strings). Someone with more thorough knowledge of the inner workings of CLR types will be able to point out all the possible traps here. Suffice to say that writing a generic array concatenation method [using BlockCopy] isn't trivial.Chalcidice
And finally - you can write a generic array concatenation method like this in almost exactly the way shown above (with slightly lower performance) by using Array.Copy instead. Just replace all the Buffer.BlockCopy calls with Array.Copy calls.Chalcidice
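Putting these comments together, a sketch of the variadic overload rewritten with Array.Copy, which works in element counts rather than bytes and so is safe for any element type:

```csharp
using System;
using System.Linq;

// Generic combine using Array.Copy: offsets and lengths are element
// counts, so no per-element byte size is needed.
static T[] Combine<T>(params T[][] arrays)
{
    T[] ret = new T[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (T[] data in arrays)
    {
        Array.Copy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}

// Works for reference types too, where Buffer.BlockCopy would fail.
string[] merged = Combine(new[] { "a", "b" }, new[] { "c" });
Console.WriteLine(string.Join(",", merged)); // a,b,c
```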
Score: -3

All you need to do is pass a list of byte arrays, and this function will return the merged array of bytes. This is the best solution, I think :).

public static byte[] CombineMultipleByteArrays(List<byte[]> lstByteArray)
        {
            using (var ms = new MemoryStream())
            {
                using (var doc = new iTextSharp.text.Document())
                {
                    using (var copy = new PdfSmartCopy(doc, ms))
                    {
                        doc.Open();
                        foreach (var p in lstByteArray)
                        {
                            using (var reader = new PdfReader(p))
                            {
                                copy.AddDocument(reader);
                            }
                        }

                        doc.Close();
                    }
                }
                return ms.ToArray();
            }
        }
Meakem answered 27/7, 2017 at 13:3 Comment(0)
Score: -5

Concat is the right answer, but for some reason a hand-rolled thing is getting the most votes. If you like that answer, perhaps you'd like this more general solution even more:

    IEnumerable<byte> Combine(params byte[][] arrays)
    {
        foreach (byte[] a in arrays)
            foreach (byte b in a)
                yield return b;
    }

which would let you do things like:

    byte[] c = Combine(new byte[] { 0, 1, 2 }, new byte[] { 3, 4, 5 }).ToArray();
Marrero answered 6/1, 2009 at 5:33 Comment(1)
The question specifically asks for the most efficient solution. Enumerable.ToArray isn't going to be very efficient, as it can't know the size of the final array to start with - whereas the hand-rolled techniques can.Gabler

© 2022 - 2024 — McMap. All rights reserved.