What is the most efficient string concatenation method in Python?

204

Is there an efficient mass string concatenation method in Python (like StringBuilder in C# or StringBuffer in Java)?

I found the following methods here:

  • Simple concatenation using +
  • Using a string list and the join method
  • Using MutableString from the UserString module
  • Using a character array and the array module
  • Using StringIO from the cStringIO module

What should be used and why?

(A related question is here.)

Overstate answered 22/8, 2009 at 19:53 Comment(1)
Similar question: stackoverflow.com/questions/476772Lomax
156

You may be interested in this: An optimization anecdote by Guido. It is worth remembering, though, that this is an old article that predates the existence of things like ''.join (although I guess string.joinfields is more-or-less the same).

On the strength of that, the array module may be fastest if you can shoehorn your problem into it. But ''.join is probably fast enough and has the benefit of being idiomatic and thus easier for other Python programmers to understand.

Finally, the golden rule of optimization: don't optimize unless you know you need to, and measure rather than guessing.

You can measure different methods using the timeit module. That can tell you which is fastest, instead of random strangers on the Internet making guesses.
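
For instance, a minimal sketch (the pieces and repeat count here are made up purely for illustration):

import timeit

# 300 short pieces, defined once in the setup so only the
# concatenation itself is timed.
setup = "parts = ['a', 'b', 'c'] * 100"

concat_stmt = """
s = ''
for p in parts:
    s += p
"""

print(timeit.timeit("''.join(parts)", setup=setup, number=10000))
print(timeit.timeit(concat_stmt, setup=setup, number=10000))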

Erymanthus answered 22/8, 2009 at 20:26 Comment(8)
To add to the point about when to optimize: make sure to test against the worst cases. For example, I can increase my sample so that my current code goes from running in 0.17 seconds to 170 seconds, and I want to test at larger sample sizes since there is less variation there.Yacketyyak
"Don't optimize until you know you need to." Unless you are just using a nominally different idiom and can avoid rework of your code with little extra effort.Azilian
One place you know you need to is interview (which is always a great time to brush up your deep understanding). Unfortunately I haven't found ANY modern article about this. (1) Is Java/C# String still that bad in 2017? (2) How about C++? (3) Now tell about latest and greatest in Python focusing on cases when we need to do millions of concatenations. Can we trust that join would work in linear time ?Blithe
What does "fast enough" mean for .join()? The main question is, does it a) create a copy of the string for concatenation (similar to s = s + 'abc'), which requires O(n) runtime, or b) simply append to the existing string without creating a copy, which requires O(1)?Shagreen
@CGFoX s = s + 'abc' creates a brand-new str object, then makes s refer to that instead of the original object referred to by s. If you do this inside a loop, you are repeatedly copying the (increasingly long) value of s into a series of new object. ''.join, however, operates "inside" the str type. It only has to access the contents of the operands once, to copy into a str object pre-allocated to be large enough to hold the result.Chutney
@user1854182: 1) The nature of Java and C# (and Python for that matter's) string types (immutable) makes optimizations of +/+= string concatenation an implementation detail at best; it doesn't really matter if it can work, you shouldn't rely on it. 2) For C++, as long as you use +=, it's always been fine, you just can't use str = str + more, because that inherently requires construction of a new string. There could theoretically be better ways (especially if you're concatenating a mix of types), but it's not going to be awful. 3) PEP8 guarantees linear performance as a side-note.Whichsoever
@CGFoX: ''.join is actually required to make a copy. But the idea is that you build up all your pieces into a list (which has amortized O(1) performance per append) and then join all at once, so you only copy any given character once, no matter how many str are being joined. Concatenating just two strings in a language with immutable strings is always O(m + n) in the length of the two strings (though CPython has a cheat that sometimes makes it just O(n) in the size of the second string), the problem join solves is repeated concatenation.Whichsoever
Update on this answer for Python 3: The array module solution is nonsensical at this point; it worked well on Python 2 because str happened to be both bytes-like and text-like. In Python 3: 1) The best solution for the original task in that blog (convert list of ints representing ASCII to an equivalent str) is almost certainly bytes(lst).decode() 2) array is not suitable for use with arbitrary str directly (doesn't portably handle non-latin-1 data); making it work would make it a slower equivalent to using bytearray and decoding 3) But ''.join is still the fast obvious way.Whichsoever
127

If you know all the components beforehand, use literal string interpolation, also known as f-strings or formatted strings, introduced in Python 3.6.

Given the test case from mkoistinen's answer, with the strings

domain = 'some_really_long_example.com'
lang = 'en'
path = 'some/really/long/path/'

The contenders, and their execution times on my computer (Python 3.6 on Linux, timed with IPython and the timeit module), are:

  • f'http://{domain}/{lang}/{path}' - 0.151 µs

  • 'http://%s/%s/%s' % (domain, lang, path) - 0.321 µs

  • 'http://' + domain + '/' + lang + '/' + path - 0.356 µs

  • ''.join(('http://', domain, '/', lang, '/', path)) - 0.249 µs (notice that building a constant-length tuple is slightly faster than building a constant-length list).

Thus the shortest and most beautiful code possible is also the fastest.
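
A minimal sketch to reproduce this kind of measurement with plain timeit, outside IPython (numbers will vary by machine and Python version):

import timeit

setup = """
domain = 'some_really_long_example.com'
lang = 'en'
path = 'some/really/long/path/'
"""

n = 1_000_000
for stmt in ("f'http://{domain}/{lang}/{path}'",
             "'http://%s/%s/%s' % (domain, lang, path)",
             "'http://' + domain + '/' + lang + '/' + path",
             "''.join(('http://', domain, '/', lang, '/', path))"):
    # timeit.timeit returns total seconds for n runs; convert to µs per run.
    print(stmt, timeit.timeit(stmt, setup=setup, number=n) / n * 1e6)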


For contrast, the fastest method on Python 2 on my computer is + concatenation, which takes 0.203 µs with 8-bit strings and 0.259 µs if the strings are all Unicode.

(In alpha versions of Python 3.6 the implementation of f'' strings was the slowest possible - the generated bytecode was pretty much equivalent to the ''.join() case, with unnecessary calls to str.__format__, which without arguments would just return self unchanged. These inefficiencies were addressed before 3.6 final.)

Washtub answered 13/7, 2016 at 21:38 Comment(3)
This is the actual answer to the question "which is fastest". The currently accepted answer is overly opinionated that this is a premature optimization, to the point where it doesn't even bother testing the various techniques available.Gebelein
As of 3.10.0, the rankings are unchanged relative to one another. My timings are f-string 110 ns, printf-style w/% 160 ns, concat w/+ 176 ns, and ''.join 130 ns (I used %%timeit magic where domain, lang and path were all defined in the setup step, the first line, and the code to test was not wrapped in a function, otherwise minimizing overhead unrelated to the operation being tested). Note that you can beat f-strings with '/'.join(('http:/', domain, lang, path)) (at 99 ns), but that's neither pretty nor generalizable.Whichsoever
@Whichsoever interesting though that the %-tuple formatting isn't faster, as Serhiy Storchaka was working on an optimization that would make it on par, maybe it just landed in 3.11.Manet
71

''.join(sequence_of_strings) is what usually works best – simplest and fastest.
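
For example, the usual pattern is to collect the pieces in a list and join once at the end (a minimal sketch; the strings are made up for illustration):

pieces = []
for i in range(5):
    pieces.append('chunk%d' % i)   # appending to a list is cheap (amortized O(1))
result = ''.join(pieces)           # one final pass copies everything into the result
print(result)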

Broeder answered 22/8, 2009 at 19:55 Comment(15)
But to do that, shouldn't I build a list first? Building an empty list, then appending many many strings into it and then joining is better than simply concatenating strings? Why? Please explain...I am a newB..Overstate
@mshsayem, in Python a sequence can be any enumerable object, even a function.Apochromatic
I absolutely love the ''.join(sequence) idiom. It's especially useful to produce comma-separated lists: ', '.join(map(str, [1, 2, 3])) gives the string '1, 2, 3'.Pernik
@Andrew .. that is indeed useful.Confucian
@Nick D: Please explain more. Best of all, please give a code example which concatenates strings in efficient way ...thanksOverstate
@mshsayem: "".join(chr(x) for x in xrange(65,91)) --- in this case, the argument to join is an iterator, created through a generator expression. There's no temporary list that gets constructed.Raymund
No matter how you obtain the strings, ''.join is a good way to put them together -- possibly via an intermediate list or tuple. Tell us in what form you get them and we can help more!Broeder
@Alex Martelli: Supppose, I want to build a dyanmic javascript that to be put on a requested page and the javascript may vary depending on some conditions and the size of the javascript can be large. In that case should I build a list/tuple of strings first? Or, just concatenate?Overstate
@mshayem, looks like pieces of that JS may come at several different moments in your processing, and not necessarily in order, so I'd definitely go with a list. "large" is a relative terms: surely you're not going to inject many megabytes of javascript into a poor HTML page, are you?-) Even with tens of megabytes, an intermediate list and ''.join at the end would still perform just fine, anyway.Broeder
@balpha: and yet the generator version is slower than the list comprehension version: C:\temp>python -mtimeit "''.join(chr(x) for x in xrange(65,91))" 100000 loops, best of 3: 9.71 usec per loop C:\temp>python -mtimeit "''.join([chr(x) for x in xrange(65,91)])" 100000 loops, best of 3: 7.1 usec per loopSynder
@hughdbrown, yes, when you have free memory out the wazoo (typical timeit case) listcomp can be better optimized than genexp, often by 20-30%. When memory's tight things are different -- hard to reproduce in timeit, though!-)Broeder
Exactly. If someone is concerned about the efficiency of string concatenation, we're usually talking about loooong strings, i.e. higher memory usage; also memory /time is a classical tradeoff. As always, the only reliable answer is measuring the particular use case and optimizing for the particular situation.Raymund
Can you quantify somewhat in your answer? E.g., how much faster? Under what conditions? What Python version? On what system (as an example)? What are the scaling characteristics? Presumably not Shlemiel the painter level. Or least provide a reference? (But without "Edit:", "Update:", or similar - the answer should appear as if it was written today.)Lomax
@Raymund & @AlexMartelli: CPython implementation detail here, but "".join(chr(x) for x in xrange(65,91)) loses to "".join([chr(x) for x in xrange(65,91)]) because internally, the first thing str.join does is convert anything that's not a list or tuple to a list. It still makes the list as if you called it with "".join(list(chr(x) for x in xrange(65,91))), using the slightly higher overhead genexpr instead of the "optimized for making the list directly" listcomp. This could change (PyUnicode_Writer could build lazily), but it's been true for decades.Whichsoever
@Whichsoever well not a "list" but a materialized fast sequence, which, essentially has the same time complexity as a list..Manet
42

It depends on what you're doing.

After Python 2.5, string concatenation with the + operator is pretty fast. If you're just concatenating a couple of values, using the + operator works best:

>>> x = timeit.Timer(stmt="'a' + 'b'")
>>> x.timeit()
0.039999961853027344

>>> x = timeit.Timer(stmt="''.join(['a', 'b'])")
>>> x.timeit()
0.76200008392333984

However, if you're putting together a string in a loop, you're better off using the list joining method:

>>> concat_stmt = """
... joined_str = ''
... for i in xrange(100000):
...   joined_str += str(i)
... """
>>> x = timeit.Timer(concat_stmt)
>>> x.timeit(100)
13.278000116348267

>>> list_stmt = """
... str_list = []
... for i in xrange(100000):
...   str_list.append(str(i))
... ''.join(str_list)
... """
>>> x = timeit.Timer(list_stmt)
>>> x.timeit(100)
12.401000022888184

...but notice that you have to be putting together a relatively high number of strings before the difference becomes noticeable.
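
(For Python 3, where xrange is gone, a roughly equivalent comparison is sketched below; absolute numbers will differ from the Python 2 figures above:)

import timeit

concat_stmt = """
joined_str = ''
for i in range(100000):
    joined_str += str(i)
"""

list_stmt = """
str_list = []
for i in range(100000):
    str_list.append(str(i))
''.join(str_list)
"""

print(timeit.Timer(concat_stmt).timeit(100))
print(timeit.Timer(list_stmt).timeit(100))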

Spellbound answered 22/8, 2009 at 20:36 Comment(2)
1) In your first measurement it's probably the list construction that takes the time. Try with a tuple. 2) CPython performs uniformly good, however other Python implementations perform way worse with + and +=Liebowitz
The above user is actually right. Using a tuple almost halves the time when compared to a list. >>> x = timeit.Timer(stmt="''.join(['a', 'b'])") >>> x.timeit() 0.08877951399881567 >>> x = timeit.Timer(stmt="''.join(('a', 'b'))") >>> x.timeit() 0.046619118000307935Clovah
29

As per John Fouhy's answer, don't optimize unless you have to, but if you're here and asking this question, it may be precisely because you have to.

In my case, I needed to assemble some URLs from string variables... fast. I noticed no one (so far) seems to be considering the string format method, so I thought I'd try that and, mostly for mild interest, I thought I'd toss the string interpolation operator in there for good measure.

To be honest, I didn't think either of these would stack up to a direct '+' operation or a ''.join(). But guess what? On my Python 2.7.5 system, the string interpolation operator rules them all and string.format() is the worst performer:

# concatenate_test.py

from __future__ import print_function
import timeit

domain = 'some_really_long_example.com'
lang = 'en'
path = 'some/really/long/path/'
iterations = 1000000

def meth_plus():
    '''Using + operator'''
    return 'http://' + domain + '/' + lang + '/' + path

def meth_join():
    '''Using ''.join()'''
    return ''.join(['http://', domain, '/', lang, '/', path])

def meth_form():
    '''Using string.format'''
    return 'http://{0}/{1}/{2}'.format(domain, lang, path)

def meth_intp():
    '''Using string interpolation'''
    return 'http://%s/%s/%s' % (domain, lang, path)

plus = timeit.Timer(stmt="meth_plus()", setup="from __main__ import meth_plus")
join = timeit.Timer(stmt="meth_join()", setup="from __main__ import meth_join")
form = timeit.Timer(stmt="meth_form()", setup="from __main__ import meth_form")
intp = timeit.Timer(stmt="meth_intp()", setup="from __main__ import meth_intp")

plus.val = plus.timeit(iterations)
join.val = join.timeit(iterations)
form.val = form.timeit(iterations)
intp.val = intp.timeit(iterations)

min_val = min([plus.val, join.val, form.val, intp.val])

print('plus %0.12f (%0.2f%% as fast)' % (plus.val, (100 * min_val / plus.val), ))
print('join %0.12f (%0.2f%% as fast)' % (join.val, (100 * min_val / join.val), ))
print('form %0.12f (%0.2f%% as fast)' % (form.val, (100 * min_val / form.val), ))
print('intp %0.12f (%0.2f%% as fast)' % (intp.val, (100 * min_val / intp.val), ))

The results:

# Python 2.7 concatenate_test.py
plus 0.360787868500 (90.81% as fast)
join 0.452811956406 (72.36% as fast)
form 0.502608060837 (65.19% as fast)
intp 0.327636957169 (100.00% as fast)

If I use a shorter domain and shorter path, interpolation still wins out. The difference is more pronounced, though, with longer strings.

Now that I had a nice test script, I also tested under Python 2.6, 3.3 and 3.4, here's the results. In Python 2.6, the plus operator is the fastest! On Python 3, join wins out. Note: these tests are very repeatable on my system. So, 'plus' is always faster on 2.6, 'intp' is always faster on 2.7 and 'join' is always faster on Python 3.x.

# Python 2.6 concatenate_test.py
plus 0.338213920593 (100.00% as fast)
join 0.427221059799 (79.17% as fast)
form 0.515371084213 (65.63% as fast)
intp 0.378169059753 (89.43% as fast)

# Python 3.3 concatenate_test.py
plus 0.409130576998 (89.20% as fast)
join 0.364938726001 (100.00% as fast)
form 0.621366866995 (58.73% as fast)
intp 0.419064424001 (87.08% as fast)

# Python 3.4 concatenate_test.py
plus 0.481188605998 (85.14% as fast)
join 0.409673971997 (100.00% as fast)
form 0.652010936996 (62.83% as fast)
intp 0.460400978001 (88.98% as fast)

# Python 3.5 concatenate_test.py
plus 0.417167026084 (93.47% as fast)
join 0.389929617057 (100.00% as fast)
form 0.595661019906 (65.46% as fast)
intp 0.404455224983 (96.41% as fast)

Lesson learned:

  • Sometimes, my assumptions are dead wrong.
  • Test against the system environment you'll be running in production.
  • String interpolation isn't dead yet!

tl;dr:

  • If you're using Python 2.6, use the '+' operator.
  • If you're using Python 2.7, use the '%' operator.
  • If you're using Python 3.x, use ''.join().
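
On Python 3.6+ you could add an f-string contender to the same script (a sketch, not part of the original benchmark; min_val would need to be recomputed to include it):

def meth_fstr():
    '''Using an f-string (Python 3.6+)'''
    return f'http://{domain}/{lang}/{path}'

fstr = timeit.Timer(stmt="meth_fstr()", setup="from __main__ import meth_fstr")
fstr.val = fstr.timeit(iterations)
print('fstr %0.12f' % fstr.val)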
Gravitation answered 13/7, 2014 at 0:39 Comment(5)
Note: literal string interpolation is faster still for 3.6+ : f'http://{domain}/{lang}/{path}'Grayback
Also, .format() has three forms, in order from fast to slow: "{}".format(x), "{0}".format(x), "{x}".format(x=x)Grayback
The real lesson: when your problem domain is small, e.g. composing short strings, method most often does not matter. And even when it matters, e.g. you really are building a million strings, the overhead often matters more. It is a typical symptom of worrying about the wrong problem. Only when the overhead is not significant, e.g. when building up entire book as a string, the method difference start to matter.Cheyenne
Here's an example based on the above benchmark that includes f-strings gist.github.com/holmanb/84be00eab35477565cb95a1d62a741a9Mosera
Results with 12 significant digits do not make sense in this context (for example, due to time jitter caused preemptive multitasking in the operating system). Can you round them to a more realistic number of significant digits?Lomax
12

Update: Python 3.11 has some optimizations for % formatting, yet it may still be better to stick with f-strings.

For Python 3.8.6/3.9, I had to do some dirty hacks, because perfplot was giving out some errors. Here, assume that x[0] is a and x[1] is b:

[Performance plot: large data]

The plot is nearly the same for large data. For small data:

[Performance plot: small data]

The plots were generated with perfplot, and this is the code; large data == range(8), small data == range(4).

import perfplot

from random import choice
from string import ascii_lowercase as letters

def generate_random(x):
    # Return two random lowercase strings, each of length x.
    data = ''.join(choice(letters) for i in range(x))
    sata = ''.join(choice(letters) for i in range(x))
    return [data, sata]

def fstring_func(x):
    return [ord(i) for i in f'{x[0]}{x[1]}']

def format_func(x):
    return [ord(i) for i in "{}{}".format(x[0], x[1])]

def replace_func(x):
    return [ord(i) for i in "|~".replace('|', x[0]).replace('~', x[1])]

def join_func(x):
    return [ord(i) for i in "".join([x[0], x[1]])]

perfplot.show(
    setup=lambda n: generate_random(n),
    kernels=[
        fstring_func,
        format_func,
        replace_func,
        join_func,
    ],
    n_range=[int(k ** 2.5) for k in range(4)],
)

For medium data, with four strings x[0], x[1], x[2], x[3] instead of two:

def generate_random(x):
    a =  ''.join(choice(letters) for i in range(x))
    b =  ''.join(choice(letters) for i in range(x))
    c =  ''.join(choice(letters) for i in range(x))
    d =  ''.join(choice(letters) for i in range(x))
    return [a,b,c,d]

[Performance plot: four strings, medium data]

It is better to stick with f-strings. Also, the speed of %s is similar to that of .format().
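
(To check that last claim within the same framework, a hypothetical extra kernel could be added to the kernels list, e.g.:)

def percent_func(x):
    # printf-style formatting, for comparison with format_func.
    return [ord(i) for i in "%s%s" % (x[0], x[1])]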

Lycian answered 25/10, 2020 at 19:17 Comment(0)
11

It depends largely on the relative sizes of the new string after each concatenation.

With the + operator, a new string is made for every concatenation. If the intermediate strings are relatively long, + becomes increasingly slow, because each new intermediate string has to be allocated and copied.

Consider this case:

# Python 2 (uses xrange and the print statement)
from time import time

a = 'aagsdfghfhdyjddtyjdhmfghmfgsdgsdfgsdfsdfsdfsdfsdfsdfddsksarigqeirnvgsdfsdgfsdfgfg'

# Case 1: += with a long string appended each iteration
stri = ''
t = time()
for i in xrange(1000):
    stri = stri + a + repr(i)
print time() - t

# Case 2: list append + join, long strings
l = []
t = time()
for i in xrange(1000):
    l.append(a + repr(i))
z = ''.join(l)
print time() - t

# Case 3: += with a short string appended each iteration
stri = ''
t = time()
for i in xrange(1000):
    stri = stri + repr(i)
print time() - t

# Case 4: list append + join, short strings
l = []
t = time()
for i in xrange(1000):
    l.append(repr(i))
z = ''.join(l)
print time() - t

Results

Case 1: 0.00493192672729

Case 2: 0.000509023666382

Case 3: 0.00042200088501

Case 4: 0.000482797622681

In cases 1 and 2, we append a large string each time, and join() performs about 10 times faster. In cases 3 and 4, we append a small string, and '+' performs slightly faster.

Boaster answered 12/3, 2014 at 15:26 Comment(1)
Results with 12 significant digits do not make sense in this context (for example, due to time jitter caused preemptive multitasking in the operating system). Can you round them to a more realistic number of significant digits?Lomax
3

I ran into a situation where I needed to have an appendable string of unknown size. These are the benchmark results (Python 2.7.3):

$ python -m timeit -s 's=""' 's+="a"'
10000000 loops, best of 3: 0.176 usec per loop

$ python -m timeit -s 's=[]' 's.append("a")'
10000000 loops, best of 3: 0.196 usec per loop

$ python -m timeit -s 's=""' 's="".join((s,"a"))'
100000 loops, best of 3: 16.9 usec per loop

$ python -m timeit -s 's=""' 's="%s%s"%(s,"a")'
100000 loops, best of 3: 19.4 usec per loop

This seems to show that '+=' is the fastest. The results from the skymind link are a bit out of date.

(I realize that the second example is not complete. The final list would need to be joined. This does show, however, that simply preparing the list takes longer than the string concatenation.)

Crashing answered 7/9, 2012 at 15:32 Comment(3)
I'm getting sub 1-sec times for 3rd and 4th tests. Why you getting such high times? pastebin.com/qabNMCHSVaish
@ronnieaka: He's getting sub 1-sec times for all tests. He is getting >1 µs for the 3rd & 4th, which you did not. I also get slower times on those tests (on Python 2.7.5, Linux). Could be CPU, version, build flags, who knows.Turkey
These benchmark results are useless. Especially, the first case, which isn't doing any string concatenation, just returning the second string value intact.Manet
2

One year later, let's test mkoistinen's answer with Python 3.4.3:

  • plus 0.963564149000 (95.83% as fast)
  • join 0.923408469000 (100.00% as fast)
  • form 1.501130934000 (61.51% as fast)
  • intp 1.019677452000 (90.56% as fast)

Nothing has changed; join is still the fastest method. That said, string interpolation (intp) is arguably the best choice in terms of readability, so you might want to use it nevertheless.

Muller answered 29/11, 2015 at 10:8 Comment(2)
Maybe it could be an addition to mkoistinen answer since it is a bit short of a full blown answer (or at least add the code you are using).Skylab
Results with 9 significant digits do not make sense in this context (for example, due to time jitter caused preemptive multitasking in the operating system). Can you round them to a more realistic number of significant digits? (NB: Why are there three trailing zeros?)Lomax
2

The "new f-strings in Python 3.6" are probably the most efficient way of concatenating strings.

Using %s

>>> timeit.timeit("""name = "Some"
... age = 100
... '%s is %s.' % (name, age)""", number = 10000)
0.0029734770068898797

Using .format

>>> timeit.timeit("""name = "Some"
... age = 100
... '{} is {}.'.format(name, age)""", number = 10000)
0.004015227983472869

Using f-strings

>>> timeit.timeit("""name = "Some"
... age = 100
... f'{name} is {age}.'""", number = 10000)
0.0019175919878762215
Cilium answered 22/5, 2018 at 18:39 Comment(0)
1

Inspired by JasonBaker's benchmarks, here's a simple one comparing eleven "abcdefghijklmnopqrstuvxyz" strings, showing that .join() is faster even for this modest number of strings:

Concatenation

>>> x = timeit.Timer(stmt='"abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz" + "abcdefghijklmnopqrstuvxyz"')
>>> x.timeit()
0.9828147209324385

Join

>>> x = timeit.Timer(stmt='"".join(["abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz", "abcdefghijklmnopqrstuvxyz"])')
>>> x.timeit()
0.6114138159765048
Vamp answered 30/1, 2013 at 17:48 Comment(2)
Have a look at the accepted answer (scroll down long) of this question: #1349811Overstate
What do you mean by "increase in variables"? A relatively low number of constant strings? Or something else?Lomax
1

For a small set of short strings (i.e. 2 or 3 strings of no more than a few characters), plus is still way faster. Using mkoistinen's wonderful script in Python 2 and 3:

plus 2.679107467004 (100.00% as fast)
join 3.653773699996 (73.32% as fast)
form 6.594011374000 (40.63% as fast)
intp 4.568015249999 (58.65% as fast)

So when your code is doing a huge number of separate small concatenations, plus is the preferred way if speed is crucial.
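
A minimal command-line sketch of that kind of measurement (the setup strings are illustrative; outputs omitted since they vary by machine):

$ python -m timeit -s "a='ab'; b='cd'; c='ef'" "a + b + c"
$ python -m timeit -s "a='ab'; b='cd'; c='ef'" "''.join((a, b, c))"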

Luau answered 2/2, 2017 at 11:45 Comment(0)
-4

The best way to concatenate strings is using '+'. For example:

print(string1 + string2)

Another easy way is using the join method. For example:

''.join([string1, string2])
Gynaeceum answered 29/6, 2023 at 7:36 Comment(1)
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.Tesler
