TStringList, Dynamic Array or Linked List in Delphi?

I have a choice.

I have a number of already ordered strings that I need to store and access. It looks like I can choose between using:

  1. A TStringList

  2. A Dynamic Array of strings, and

  3. A Linked List of strings (singly linked)

    and Alan in his comment suggested I also add to the choices:

  4. TList<string>

In what circumstances is each of these better than the others?

Which is best for small lists (under 10 items)?

Which is best for large lists (over 1000 items)?

Which is best for huge lists (over 1,000,000 items)?

Which is best to minimize memory use?

Which is best to minimize loading time to add extra items on the end?

Which is best to minimize access time for accessing the entire list from first to last?

On this basis (or any others), which data structure would be preferable?

For reference, I am using Delphi 2009.


Dimitry in a comment said:

Describe your task and data access pattern, then it will be possible to give you an exact answer

Okay. I've got a genealogy program with lots of data.

For each person I have a number of events and attributes. I am storing them as short text strings but there are many of them for each person, ranging from 0 to a few hundred. And I've got thousands of people. I don't need random access to them. I only need them associated as a number of strings in a known order attached to each person. This is my case of thousands of "small lists". They take time to load and use memory, and take time to access if I need them all (e.g. to export the entire generated report).

Then I have a few larger lists, e.g. all the names of the sections of my "virtual" treeview, which can have hundreds of thousands of names. Again I only need a list that I can access by index. These are stored separately from the treeview for efficiency, and the treeview retrieves them only as needed. This takes a while to load and is very expensive memory-wise for my program. But I don't have to worry about access time, because only a few are accessed at a time.

Hopefully this gives you an idea of what I'm trying to accomplish.

p.s. I've posted a lot of questions about optimizing Delphi here at StackOverflow. My program reads 25 MB files with 100,000 people and creates data structures and a report and treeview for them in 8 seconds but uses 175 MB of RAM to do so. I'm working to reduce that because I'm aiming to load files with several million people in 32-bit Windows.


I've just found some excellent suggestions for optimizing a TList at this StackOverflow question: Is there a faster TList implementation?

Tachycardia answered 21/4, 2010 at 5:51 Comment(3)
There are two more options: TList<string> and TStringBuilder.Wentletrap
A hashlist really doesn't fit here, because we're looking at just a list of strings that don't need to be accessed by key.Tachycardia
TStringBuilder is really only for concatenating strings - not for a list of strings.Tachycardia

Unless you have special needs, a TStringList is hard to beat because it provides the TStrings interface that many components can use directly. With TStringList.Sorted := True, binary search is used, which makes searching very quick. You also get object mapping for free: each item can be associated with an object or pointer. And you get all the existing methods for marshalling: stream interfaces, comma-text, delimited-text, and so on.
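
For illustration, a minimal sketch of that usage (hypothetical data; the Find call relies on Sorted being True):

    uses
      Classes;

    var
      List: TStringList;
      Idx: Integer;
    begin
      List := TStringList.Create;
      try
        List.Sorted := True;                 // keeps the list ordered; enables binary search
        List.Duplicates := dupIgnore;        // silently drop duplicate inserts
        List.Add('John');
        List.Add('Mary');
        List.AddObject('Anna', TObject(42)); // each item can carry an object/pointer
        if List.Find('Mary', Idx) then       // O(log n) binary search
          Writeln('Found at index ', Idx);
        List.SaveToFile('names.txt');        // marshalling comes for free
      finally
        List.Free;
      end;
    end.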

On the other hand, for special-needs purposes, if you need to do many inserts and deletions, then something closer to a linked list would be better. But then searching becomes slower, and it is a rare collection of strings indeed that never needs searching. In such situations, some type of hash is often used: a hash index is created from, say, the first 2 bytes of a string (preallocate an array of length 65536, and convert the first 2 bytes of the string directly into an index within that range), and at that hash location a linked list is stored, with each item's key consisting of the remaining bytes of the string (the hash index already encodes the first two bytes, which saves space). The initial hash lookup is then O(1), and the subsequent insertions and deletions are linked-list fast. This is a trade-off that can be tuned, and the levers should be clear.
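
A minimal sketch of that scheme (hypothetical names; it buckets on the first two characters rather than raw bytes, and assumes every key is at least two characters long):

    type
      PHashNode = ^THashNode;
      THashNode = record
        Rest: string;    // the key minus its first two characters
        Next: PHashNode;
      end;

    var
      Buckets: array[0..65535] of PHashNode; // one slot per 2-character prefix

    function BucketIndex(const S: string): Integer;
    begin
      // Combine the low bytes of the first two characters into a 16-bit index.
      Result := ((Ord(S[1]) and $FF) shl 8) or (Ord(S[2]) and $FF);
    end;

    procedure InsertKey(const S: string);
    var
      Node: PHashNode;
      Idx: Integer;
    begin
      Idx := BucketIndex(S);
      New(Node);
      Node^.Rest := Copy(S, 3, MaxInt); // store only the tail to save space
      Node^.Next := Buckets[Idx];       // prepend: O(1) insertion
      Buckets[Idx] := Node;
    end;

    function ContainsKey(const S: string): Boolean;
    var
      Node: PHashNode;
      Rest: string;
    begin
      Result := False;
      Rest := Copy(S, 3, MaxInt);
      Node := Buckets[BucketIndex(S)];  // O(1) jump to the right bucket
      while Node <> nil do
      begin
        if Node^.Rest = Rest then
          Exit(True);                   // then a short chain walk
        Node := Node^.Next;
      end;
    end;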

Souvaine answered 21/4, 2010 at 6:42 Comment(0)
  1. A TStringList. Pros: extended functionality; it can grow dynamically, sort, save, load, search, etc. Cons: with a large number of accesses to items by index, Strings[Index] introduces a noticeable performance loss (a few percent) compared to array access, plus memory overhead for each item cell.

  2. A Dynamic Array of strings. Pros: combines the ability to grow dynamically, like a TStrings, with the fastest access by index and the lowest memory usage of these options. Cons: limited standard "string list" functionality.

  3. A Linked List of strings (singly linked). Pros: appending an item at the end of the list is a fast, constant-time operation. Cons: slowest access by index and slowest searching, limited standard "string list" functionality, memory overhead for the "next item" pointer, and speed overhead for each item's memory allocation.

  4. TList<string>. Same characteristics as the dynamic array of strings, since it is implemented on top of one.

  5. TStringBuilder. I do not see a good way to use TStringBuilder as storage for multiple strings.

Actually, there are many more approaches:

  • linked list of dynamic arrays
  • hash tables
  • databases
  • binary trees
  • etc

The best approach will depend on the task.

Which is best for small lists (under 10 items)?

Any of them; maybe even a static array with a variable holding the total item count.
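
For example, a trivial sketch of that static-array option (hypothetical names):

    var
      Items: array[0..9] of string; // fixed capacity for a known-small list
      ItemCount: Integer = 0;       // number of slots actually in use

    procedure Append(const S: string);
    begin
      // No bounds check here; a real version would guard ItemCount <= High(Items).
      Items[ItemCount] := S;
      Inc(ItemCount);
    end;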

Which is best for large lists (over 1000 items)? Which is best for huge lists (over 1,000,000 items)?

For large lists I would choose:

  • a dynamic array, if I need a lot of access by index or searches for a specific item
  • a hash table, if I need to search by key
  • a linked list of dynamic arrays, if I need many item appends and no access by index

Which is best to minimize memory use?

A dynamic array will use the least memory. But the question is not about the overhead itself; it is about at what number of items that overhead becomes significant, and then how to handle that number of items properly.

Which is best to minimize loading time to add extra items on the end?

A dynamic array can grow dynamically, but with a really large number of items the memory manager may not find a contiguous memory area. A linked list will keep working as long as there is memory for at least one more cell, but at the cost of a memory allocation for each item. The mixed approach, a linked list of dynamic arrays, should work.
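
A minimal sketch of that mixed approach (hypothetical names; fixed chunk size, appends only):

    const
      ChunkSize = 1024;

    type
      PChunk = ^TChunk;
      TChunk = record
        Items: array of string; // one dynamic array per chunk
        Count: Integer;
        Next: PChunk;
      end;

    var
      Head, Tail: PChunk;

    procedure Append(const S: string);
    var
      C: PChunk;
    begin
      if (Tail = nil) or (Tail^.Count = ChunkSize) then
      begin
        New(C); // New initializes the managed Items field to nil
        SetLength(C^.Items, ChunkSize);
        C^.Count := 0;
        C^.Next := nil;
        if Tail = nil then Head := C else Tail^.Next := C;
        Tail := C;
      end;
      Tail^.Items[Tail^.Count] := S;
      Inc(Tail^.Count);
    end;

No single huge contiguous block is ever requested from the memory manager, yet there is only one allocation per ChunkSize appends instead of one per item.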

Which is best to minimize access time for accessing the entire list from first to last?

A dynamic array.

On this basis (or any others), which data structure would be preferable?

For which task?

Jellybean answered 21/4, 2010 at 6:54 Comment(2)
You may think that small lists are not worthwhile to optimize, but what if I have 10,000 different small lists? Their total performance will be significant, so I should use the best data structure for them all.Tachycardia
No, I don't think so. What I really think is that we are talking about a "cubic horse in a vacuum" <g>. Describe your task and data access pattern, then it will be possible to give you an exact answer ...Jellybean

If your stated goal is to improve your program to the point that it can load genealogy files with millions of persons in it, then deciding between the four data structures in your question isn't really going to get you there.

Do the math: you are currently loading a 25 MB file with about 100,000 persons in it, which causes your application to consume 175 MB of memory. If you wish to load files with several million persons in them, you can estimate that, without drastic changes to your program, your memory needs will grow by a factor of ten or more as well. There's no way to do that in a 32-bit process while keeping everything in memory the way you currently do.

You basically have two options:

  1. Not keeping everything in memory at once, instead using a database, or a file-based solution which you load data from when you need it. I remember you had other questions about this already, and probably decided against it, so I'll leave it at that.

  2. Keep everything in memory, but in the most space-efficient way possible. As long as there is no 64-bit Delphi, this should allow for a few million persons, depending on how much data there is for each person. Recompiling for 64 bit will do away with that limit as well.

If you go for the second option then you need to minimize memory consumption much more aggressively:

  • Use string interning. Every loaded data element in your program that contains the same data but is held in a different string instance is basically wasted memory. I understand that your program is a viewer, not an editor, so you can probably get away with only ever adding strings to your pool of interned strings. Doing string interning with millions of strings is still difficult; the "Optimizing Memory Consumption with String Pools" blog postings on the SmartInspect blog may give you some good ideas. These guys deal regularly with huge data files and had to make it work under the same constraints you are facing. (A sketch of such a pool follows this list.)
    This should also connect this answer to your question - if you use string interning you would not need to keep lists of strings in your data structures, but lists of string pool indexes.
    It may also be beneficial to use multiple string pools, like one for names, but a different one for locations like cities or countries. This should speed up insertion into the pools.

  • Use the string encoding that gives the smallest in-memory representation. Storing everything as a native Windows Unicode string will probably consume much more space than storing strings in UTF-8, unless you deal regularly with strings that contain mostly characters which need three or more bytes in the UTF-8 encoding.
    Due to the necessary character set conversion your program will need more CPU cycles for displaying strings, but with that amount of data it's a worthy trade-off, as memory access will be the bottleneck, and smaller data size helps with decreasing memory access load.
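
A minimal sketch combining both ideas: a hypothetical TStringPool (not the SmartInspect implementation) that stores each distinct string once, as UTF-8, and hands out integer indexes:

    uses
      SysUtils, Generics.Collections;

    type
      TStringPool = class
      private
        FItems: TList<UTF8String>;                // index -> interned string
        FIndex: TDictionary<UTF8String, Integer>; // interned string -> index
      public
        constructor Create;
        destructor Destroy; override;
        function Intern(const S: string): Integer;
        function Get(Index: Integer): string;
      end;

    constructor TStringPool.Create;
    begin
      inherited Create;
      FItems := TList<UTF8String>.Create;
      FIndex := TDictionary<UTF8String, Integer>.Create;
    end;

    destructor TStringPool.Destroy;
    begin
      FIndex.Free;
      FItems.Free;
      inherited;
    end;

    function TStringPool.Intern(const S: string): Integer;
    var
      U: UTF8String;
    begin
      U := UTF8Encode(S); // store the compact UTF-8 form, not UTF-16
      if not FIndex.TryGetValue(U, Result) then
      begin
        Result := FItems.Add(U); // first occurrence: store it once
        FIndex.Add(U, Result);
      end;
    end;

    function TStringPool.Get(Index: Integer): string;
    begin
      Result := UTF8ToString(FItems[Index]); // convert back only for display
    end;

Data structures then hold arrays of Integer indexes instead of strings, so a name that occurs 10,000 times costs one UTF-8 copy plus 10,000 four-byte indexes.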

Remediless answered 22/4, 2010 at 7:20 Comment(5)
Thank you @Remediless for your thoughtful answer. Yes, I do plan to use a database (I have asked questions re that here at SO) and I will implement that before I turn my program from a viewer to an editor. All my optimization is with the editor concept in mind. The string pool reference you give is excellent, and I hadn't seen it before, but the database implementation will likely reduce my need for this. Storing text as UTF8 is an option I'll leave for last, because it biases performance away from international users. I am doing many things to reduce the 175 MB, and this question is one of them.Tachycardia
@lkessler: How would performance be worse for international users? I understand that ancestor names may (depending on their origin) have all kinds of non-ASCII characters in them, but according to en.wikipedia.org/wiki/UTF-8 you have 2048 characters that are equal in size or smaller than equivalent wide chars, and these include Greek, Cyrillic, Hebrew and Arabic, so it should cover quite a lot already. And since strings carry their encoding with them you could even have different encodings in the same pool.Remediless
@mghie: "Performance" as defined by memory use and associated allocation and deallocation will be worse because their text will take more memory. 1 byte per character for Western script, 2 bytes for Greek, Cyrillic, etc. 3 or more bytes for Chinese etc. Whereas straight Unicode is 2 bytes for everyone.Tachycardia
@lkessler: Of course, you have to decide whether most of the data for most of the users is indeed in CJK scripts or Klingon ;-)Remediless
Apparently the nobugleftbehind blog no longer exists. The article on "Optimizing Memory Consumption with String Pools" can still be found in the Internet Archive at web.archive.org/web/20090220130903/http://nobugleftbehind.com/…Chevet

TStringList stores an array of (string, TObject) records.

TList stores an array of pointers.

TStringBuilder cannot store a collection of strings. It is similar to .NET's StringBuilder and should only be used to concatenate (many) strings.

Resizing dynamic arrays is slow, so do not even consider it as an option.

I would use Delphi's generic TList<string> in all your scenarios. It stores an array of strings (not string pointers). It should have faster access in all cases due to no (un)boxing.
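
For example, a minimal sketch of that usage (hypothetical data):

    uses
      Generics.Collections;

    var
      Names: TList<string>;
      I: Integer;
    begin
      Names := TList<string>.Create;
      try
        Names.Capacity := 1000;      // pre-size when the count is known, avoiding regrows
        Names.Add('John Smith');
        Names.Add('Mary Jones');
        for I := 0 to Names.Count - 1 do
          Writeln(Names[I]);         // indexed access into the backing dynamic array
      finally
        Names.Free;
      end;
    end.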

You may be able to find or implement a slightly better linked-list solution if you only want sequential access. See Delphi Algorithms and Data Structures.

Delphi promotes its TList and TList<>. The internal array implementation is highly optimized, and I have never experienced performance/memory issues when using it. See Efficiency of TList and TStringList.

Lemuellemuela answered 21/4, 2010 at 6:33 Comment(3)
TStringList is largely equivalent to TList<string>, except that TStringList can also store an object reference for each entry. TList<string> and TStringList both store strings in arrays of string, which is implemented as a reference-counted pointer into the heap. There shouldn't be significant performance difference.Censorious
I believe Marcu Cantu wrote in his Delphi 2009 book about TList<string> that it uses dynamic arrays. If that's the case, I don't know how you can say that dynamic arrays are slow but TList is faster.Tachycardia
@lkessler: You are right. I looked at the TList<T> implementation and it uses dynamic arrays. A common bottleneck when manually managing dynamic arrays is the resizing strategy: expanding the array size by 1 for each insert is slow, but TList<T> gets around this by doubling the dynamic array size whenever the limit is reached. I am not sure which is faster between TList and TList<T>... I would write a quick benchmark of both for speed and memory usage on large amounts of data.Lemuellemuela

One question: How do you query: do you match the strings or query on an ID or position in the list?

Best for small # strings:

Whatever makes your program easy to understand. Program readability is very important, and you should sacrifice it for speed only in the real hotspots of your application.

Best for memory (if that is the largest constrained) and load times:

Keep all strings in a single memory buffer (or memory-mapped file) and keep only pointers to the strings (or offsets). Whenever you need a string, you can clip it out of the buffer using two pointers and return it as a Delphi string. This way you avoid the overhead of the string structure itself (refcount, length int, codepage int) and the memory manager structures for each string allocation. (See the sketch below.)

This only works fine if the strings are static and don't change.
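
A minimal sketch of that trick (hypothetical names; assumes the strings are loaded once and never change):

    type
      TPackedStrings = record
        Buffer: string;           // all items' characters, back to back
        Starts: array of Integer; // Starts[i] = 1-based offset of item i;
                                  // one extra trailing entry marks the end
      end;

    function GetItem(const P: TPackedStrings; Index: Integer): string;
    begin
      // Clip the item out of the shared buffer. Only here is a real Delphi
      // string (with its refcount/length/codepage header) materialized.
      Result := Copy(P.Buffer, P.Starts[Index],
        P.Starts[Index + 1] - P.Starts[Index]);
    end;

The per-item cost while stored is just one Integer, and loading is a single read of the buffer plus the offset table.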

TList, TList<>, array of string and the solution above all have a "list" overhead of one pointer per string. A linked list has an overhead of at least 2 pointers (singly linked) or 3 pointers (doubly linked). The linked-list solution does not have fast random access, but it allows O(1) growth, where the other options need O(lg N) reallocations (when growing by a factor) or O(N) reallocations (when growing by a fixed increment).

What I would do:

If there are fewer than 1000 items and performance is not of utmost importance: use a TStringList or a dynamic array, whichever is easiest for you. Otherwise, if the data is static: use the trick above. It will give you O(lg N) query time, the least memory used, and very fast load times (just gulp the buffer in, or use a memory-mapped file).

All the structures mentioned in your question will fail when using large amounts of data (1M+ strings) that need to be changed dynamically in code. At that point I would use a balanced binary tree or a hash table, depending on the type of queries I need to make.

Mossy answered 21/4, 2010 at 6:46 Comment(1)
Answer to question: No matching needed. Think of the strings as, say, lines of text that represent items of information, ordered from first item to last item. When you are processing some object, it may refer to this list of strings as information in some way about the object. I've actually got a genealogy program, and some of these lists are lists of children's names, name variations, or even the entire name index itself presorted and entered into a list.Tachycardia

From your description I'm not entirely sure whether it would fit in your design, but one way you could improve memory usage without a huge performance penalty is to use a trie.

Advantages relative to binary search tree

The following are the main advantages of tries over binary search trees (BSTs):

  • Looking up keys is faster. Looking up a key of length m takes worst case O(m) time. A BST performs O(log(n)) comparisons of keys, where n is the number of elements in the tree, because lookups depend on the depth of the tree, which is logarithmic in the number of keys if the tree is balanced. Hence in the worst case, a BST takes O(m log n) time. Moreover, in the worst case log(n) will approach m. Also, the simple operations tries use during lookup, such as array indexing using a character, are fast on real machines.

  • Tries can require less space when they contain a large number of short strings, because the keys are not stored explicitly and nodes are shared between keys with common initial subsequences.

  • Tries facilitate longest-prefix matching: finding the key that shares the longest possible prefix with a given string.
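
A minimal sketch of the O(m) lookup (hypothetical names; assumes lowercase ASCII keys):

    uses
      SysUtils;

    type
      PTrieNode = ^TTrieNode;
      TTrieNode = record
        Children: array['a'..'z'] of PTrieNode; // one child slot per character
        IsKey: Boolean;                         // True if a key ends at this node
      end;

    function Lookup(Root: PTrieNode; const Key: string): Boolean;
    var
      I: Integer;
    begin
      Result := False;
      // One step per character: O(m), independent of how many keys are stored.
      for I := 1 to Length(Key) do
      begin
        if (Root = nil) or not CharInSet(Key[I], ['a'..'z']) then
          Exit;
        Root := Root^.Children[AnsiChar(Key[I])];
      end;
      Result := (Root <> nil) and Root^.IsKey;
    end;
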
Planetary answered 22/4, 2010 at 14:13 Comment(1)
Yes. Actually @Lieven, I have been thinking about taking my B* trees, which store the data structures for which I need both sorted order and retrieval by key, and converting them to tries. This question is about the string lists that I need to keep in sequence but that don't need retrieval by key, which is what a trie (and a tree, and a hash table) is for.Tachycardia

Possible alternative:

I've recently discovered SynBigTable (http://blog.synopse.info/post/2010/03/16/Synopse-Big-Table), which has a TSynBigTableString class for storing large amounts of data using a string index.

It is a very simple, single-layer bigtable implementation, and since it mainly uses disk storage it consumes a lot less memory than expected when storing hundreds of thousands of records.

As simple as:

    aId := UTF8String(Format('%s.%s', [name, surname]));
    bigtable.Add(data, aId);

and

    bigtable.Get(aId, data);

One catch: indexes must be unique, and the cost of an update is a bit high (first delete, then re-insert).

Meek answered 11/6, 2010 at 7:57 Comment(0)
