What's the rationale for null terminated strings?

QUESTION:

As much as I love C and C++, I can’t help but scratch my head at the choice of null terminated strings:

  • Length prefixed (e.g. Pascal) strings existed before C
  • Length prefixed strings make several algorithms faster by allowing constant time length lookup.
  • Length prefixed strings make it more difficult to cause buffer overrun errors.
  • Even on a 32 bit machine, if you allow the string to be the size of available memory, a length prefixed string is only three bytes wider than a null terminated string. On 16 bit machines this is a single byte. On 64 bit machines, 4GB is a reasonable string length limit, but even if you want to expand it to the size of the machine word, 64 bit machines usually have ample memory making the extra seven bytes sort of a null argument. I know the original C standard was written for insanely poor machines (in terms of memory), but the efficiency argument doesn’t sell me here.
  • Pretty much every other language (e.g. Perl, Pascal, Python, Java, C#, etc.) uses length prefixed strings. These languages usually beat C in string manipulation benchmarks because they are more efficient with strings.
  • C++ rectified this a bit with the std::basic_string template, but plain character arrays expecting null terminated strings are still pervasive. This is also imperfect because it requires heap allocation.
  • Null terminated strings have to reserve a character (namely, null), which cannot exist in the string, while length prefixed strings can contain embedded nulls.

Several of these things have come to light more recently than C, so it would make sense for C not to have known of them. However, several were plainly evident well before C came to be. Why would null terminated strings have been chosen instead of the obviously superior length prefixing?

EDIT: Since some asked for facts (and didn’t like the ones I already provided) on my efficiency point above, they stem from a few things:

  • Concat using null terminated strings requires O(n + m) time complexity. Length prefixing often requires only O(m).
  • Length using null terminated strings requires O(n) time complexity. Length prefixing is O(1).
  • Length and concat are by far the most common string operations. There are several cases where null terminated strings can be more efficient, but these occur much less often.

From answers below, these are some cases where null terminated strings are more efficient:

  • When you need to cut off the start of a string and need to pass it to some method. You can’t really do this in constant time with length prefixing even if you are allowed to destroy the original string, because the length prefix probably needs to follow alignment rules.
  • In some cases where you’re just looping through the string character by character you might be able to save a CPU register. Note that this works only if you haven’t dynamically allocated the string (because then you’d have to free it, necessitating using that saved CPU register to hold the pointer you originally got from malloc and friends).

None of the above are nearly as common as length and concat.

There’s one more asserted in the answers below:

  • You need to cut off the end of the string

but this one is incorrect — it’s the same amount of time for null terminated and length prefixed strings. (Null terminated strings just stick a null where you want the new end to be; length prefixed strings just subtract from the prefix.)

ANSWER:

C doesn’t have a string as part of the language. A ‘string’ in C is just a pointer to char. So maybe you’re asking the wrong question.

“What’s the rationale for leaving out a string type?” might be more relevant. To that I would point out that C is not an object oriented language and only has basic value types. A string is a higher level concept that has to be implemented by combining values of other types in some way. C is at a lower level of abstraction.

In light of the raging squall below:

I just want to point out that I’m not trying to say this is a stupid or bad question, or that the C way of representing strings is the best choice. I’m trying to clarify that the question would be more succinctly put if you take into account the fact that C has no mechanism for differentiating a string as a datatype from a byte array. Is this the best choice in light of the processing and memory power of today’s computers? Probably not. But hindsight is always 20/20 and all that 🙂

ANSWER:

From the horse’s mouth

None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules by a few conventions. In both BCPL and B a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count and strings are terminated by a special character, which B spelled *e. This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator.

Dennis M. Ritchie, The Development of the C Language

ANSWER:

I think it has historical reasons; I found this in Wikipedia:

At the time C (and the languages that it was derived from) were developed, memory was extremely limited, so using only one byte of overhead to store the length of a string was attractive. The only popular alternative at that time, usually called a “Pascal string” (though also used by early versions of BASIC), used a leading byte to store the length of the string. This allows the string to contain NUL and made finding the length need only one memory access (O(1) (constant) time). But one byte limits the length to 255. This length limitation was far more restrictive than the problems with the C string, so the C string in general won out.

ANSWER:

The question is framed as a Length Prefixed Strings (LPS) vs. zero terminated strings (SZ) thing, but it mostly exposes the benefits of length prefixed strings. That may seem overwhelming, but to be fair we should also consider the drawbacks of LPS and the advantages of SZ.

As I understand it, the question may even be read as a biased way to ask “what are the advantages of zero terminated strings?”.

Advantages (I see) of Zero Terminated Strings:

  • very simple, no need to introduce a new concept in the language; char
    arrays/char pointers can do the job.
  • the core language includes only minimal syntactic sugar to convert
    something between double quotes into a bunch of chars (really a bunch
    of bytes). In some cases it can be used to initialize things completely
    unrelated to text. For instance the XPM image file format is valid C
    source that contains image data encoded as a string.
  • by the way, you can put a zero in a string literal, the compiler will
    just also add another one at the end of the literal: "this\0is\0valid\0C".
    Is it a string? Or four strings? Or a bunch of bytes…
  • flat implementation, no hidden indirection, no hidden integer.
  • no hidden memory allocation involved (well, some infamous non-standard
    functions like strdup perform allocation, but that’s mostly a source of
    problems).
  • no specific issue for small or large hardware (imagine the burden of
    managing a 32-bit prefix length on 8-bit microcontrollers, or the
    restriction of limiting string size to less than 256 bytes; that was a
    problem I actually had with Turbo Pascal eons ago).
  • implementation of string manipulation is just a handful of very simple
    library functions.
  • efficient for the main use of strings: constant text read sequentially
    from a known start (mostly messages to the user).
  • the terminating zero is not even mandatory; all the tools necessary to
    manipulate chars like a bunch of bytes are available. When performing
    array initialisation in C, you can even avoid the NUL terminator. Just
    set the right size: char a[3] = "foo"; is valid C (not C++) and won’t
    put a final zero in a.
  • coherent with the Unix point of view that “everything is a file”,
    including “files” that have no intrinsic length like stdin and stdout.
    You should remember that the open, read and write primitives are
    implemented at a very low level. They are not library calls but system
    calls, and the same API is used for binary and text files. File reading
    primitives get a buffer address and a size and return the new size. And
    you can use strings as the buffer to write. Using another kind of
    string representation would imply that you can’t easily use a literal
    string as the buffer to output, or you would have to give it very
    strange behavior when casting it to char*: namely, not returning the
    address of the string, but instead the actual data.
  • very easy to manipulate text data read from a file in place, without
    useless copying of buffers; just insert zeroes at the right places
    (well, not really with modern C, as double quoted strings are const
    char arrays, nowadays usually kept in a non-modifiable data segment).
  • prepending an int value of whatever size would imply alignment issues.
    The initial length should be aligned, but there is no reason to do that
    for the character data (and again, forcing alignment of strings would
    imply problems when treating them as a bunch of bytes).
  • length is known at compile time for constant literal strings (sizeof).
    So why would anyone want to store it in memory, prepended to the actual
    data?
  • in a way C is doing as (nearly) everyone else: strings are viewed as
    arrays of char. As array length is not managed by C, it is logical that
    length is not managed for strings either. The only surprising thing is
    the 0 item added at the end, but that happens only at the core language
    level, when typing a string between double quotes. Users can perfectly
    well call string manipulation functions passing the length, or even use
    plain memcpy instead. SZ is just a facility. In most other languages
    array length is managed; it’s logical that the same goes for strings.
  • in modern times one-byte character sets are not enough anyway, and you
    often have to deal with encoded Unicode strings where the number of
    characters is very different from the number of bytes. It implies that
    users will probably want more than “just the size”, but also other
    information. Keeping the length gives us nothing (in particular, no
    natural place to store them) regarding these other useful pieces of
    information.

That said, there is no need to complain in the rare cases where standard C strings are indeed inefficient. Libraries are available. If I followed that trend, I should complain that standard C does not include any regex support functions… but really everybody knows it’s not a real problem, as there are libraries available for that purpose. So when string manipulation efficiency is wanted, why not use a library like bstring? Or even C++ strings?

EDIT: I recently had a look at D strings. It is interesting to see that the solution chosen is neither a size prefix nor zero termination. As in C, literal strings enclosed in double quotes are just shorthand for immutable char arrays, and the language also has a string keyword meaning exactly that (immutable char array).

But D arrays are much richer than C arrays. In the case of static arrays the length is known at compile time, so there is no need to store it. In the case of dynamic arrays, the length is available, but the D documentation does not state where it is kept. For all we know, the compiler could choose to keep it in some register, or in some variable stored far away from the character data.

On normal char arrays or non-literal strings there is no final zero, hence the programmer has to put it there himself if he wants to call some C function from D. In the particular case of literal strings, however, the D compiler still puts a zero at the end of each string (to allow easy casting to C strings, making it easier to call C functions?), but this zero is not part of the string (D does not count it in the string size).

The only thing that disappointed me somewhat is that strings are supposed to be UTF-8, but length apparently still returns a number of bytes (at least it’s true with my compiler, gdc) even when using multi-byte chars. It is unclear to me whether that’s a compiler bug or on purpose. (OK, I have probably found out what happened: to tell the D compiler your source uses UTF-8 you have to put some stupid byte order mark at the beginning. I write stupid because I know of no editor doing that, especially for UTF-8, which is supposed to be ASCII compatible.)