Why is “while( !feof(file) )” always wrong?
677

What is wrong with using feof() to control a read loop? For example:

#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    char *path = "stdin";
    FILE *fp = argc > 1 ? fopen(path=argv[1], "r") : stdin;

    if( fp == NULL ){
        perror(path);
        return EXIT_FAILURE;
    }

    while( !feof(fp) ){  /* THIS IS WRONG */
        /* Read and process data from file… */
    }
    if( fclose(fp) != 0 ){
        perror(path);
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

What is wrong with this loop?

Postliminy answered 25/3, 2011 at 11:42 Comment(4)
Why it's bad to use feof() to control a loopDredge
Why is iostream::eof inside a loop condition considered wrong?Iorio
However it should be noted that do { ... } while (!feof(...)) is perfectly valid as long as the block contains a stdio read of some kind.Smelser
@Smelser Not quite. do { ... } while (!feof(...)); is an infinite loop if there is a read error. It must be do {} while(!feof() && !ferror()); It is much cleaner to use the standard idiom.Postliminy
548

TL;DR

while(!feof(file)) is wrong because it tests for something that is irrelevant and fails to test for something that you need to know. The result is that you are erroneously executing code that assumes that it is accessing data that was read successfully, when in fact this never happened.

I'd like to provide an abstract, high-level perspective. So continue reading if you're interested in what while(!feof(file)) actually does.

Concurrency and simultaneity

I/O operations interact with the environment. The environment is not part of your program, and not under your control. The environment truly exists "concurrently" with your program. As with all things concurrent, questions about the "current state" don't make sense: There is no concept of "simultaneity" across concurrent events. Many properties of state simply don't exist concurrently.

Let me make this more precise: Suppose you want to ask, "do you have more data". You could ask this of a concurrent container, or of your I/O system. But the answer is generally unactionable, and thus meaningless. So what if the container says "yes" – by the time you try reading, it may no longer have data. Similarly, if the answer is "no", by the time you try reading, data may have arrived. The conclusion is that there simply is no property like "I have data", since you cannot act meaningfully in response to any possible answer. (The situation is slightly better with buffered input, where you might conceivably get a "yes, I have data" that constitutes some kind of guarantee, but you would still have to be able to deal with the opposite case. And with output the situation is certainly just as bad as I described: you never know if that disk or that network buffer is full.)

So we conclude that it is impossible, and in fact unreasonable, to ask an I/O system whether it will be able to perform an I/O operation. The only possible way we can interact with it (just as with a concurrent container) is to attempt the operation and check whether it succeeded or failed. At that moment where you interact with the environment, then and only then can you know whether the interaction was actually possible, and at that point you must commit to performing the interaction. (This is a "synchronisation point", if you will.)

EOF

Now we get to EOF. EOF is the response you get from an attempted I/O operation. It means that you were trying to read or write something, but when doing so you failed to read or write any data, and instead the end of the input or output was encountered. This is true for essentially all the I/O APIs, whether it be the C standard library, C++ iostreams, or other libraries. As long as the I/O operations succeed, you simply cannot know whether further, future operations will succeed. You must always first try the operation and then respond to success or failure.

Examples

In each of the examples, note carefully that we first attempt the I/O operation and then consume the result if it is valid. Note further that we always must use the result of the I/O operation, though the result takes different shapes and forms in each example.

  • C stdio, read from a file:

      for (;;) {
          size_t n = fread(buf, 1, bufsize, infile);
          consume(buf, n);
          if (n == 0) { break; }
      }
    

    The result we must use is n, the number of elements that were read (which may be as little as zero).

  • C stdio, scanf:

      for (int a, b, c; scanf("%d %d %d", &a, &b, &c) == 3; ) {
          consume(a, b, c);
      }
    

    The result we must use is the return value of scanf, the number of elements converted.

  • C++, iostreams formatted extraction:

      for (int n; std::cin >> n; ) {
          consume(n);
      }
    

    The result we must use is std::cin itself: evaluated in a boolean context, it tells us whether the last extraction succeeded (that is, whether the stream is not in a failed state).

  • C++, iostreams getline:

      for (std::string line; std::getline(std::cin, line); ) {
          consume(line);
      }
    

    The result we must use is again std::cin, just as before.

  • POSIX, write(2) to flush a buffer:

      char const * p = buf;
      ssize_t n = bufsize;
      for (ssize_t k = bufsize; (k = write(fd, p, n)) > 0; p += k, n -= k) {}
      if (n != 0) { /* error, failed to write complete buffer */ }
    

    The result we use here is k, the number of bytes written. The point here is that we can only know how many bytes were written after the write operation.

  • POSIX getline()

      char *buffer = NULL;
      size_t bufsiz = 0;
      ssize_t nbytes;
      while ((nbytes = getline(&buffer, &bufsiz, fp)) != -1)
      {
          /* Use nbytes of data in buffer */
      }
      free(buffer);
    

    The result we must use is nbytes, the number of bytes read, up to and including the newline (or up to the end of the file if the last line did not end with a newline).

    Note that the function explicitly returns -1 (and not EOF!) when an error occurs or it reaches EOF.

You may notice that we very rarely spell out the actual word "EOF". We usually detect the error condition in some other way that is more immediately interesting to us (e.g. failure to perform as much I/O as we had desired). In every example there is some API feature that could tell us explicitly that the EOF state has been encountered, but this is in fact not a terribly useful piece of information. It is much more of a detail than we often care about. What matters is whether the I/O succeeded, more so than how it failed.

  • A final example that actually queries the EOF state: Suppose you have a string and want to test that it represents an integer in its entirety, with no extra bits at the end except whitespace. Using C++ iostreams, it goes like this:

      std::string input = "   123   ";   // example
    
      std::istringstream iss(input);
      int value;
      if (iss >> value >> std::ws && iss.get() == EOF) {
          consume(value);
      } else {
          // error, "input" is not parsable as an integer
      }
    

We use two results here. The first is iss, the stream object itself, to check that the formatted extraction to value succeeded. But then, after also consuming whitespace, we perform another I/O operation, iss.get(), and expect it to fail as EOF, which is the case if the entire string has already been consumed by the formatted extraction.

In the C standard library you can achieve something similar with the strto*l functions by checking that the end pointer has reached the end of the input string.
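A minimal sketch of that approach (illustrative only; the helper name and test string are not from the answer): strtol reports through its end pointer how far it parsed, and checking that only trailing whitespace remains plays the same role as the EOF probe above.

#include <ctype.h>
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns true and stores the value if "s" is a single integer surrounded
   only by optional whitespace. */
static bool parse_int(const char *s, long *out)
{
    char *end;
    errno = 0;
    long value = strtol(s, &end, 10);   /* strtol skips leading whitespace itself */
    if (end == s || errno == ERANGE)
        return false;                   /* no digits, or out of range */
    while (isspace((unsigned char)*end))
        ++end;                          /* skip trailing whitespace */
    if (*end != '\0')
        return false;                   /* extra bits at the end */
    *out = value;
    return true;
}

int main(void)
{
    long v;
    if (parse_int("   123   ", &v))
        printf("parsed %ld\n", v);
    else
        puts("not an integer");
    return 0;
}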

Callipygian answered 24/10, 2014 at 22:28 Comment(21)
@CiaPan: I don't think that's true. Both C99 and C11 allow this.Callipygian
@KerrekSB I'm using an ifstream and trying to test good as my loop condition. Can you help give me an example of why this is a bad idea: https://mcmap.net/q/41128/-example-of-why-stream-good-is-wrong/2642059Prophet
@JonathanMee: It's bad for all the reasons I mention: you cannot look into the future. You cannot tell what will happen in the future.Callipygian
@KerrekSB So I'm not expecting that good means my next read will work. But I am expecting that good means my previous read worked. (Or that the stream itself is readable.) In such a situation it's OK to use good?Prophet
@JonathanMee: Yes, that would be appropriate, though usually you can combine this check into the operation (since most iostreams operations return the stream object, which itself has a boolean conversion), and that way you make it obvious that you're not ignoring the return value.Callipygian
The std::cin and scanf examples happily treat bad input (something worth raising an error) as EOF (which just means leave the loop).Reconstruction
Isn't C++'s design flawed? Suppose you read some data - which succeeds, but before you're able to check the good flag, an asynchronous failing read occurs. Now you good returns false for you even though the read succeededMorbilli
@WorldSEnder: That sounds like a general problem with modifying shared state concurrently. If you're not the only one accessing some shared state, then you can never know what that state is "at the moment"; in fact, the very notion of "at the moment" stops being meaningful.Callipygian
I think the C example should test for feof() before exiting. If an error has occurred during fread() the read may be short even if EOF has not been met.Tardif
@MikkoRantalainen: That's true, but that's also true of every example -- none of them tell you the reason for the end of input, and all of them could be augmented with an additional check.Callipygian
@KerrekSB I understand. On the other hand, I would love the situation where every example provided as an answer did contain the best possible code that works correctly in corner cases, too and does not contain security vulnerabilities. These examples do not work correctly for the short read corner case as-is.Tardif
Third paragraph is remarkably misleading/inaccurate for an accepted and highly upvoted answer. feof() does not "ask the I/O system whether it has more data". feof(), according to the (Linux) manpage: "tests the end-of-file indicator for the stream pointed to by stream, returning nonzero if it is set." (also, an explicit call to clearerr() is the only way to reset this indicator); In this respect, William Pursell's answer is much better.Fore
@ArneVogel: Hm, the third paragraph doesn't talk about feof, but about the abstract concept of performing I/O and the inability to know ahead of time whether the operation will succeed. That part is independent of programming language and any particular API. Does that make sense?Callipygian
But why does Java allow a hasNext() method?Quicklime
@MinhNghĩa: That's a blocking method, right? That's basically just a convenience wrapper around "try to read (blocking if necessary), then report the success state, and if successful store the read result in a special buffer". You can implement the same in C and in C++ if you like.Callipygian
The only possible way we can interact with it (just as with a concurrent container) is to attempt the operation and check whether it succeeded or failed. Then what is select() for? select() and pselect() allow a program to monitor multiple file descriptors, waiting until one or more of the file descriptors become "ready" for some class of I/O operation (e.g., input possible). Maybe you mean something else but the point of select() is exactly that: to tell you when there's data that can be read etc. And yes I know this is about feof() but still the point is the same.Weigh
Note that the first example is wrong: short reads can occur with fread and do not necessarily indicate an EOF condition. Do not break on short read, break on zero-sized read.Exarate
Those first three paragraphs about simultaneous changes outside the program seem to be a bit beside the point. While it's technically true that the file can change while we're reading it, it doesn't really matter for how feof() should be used. feof() doesn't tell if we're at the EOF now, it tells if we've tried to read past it. Or, if an earlier call to fread() or whatever has returned short because of reaching EOF. It tells of a past event, and is exactly why while(!feof()) { fread(); assume_it_worked(); } is wrong, but it's nothing the outside world can affect.Convergent
OTOH, if we did something like while(ftell(f) < file_size) { fgets(); assume_it_worked(); }, then we would have a time-of-check-to-time-of-use problem. (Also, it would work in the vast majority of cases where no other process is changing the file in the meantime, at least for fgets() where the length of data read is implicit in the data itself, not so much for fread() where you need to check the retval to see how many elements you actually got.)Convergent
Minor nit -- you say getline returns -1 and not EOF on failure, but stdio.h defines EOF as -1. So getline returns both -1 and EOF on failure (they're the same).Picnic
@ChrisDodd So getline returns both -1 and EOF on failure (they're the same). They don't have to be the same. The POSIX definition of EOF: "The <stdio.h> header shall define the following macro which shall expand to an integer constant expression with type int and a negative value: EOF", is the same as the C definition: "EOF which expands to an integer constant expression, with type int and a negative value" -97 would be valid for EOF.Instar
282

It's wrong because (in the absence of a read error) it enters the loop one more time than the author expects. If there is a read error, the loop never terminates.

Consider the following code:

/* WARNING: demonstration of bad coding technique!! */

#include <stdio.h>
#include <stdlib.h>

FILE *Fopen(const char *path, const char *mode);

int
main(int argc, char **argv)
{
    FILE *in = argc > 1 ? Fopen(argv[1], "r") : stdin;
    unsigned count = 0;

    /* WARNING: this is a bug */
    while( !feof(in) ) {  /* This is WRONG! */
        fgetc(in);
        count++;
    }
    printf("Number of characters read: %u\n", count);
    return EXIT_SUCCESS;
}

FILE *
Fopen(const char *path, const char *mode)
{
    FILE *f = fopen(path, mode);
    if( f == NULL ) {
        perror(path);
        exit(EXIT_FAILURE);
    }
    return f;
}

This program will consistently print one greater than the number of characters in the input stream (assuming no read errors). Consider the case where the input stream is empty:

$ ./a.out < /dev/null
Number of characters read: 1

In this case, feof() is called before any data has been read, so it returns false. The loop is entered, fgetc() is called (and returns EOF), and count is incremented. Then feof() is called and returns true, causing the loop to abort.

This happens in all such cases. feof() does not return true until after a read on the stream encounters the end of file. The purpose of feof() is NOT to check if the next read will reach the end of file. The purpose of feof() is to determine the status of a previous read function and distinguish between an error condition and the end of the data stream. If fread() returns 0, you must use feof/ferror to decide whether an error occurred or if all of the data was consumed. Similarly if fgetc returns EOF. feof() is only useful after fread has returned zero or fgetc has returned EOF. Before that happens, feof() will always return 0.

It is always necessary to check the return value of a read (either an fread(), or an fscanf(), or an fgetc()) before calling feof().
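For instance, here is a minimal sketch (a plain stdin-to-stdout copy, not code from this answer) that reads with fread(): the return value drives the loop, and feof()/ferror() are consulted only after a short read, to find out why the data ran out.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    FILE *in = stdin;
    char buf[4096];
    size_t n;

    while( (n = fread(buf, 1, sizeof buf, in)) > 0 ){
        if( fwrite(buf, 1, n, stdout) != n ){
            perror("stdout");
            return EXIT_FAILURE;
        }
    }
    if( ferror(in) ){
        /* the read stopped because of an error */
        perror("stdin");
        return EXIT_FAILURE;
    }
    /* otherwise feof(in) is set: all of the input was consumed */
    return EXIT_SUCCESS;
}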

Even worse, consider the case where a read error occurs. In that case, fgetc() returns EOF, feof() returns false, and the loop never terminates. In all cases where while(!feof(p)) is used, there must be at least a check inside the loop for ferror(), or at the very least the while condition should be replaced with while(!feof(p) && !ferror(p)) or there is a very real possibility of an infinite loop, probably spewing all sorts of garbage as invalid data is being processed.

In summary, although I cannot state with certainty that there is never a situation in which it may be semantically correct to write "while(!feof(f))" (although there would have to be another check inside the loop, with a break, to avoid an infinite loop on a read error), it is the case that it is almost certainly always wrong. And even if a case ever arose where it would be correct, it is so idiomatically wrong that it would not be the right way to write the code. Anyone seeing that code should immediately hesitate and say, "that's a bug". And possibly slap the author (unless the author is your boss, in which case discretion is advised.)

EDIT: one way to write the code correctly, demonstrating correct usage of feof and ferror:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    FILE *in = stdin;
    unsigned count = 0;

    while( getc(in) != EOF ){
        count++;
    }
    if( feof(in) ){
        printf("Number of characters read: %u\n", count);
    } else if( ferror(in) ){
        perror("stdin");
    } else {
        assert(0);
    }
    return EXIT_SUCCESS;
}
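
Running this corrected version on the same empty input now reports zero characters, because getc() returns EOF before count is ever incremented:

$ ./a.out < /dev/null
Number of characters read: 0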
Postliminy answered 25/3, 2011 at 12:39 Comment(14)
You should add an example of correct code, as I imagine lots of people will come here looking for a quick fix.Tret
Is this different from file.eof()?Stander
@Thomas: I'm not a C++ expert, but I believe file.eof() returns effectively the same result as feof(file) || ferror(file), so it is very different. But this question is not intended to be applicable to C++.Postliminy
I suppose correct code could be: change while(!feof(in)) into do { ... } while (!feof(in)); and add a conditioned break to avoid infinite loop.Photosensitive
@Photosensitive that's not right either, because you'll still try to process a read that failed.Scrub
this is the actual correct answer. feof() is used to know the outcome of previous read attempt. Thus probably you don't want to use it as your loop break condition. +1Baculiform
In the middle of the answer we read "[...] fgetc() is called (and returns EOF), and count is incremented. Then feof() is called and returns true", and then a few lines below we read is "[...] fgetc() returns EOF, feof() returns false, and the loop never terminates". Why does feof() return different values in these two scenarios?Dextrose
@WilliamPursell I´d like to go along with luizfls comment and ask: - Why does feof() return true when fgetc() returned EOF at the first time but does not return true at the exact same situation for the second and will cause the loop to be infinite? This is contradictory. Please clarify that.A
@RobertSsupportsMonicaCellio It is not "the exact same situation" at all. In the second case, there is a read error on the stream. Because there is a read error, fgetc returns EOF. The subsequent call to feof returns false, because the end of file was not reached. Instead, fgetc returned EOF because there was an error.Postliminy
@WilliamPursell Ah yes, now it comes clearly to me. Thank you for the explanation. +1. Would be better if the C standard would provide another distinct macro value especially for encountering errors on reading, instead to use EOF. That was the source of my confusion.A
"Even worse, consider the case where a read error occurs. In that case, fgetc() returns EOF, feof() returns false, and the loop never terminates." is not supported by the C spec. fgetc() returns EOF when "end-of-file indicator for the stream is set, or if the stream is at end-of-file" or "If a read error occurs". "if error indicator indicator is set" is not in that list. The next read may work just fine.Simpkins
@chux-ReinstateMonica "If a read error occurs, the error indicator for the stream is set and getc returns EOF." (7.21.7.5(3) from web.archive.org/web/20181230041359if_/http://www.open-std.org/…)Postliminy
That point is not in question - a read error returns EOF and sets the error indicator.. Even with the error indicator set, a subsequent read is not obliged to return EOF. Thus the "the loop never terminates." is not supported by spec as end-of-file can still happen.Simpkins
@chux-ReinstateMonica A valid point, but (I suspect) the more common actual behavior is an erroneous infinite loop.Postliminy
78

No, it's not always wrong. If your loop condition is "while we haven't tried to read past end of file", then you use while (!feof(f)). This is, however, not a common loop condition: usually you want to test for something else (such as "can I read more?"). while (!feof(f)) isn't wrong; it's just used wrong.
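As a rough sketch of such a loop (an illustrative stdin-to-stdout copy, with the ferror() check from the comment below included): the condition expresses "we have not yet hit end of file or an error", but the result of every read is still checked before the data is used.

#include <stdio.h>

int main(void)
{
    char buf[4096];

    while (!feof(stdin) && !ferror(stdin)) {
        size_t n = fread(buf, 1, sizeof buf, stdin);
        if (n > 0) {
            /* only data that was actually read is used */
            fwrite(buf, 1, n, stdout);
        }
    }
    return ferror(stdin) ? 1 : 0;
}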

Carbon answered 25/3, 2011 at 11:49 Comment(4)
I wonder ... f = fopen("A:\\bigfile"); while (!feof(f)) { /* remove diskette */ } or (going to test this) f = fopen(NETWORK_FILE); while (!feof(f)) { /* unplug network cable */ }Guidance
@pmg: As said, "not a common loop condition" hehe. I can't really think of any case I've needed it, usually I'm interested in "could I read what I wanted" with all that implies of error handlingCarbon
@pmg: As said, you rarely want while(!eof(f))Carbon
More accurately, the condition is "while we haven't tried to read past the end of the file and there was no read error" feof is not about detecting end of file; it is about determining whether a read was short because of an error or because the input is exhausted.Postliminy
47

feof() indicates whether one has already tried to read past the end of file. That means it has little predictive value: if it is true, you are sure that the next input operation will fail (you aren't sure the previous one failed, by the way), but if it is false, you aren't sure the next input operation will succeed. Moreover, input operations may fail for reasons other than the end of file (a format error for formatted input, or a pure IO failure -- disk failure, network timeout -- for all kinds of input). So even if you could be predictive about the end of file (and anybody who has tried to implement Ada's end-of-file, which is predictive, will tell you it can be complex if you need to skip spaces, and that it has undesirable effects on interactive devices -- sometimes forcing the input of the next line before starting the handling of the previous one), you would still have to be able to handle a failure.

So the correct idiom in C is to loop with the IO operation success as loop condition, and then test the cause of the failure. For instance:

while (fgets(line, sizeof(line), file)) {
    /* note that fgets doesn't strip the terminating \n; checking for its
       presence allows handling lines longer than sizeof(line), not shown here */
    ...
}
if (ferror(file)) {
   /* IO failure */
} else if (feof(file)) {
   /* end of file (or, with fscanf, a conversion that was cut short by end of file) */
} else {
   /* format error (not possible with fgets, but would be with fscanf) */
}
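
A similar sketch with fscanf (illustrative, not part of the original code), where the last branch can actually be taken: a matching failure in the middle of the input leaves both feof() and ferror() unset.

#include <stdio.h>

int main(void)
{
    FILE *file = stdin;
    int value;

    while (fscanf(file, "%d", &value) == 1) {
        printf("%d\n", value);   /* use the successfully converted value */
    }
    if (ferror(file)) {
       /* IO failure */
    } else if (feof(file)) {
       /* end of file */
    } else {
       /* format error: the next characters do not form an integer */
    }
    return 0;
}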
Humfrid answered 10/2, 2012 at 10:22 Comment(6)
Getting to the end of a file is not an error, so I question the phrasing "input operations may fail for other reasons than the end of file".Postliminy
@WilliamPursell, reaching the eof isn't necessarily an error, but being unable to do an input operation because of eof is one. And it is impossible in C to detect reliably the eof without having made an input operation fails.Humfrid
Agree last else not possible with sizeof(line) >= 2 and fgets(line, sizeof(line), file) but possible with pathological size <= 0 and fgets(line, size, file). Maybe even possible with sizeof(line) == 1.Simpkins
All that "predictive value" talk... I never thought about it that way. In my world, feof(f) does not PREDICT anything. It states that a PREVIOUS operation has hit the end of the file. Nothing more, nothing less. And if there was no previous operation (just opened it), it does not report end of file even if the file was empty to start with. So, apart of the concurrency explanation in another answer above, I do not think there is any reason not to loop on feof(f).Equestrian
@AProgrammer: A "read up to N bytes" request that yields zero, whether because of a "permanent" EOF or because no more data is available yet, is not an error. While feof() may not reliably predict that future requests will yield data, it may reliably indicate that future requests won't. Perhaps there should be a status function that would indicate "It is plausible that future read requests will succeed", with semantics that after reading to the end of an ordinary file, a quality implementation should say future reads are unlikely to succeed absent some reason to believe they might.Jaramillo
@AProgrammer: The circumstances in which an implementation might treat end-of-file as a transitory condition would vary among implementations, but a typical reason might be that the the file was opened using "r" or "rb" mode while it was also open with "w" or "wb" mode.Jaramillo
2

The other answers to this question are very good, but rather long. If you just want the TL;DR, it's this:

feof(F) is poorly named. It does not mean "check whether F is at end of file now"; rather it tells you why the previous attempt failed to get any data from F.

The end-of-file state can easily change, because a file can grow or shrink, and a terminal reports EOF once each time you press ^D (in "cooked" mode, on an otherwise empty line).
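For example, a rough sketch (assuming a POSIX system and an input that may keep growing, in the spirit of tail -f): clearerr() resets the end-of-file indicator so that further reads can be attempted.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int c;
    for (;;) {
        while ((c = getchar()) != EOF)
            putchar(c);
        if (ferror(stdin))
            return 1;      /* a real read error: give up */
        clearerr(stdin);   /* forget the EOF; more data may arrive later */
        sleep(1);          /* crude polling, just to keep the sketch short */
    }
}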

Unless you actually care why the previous read failed to return any data, you're better off forgetting that the feof function exists.

Incest answered 20/10, 2023 at 0:40 Comment(0)
-2

feof() is not very intuitive. In my very humble opinion, the FILE's end-of-file state should be set to true if any read operation results in the end of file being reached. Instead, you have to manually check if the end of file has been reached after each read operation. For example, something like this will work if reading from a text file using fgetc():

#include <stdio.h>

int main(int argc, char *argv[])
{
  FILE *in = fopen("testfile.txt", "r");

  while(1) {
    char c = fgetc(in);
    if (feof(in)) break;
    printf("%c", c);
  }

  fclose(in);
  return 0;
}

It would be great if something like this would work instead:

#include <stdio.h>

int main(int argc, char *argv[])
{
  FILE *in = fopen("testfile.txt", "r");

  while(!feof(in)) {
    printf("%c", fgetc(in));
  }

  fclose(in);
  return 0;
}
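
For comparison, a sketch of the standard idiom discussed in the comments below: it never consults feof() inside the loop, and uses it only afterwards (via ferror()) to tell end of file from a read error. Note that c must be an int so that EOF is distinguishable from every character value.

#include <stdio.h>

int main(int argc, char *argv[])
{
  FILE *in = fopen("testfile.txt", "r");
  if (in == NULL) {
    perror("testfile.txt");
    return 1;
  }

  int c;   /* int, not char, so the EOF sentinel can be recognized */
  while ((c = fgetc(in)) != EOF) {
    printf("%c", c);
  }
  if (ferror(in)) {
    perror("testfile.txt");
  }

  fclose(in);
  return 0;
}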
Coltin answered 8/6, 2020 at 0:21 Comment(15)
printf("%c", fgetc(in));? That's undefined behavior. fgetc() returns int, not char.Instar
It seems to me that the standard idiom while( (c = getchar()) != EOF) is very much "something like this".Postliminy
while( (c = getchar()) != EOF) works on one of my desktop running GNU C 10.1.0, but fails on my Raspberry Pi 4 running GNU C 9.3.0. On my RPi4, it doesn't detect the end of file, and just keeps going.Coltin
@AndrewHenle You're right! Changing char c to int c works! Thanks!!Coltin
The first example does not work reliably when reading from a text file. If you ever encounter a read error, the process will be stuck in an infinite loop with c constantly being set to EOF and feof constantly returning false.Postliminy
@AndrewHenle "%c" expects a int that can be converted to a char. It will be promoted to char anyway even when you use a char, except when CHAR_MAX==UINT_MAX.Alaster
@Alaster Undefined behavior is undefined behavior: "If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined." Are you willing to go to your managers and salespeople and tell them, "Yeah, I rely on undefined behavior in my code, but trust me, it will never blow up. Even though it's literally impossible to predict where our code base will be ported to in the future." Are you willing to do that to everyone in your management chain?Instar
@AndrewHenle It is not UB, it is well defined as long as no error or EOF occurs. "%c" expects a int and fgetc() returns a int.Alaster
@Alaster What part of "If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined." is hard to understand?Instar
@AndrewHenle Which part of "%c" expects a int, and not a char, is hard to understand? Read the manpage or the C standard, any of them.Alaster
@AndrewHenle: It is not even possible pass a char argument to printf, because an argument of type char will get promoted to an int anyway.Liberalize
@AndrewHenle The C standard you are trying to use against us says this about %c (emphasis mine): "If no l length modifier is present, the int argument is converted to an unsigned char, and the resulting character is written."Ingrid
@Alaster It has well defined behavior even for EOF because it is defined to be converted to unsigned char and "if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type." and also "a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."Ingrid
@Ingrid Das gilt für explizite und implizierte konvertierung. variadic (...) hat nur die Standardpromotion. Damit gilt dein Zitat aus »6.3.1.3 Signed and unsigned integers« Abschnitt 2 nicht. Wohlgemerkt, es ist nur auf System Undefiniertes Verhalten bei denen CHAR_MAX==UINT_MAX gilt. Diese sind extrem selten, möglich das kein einziges reales System existiert bei dem dies der Fall ist. Denn dafür müsste CHAR_BITS>=16 und CHAR_MIN==0 sein, es gibt nur wenige systeme mit CHAR_BITS>=16 und die meisten davon werden wahrscheinlich CHAR_MIN<0 haben.Alaster
@Alaster It has nothing to do with variadic argument promotions, this mechanism is specific to *printf, it says about the %c specifier (emphasis mine): "If no l length modifier is present, the int argument is converted to an unsigned char, and the resulting character is written.", it explicitly says that it gets converted to an unsigned char before being written so it is guaranteed. And therefore yes my quote is still relevant.Ingrid
