Why do C++ standard file streams not follow RAII conventions more closely?
Why do C++ Standard Library streams use open()/close() semantics decoupled from object lifetime? Closing on destruction might still technically make the classes RAII, but decoupling acquisition and release from the object's lifetime leaves windows within a scope where a handle refers to nothing, and only run-time checks can catch accesses made there.

Why did the library designers choose their approach over having opening only in constructors that throw on a failure?

void foo() {
  std::ofstream ofs;
  ofs << "Can't do this!\n"; // XXX: not open yet, so this just sets failbit
  ofs.open("foo.txt");

  // Safe access requires explicit checking after open().
  if (ofs) {
    // Subsequent calls still need checks, but at least they are shielded
    // by this initial one.
  }

  ofs.close();
  ofs << "Whoops!\n"; // XXX: already closed, so this also just sets failbit
}

// This approach would seem better IMO:
void bar() {
  std_raii::ofstream ofs("foo.txt"); // throw on failure and catch wherever
  // do whatever, then close ofs on destruction ...
}

A better wording of the question might be: why is access to an unopened fstream ever worth having? Controlling open-file duration via handle lifetime does not seem to me to be a burden at all, but actually a safety benefit.

Semiotics answered 2/9, 2014 at 13:36 Comment(1)
Yes, it is definitely missing a throw_exception openmode value. It is possible to set exceptions() for later operations, but a throwing constructor would be better. – Dissolute

Although the other answers are valid and useful, I think the real reason is simpler.

The iostreams design is much older than most of the Standard Library and predates wide use of exceptions. I suspect that, in order to be compatible with existing code, the use of exceptions was made optional rather than the default behaviour for failure to open a file.

Also, your question is only really relevant to file streams; the other types of standard stream don't have open() or close() member functions, so their constructors don't throw if a file can't be opened :-)

For files, you may want to check that the close() call succeeded, so you know whether the data got written to disk. That's a good reason not to do it only in the destructor: by the time the object is destroyed it is too late to do anything useful about a failure, and you almost certainly don't want to throw an exception from a destructor. So a filebuf will call close in its destructor, but you can also do it manually before destruction if you want to.

In any case, I don't agree that it doesn't follow RAII conventions...

Why did the library designers choose their approach over having opening only in constructors that throw on a failure?

N.B. RAII doesn't mean you can't have a separate open() member in addition to a resource-acquiring constructor, or that you can't clean up the resource before destruction, e.g. unique_ptr has a reset() member.

Also, RAII doesn't mean you must throw on failure, or that an object can't be in an empty state, e.g. a unique_ptr can be constructed with a null pointer or default-constructed, so it too can point to nothing, and in some cases you need to check it before dereferencing.

File streams acquire a resource on construction and release it on destruction - that is RAII as far as I'm concerned. What you are objecting to is requiring a check, which smells of two-stage initialization, and I agree that is a bit smelly. It doesn't make it not RAII though.

In the past I have solved the smell with a CheckedFstream class, a simple wrapper that adds a single feature: throwing in the constructor if the stream couldn't be opened. In C++11 that's as simple as this:

#include <fstream>
#include <string>

struct CheckedFstream : std::fstream
{
  CheckedFstream() = default;

  CheckedFstream(std::string const& path, std::ios::openmode m = std::ios::in|std::ios::out)
  : std::fstream(path, m)
  { if (!is_open()) throw std::ios::failure("Could not open " + path); }
};
Euripides answered 2/9, 2014 at 13:49 Comment(6)
Interesting fact about why exceptions are not the default. – Canvasback
Yep, 'not' was a typo, now fixed. Also, I had been assuming until now that one-stage initialization was a core part of the RAII philosophy, not just a complementary design approach. – Semiotics
Two-stage init is not essential with fstreams: you can call the constructor, and you don't have to check is_open(); you can just start writing to it. That will fail (and set failbit, and maybe throw, depending on the exception mask) if the file wasn't opened. So you can use it in a normal one-stage-init manner, and it will clean up on destruction if needed. That is a valid form of RAII IMHO. – Euripides
@JonathanWakely I actually use a couple of wrappers virtually indistinguishable from this myself already. I do have a question about your checked close() comment though: is it really possible to do anything meaningful with an ofstream at that point, or anything that isn't better served by checking flush beforehand? I have never actually witnessed anyone checking close() and wonder whether it's just prevalent apathy or a deeper complexity. – Semiotics
Also, with regard to two-stage initialization: the objection to two-stage initialization is that it can leave the object in an unusable state, which must be tested for. This objection doesn't apply to IO (nor to smart pointers which support null pointer values), because streams can become unusable at any point in time, even after having been successfully constructed. So you have to test everywhere anyway. – Womanizer
I'm not sure that existing code had much to do with it. The standard broke an awful lot of existing streams code; I doubt that adding an exception would have bothered anyone. – Womanizer

This way you get more, and nothing less.

  • You get the same: you can still open the file via the constructor, and you still get RAII: the file is automatically closed at object destruction.

  • You get more: you can reuse the same stream to open another file, and you can close the file whenever you want, without being restricted to waiting for the object to go out of scope or be destroyed (this is very important).

  • You get nothing less: the advantage you see is not real. You say that your way you don't have to check at each operation. This is false: the stream can fail at any time, even if it opened the file successfully.

As for error checking vs throwing exceptions, see @PiotrS's answer. Conceptually I see no difference between having to check the return status and having to catch an exception. The error is still there; the difference is how you detect it. But as pointed out by @PiotrS, you can opt for either.

Canvasback answered 2/9, 2014 at 13:42 Comment(4)
The std::ios::exceptions() masking allows non-checked error propagation, as noted by Piotr. – Semiotics
@Semiotics I interpreted the question more along the lines of why you are allowed to explicitly open/close the stream at any time, rather than why error checking and not exceptions. – Canvasback
It is more about the open/close aspects, and I think I still disagree with your more/nothing-less assertions. Doing a declaration/open and then a check, or a declaration, an exception mask, and then an open, is clumsier than throwing on a failed instantiation. I also question whether the ability to redirect a file handle really buys anything. The costs of user-space handle construction/destruction are dwarfed by the underlying system calls, and descoping == closing prevents a small class of dumb access errors. – Semiotics
@Semiotics there are instances where keeping a file handle open prevents other processes from opening that file. Having the ability to close the handle at any time, rather than having to wait for object destruction, is crucial. As for redirecting, I agree it's not a big advantage, but you have the option nevertheless. – Canvasback

The library designers gave you an alternative:

std::ifstream file{};
file.exceptions(std::ifstream::failbit | std::ifstream::badbit);

try
{
    file.open(path); // now it will throw on failure
}
catch (const std::ifstream::failure& e)
{
}
Bichloride answered 2/9, 2014 at 13:40 Comment(3)
This is a very good point, but it still leaves the exception masking decoupled from instance creation, and it leaves open the possibility of accesses to non-opened files that have to be caught at run time. – Semiotics
@Jeff: it seems I don't get your question at all; closing an invalidly opened stream causes no problems: "If the operation fails (including if no file was open before the call), the failbit state flag is set for the stream." – Bichloride
Sorry, I meant that accesses to a file stream will never work before an open, or after either a close or a non-opening construction, but that the stream status (or an enabled exception) has to trigger a run-time error, instead of scoped variable lifetime preventing it at compile time. – Semiotics

The standard library file streams do provide RAII, in the sense that calling the destructor on one will close any file which happens to be open. At least in the case of output, however, this is an emergency measure, which should only be used if you have encountered another error and are not going to use the file being written anyway. (Good programming practice would be to delete it.) Generally, you need to check the status of the stream after you've closed it, and closing is an operation which can fail, so it shouldn't be left to the destructor.

For input, it's not so critical, since you'll have checked the status after the last input anyway, and most of the time you will have read until an input fails. But it does seem reasonable to have the same interface for both; in practice, however, you can usually just let the close in the destructor do its job for input.

With regard to open: you can just as easily do the open in the constructor, and for isolated uses like the one you show, this is probably the preferred solution. But there are cases where you might want to reuse an std::filebuf, opening it and closing it explicitly, and of course, in almost all cases, you will want to handle a failure to open the file immediately, rather than through some exception.

Womanizer answered 2/9, 2014 at 15:34 Comment(0)

It depends on what you are doing: reading or writing. You can encapsulate an input stream in an RAII way, but the same is not true for output streams. If the destination is a disk file or a network socket, NEVER, NEVER put fclose/close in a destructor, because you need to check the return value of fclose, and there is no way to report an error that occurs in a destructor. See How can I handle a destructor that fails.

Hardi answered 20/9, 2016 at 3:36 Comment(0)
