I would say that a lot of it comes down to how the system designer(s) are handling system safety at an architectural level.
Triplication
For instance, if the approach taken is triplicate redundancy with a trusted voter, then system trials/testing is going to be the major step in approving the implementation. Suppose that the development team for one of the triplicates chose to use Boost. If the system as a whole passed all its test vectors then one could argue that one didn't need to trawl through Boost itself looking for implementation errors. Obviously if all three triplicates had chosen to use Boost then that would be cause for concern: a bug in Boost would then be common to all three channels, and the scope of the test vectors needed to rule that out becomes unmanageable.
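To make the voter idea concrete, here is a minimal sketch (the types, the equality test and the fall-back behaviour are purely illustrative, not taken from any real system): an output is only accepted when at least two of the three independently developed channels agree.

```cpp
// 2-out-of-3 voter sketch: accept a value only if at least two of the
// three independently developed channels produce the same result.
// In a real system "agreement" would usually be a tolerance check on
// sensor/actuator values rather than exact integer equality.
bool vote(int a, int b, int c, int& out)
{
    if (a == b || a == c) { out = a; return true; }
    if (b == c)           { out = b; return true; }
    return false;  // no majority: raise a fault and drop to a safe state
}
```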
Triplication is a standard approach to handling the problem of using software resources like compilers, libraries and programmers, all of which are at risk of error. Boost is just another one of these. One could argue that Boost's shared pointers are clearly a way of reducing the risk of programmer error, and that, in the round, is most likely to be beneficial to the system as a whole.
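For example (illustrative only; the Sensor type is made up), a raw owning pointer forces the programmer to remember a matching delete on every exit path, whereas boost::shared_ptr releases the object automatically when the last owner goes away:

```cpp
#include <boost/shared_ptr.hpp>

struct Sensor { /* ... */ };

// With a raw pointer the programmer must remember to delete on every
// exit path, including early returns added later; boost::shared_ptr
// destroys the Sensor automatically when the last owner goes out of scope.
void process()
{
    boost::shared_ptr<Sensor> sensor(new Sensor());
    // ... use sensor; no delete to forget, even if an early return is added
}
```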
Important, but Not Safety Critical
Where it gets interesting is when triplication is not being used, and one is now in the realms of really having to trust stuff. Interestingly the way a lot of systems seem to get round the problem is to say that ultimately there is a human in control who is supervising and able to intervene in the event of system error.
For example the car industry has a set of programming rules called MISRA. The software for an ABS system is supposed to be written to this rule set, and the development tools are supposed to be set to enforce those rules on the source code. The idea is that this will reduce the risk of undetected bugs to an acceptable level. And because ultimately there is a driver driving the car they can always do their own cadence braking. And thus the car industry has avoided having to have a triplicate implementation of ABS.
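As a rough illustration of the style of code such rule sets push you towards (rule numbers and exact wording vary between MISRA editions, so treat this purely as a sketch):

```cpp
// Sketch of MISRA-style code: no dynamic allocation, a default clause on
// every switch, and a single exit point. Names and values are invented.
int brake_demand(int pedal_position)
{
    int demand = 0;
    switch (pedal_position)
    {
    case 0:
        demand = 0;
        break;
    case 1:
        demand = 50;
        break;
    default:        /* every switch is required to handle the unexpected case */
        demand = 0;
        break;
    }
    return demand;  /* single point of exit */
}
```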
They are extending the same philosophy to more complex car systems like adaptive cruise control and self-driving cars. Personally I think that such an extension is unreasonable for self-driving cars. The relevant legislation makes it clear that it is the 'driver's' fault if such a vehicle has a crash (i.e. you are still 'driving' it), but the glossy advertising won't dwell on that important aspect.
It's the same in the world of medical devices; there's supposed to be a nurse or someone monitoring the patient anyway, so the occasional blip is covered by that supervision. The whole thing is pretty poor in practice anyway: whilst the software for a medical device may have been written, tested and approved, quite often these things run on embedded Windows XP. They all get networked up and end up infested with computer viruses, etc. The FDA won't let you have an auto-update system in place; only the device vendor can update it, and of course they can't ever hope to keep up. So you end up with a nicely written, well tested piece of medical software running on top of an OS installation which has had all the world's hackers running around inside it doing god knows what. I think that the use of Boost in these circumstances is not going to add much to the overall system risk.
So if a MISRA-compliant toolchain offered Boost as part of that toolchain, then I don't see why that would be any different to a toolchain offering a standard C library. If the toolchain vendor is certifying it, then it's no different to the situation with anything else.
There are weaknesses with that approach. In my experience I have come across a MISRA-compliant toolchain in widespread use whose compiler turned out junk object code when all optimisations were turned on. I was actually able to verify this in the disassembly. I then took a look at the source code for the toolchain's standard C library, and it clearly wasn't itself written to the MISRA rule set, and furthermore it contained glaring, horrible bugs.
And yet there is no regulatory block to building, testing and selling a car ABS system using this toolchain so long as you tick the MISRA checkbox in the project settings. Adding Boost to that toolchain would hardly make matters worse.
Safety Critical Without Triplication
The final approach is no triplication and no human supervision. This is really hard because you then need formal proof of the correctness of every component of the toolchain, OS, drivers, chips, etc. AFAIK it's never been done for a truly safety-critical system like nuclear reactors, flight control avionics, or other systems that really will kill people if they go wrong.
The only thing that comes close, so far as I can tell, is Green Hills' compiler suite and their INTEGRITY operating system. They can give you (for a large fee) formal testing and verification evidence for every single line of the OS, all their libraries and their compiler, everything. If one were ever to attempt a truly safety-critical system without triplication, that would be a starting point.
I don't think they've done a C++11 compiler yet, though I have added Boost to their toolchain and it worked just fine (it wasn't in a safety-critical system, I hasten to add).
Conclusion
Certainly if outfits like Green Hills, with a well-deserved reputation for reliable and thoroughly tested toolchains, offer Boost then I think one would be in a good position to use it in a regulated system. However I doubt that the whole of Boost would be offered that way; they are more likely to stick to the language standards.
I also know that GCC has in the past been put through formal compiler validation testing so that it could be used in Stuff That Matters. I expect that will get repeated sooner or later for the more recent incarnations that have taken on aspects of Boost.