digitalmars.D.learn - If stdout is __gshared, why does this throw / crash?
- Atila Neves (15/15) Mar 05 2016 With a small number of threads, things work as intended in the
- Yuxuan Shui (2/17) Mar 05 2016 Could this be a bug in phobos/compiler?
- Marco Leise (12/18) Mar 05 2016 First thing I tried:
- Anon (17/32) Mar 05 2016 Note that `1000.iota.parallel` does *not* run 1000 threads.
- Marco Leise (5/8) Mar 05 2016 Meh. Too little drama. :p
- Atila Neves (7/26) Mar 06 2016 I see. Here's my problem: I want to make it so code not under my
- Marco Leise (11/11) Mar 05 2016 Got it now: https://issues.dlang.org/show_bug.cgi?id=15768
- Atila Neves (3/12) Mar 06 2016 Nice, good work!
With a small number of threads, things work as intended in the code below. But with 1000, on my machine it either crashes or throws an exception:

    import std.stdio;
    import std.parallelism;
    import std.range;

    void main() {
        stdout = File("/dev/null", "w");
        foreach(t; 1000.iota.parallel) {
            writeln("Oops");
        }
    }

I get, depending on the run, "Bad file descriptor", "Attempting to write to a closed file", or segfaults. What am I doing wrong?

Atila
Mar 05 2016
On Saturday, 5 March 2016 at 14:18:31 UTC, Atila Neves wrote:
> With a small number of threads, things work as intended in the code
> below. But with 1000, on my machine it either crashes or throws an
> exception:
>
> [...]
>
> I get, depending on the run, "Bad file descriptor", "Attempting to
> write to a closed file", or segfaults. What am I doing wrong?
>
> Atila

Could this be a bug in phobos/compiler?
Mar 05 2016
Am Sat, 05 Mar 2016 14:18:31 +0000
schrieb Atila Neves <atila.neves gmail.com>:

> void main() {
>     stdout = File("/dev/null", "w");
>     foreach(t; 1000.iota.parallel) {
>         writeln("Oops");
>     }
> }

First thing I tried:

    void main() {
        stdout = File("/dev/null", "w");
        foreach(t; 1000.iota.parallel) {
            stdout.writeln("Oops");
        }
    }

That does NOT segfault ... hmm.

--
Marco
Mar 05 2016
On Saturday, 5 March 2016 at 14:18:31 UTC, Atila Neves wrote:
> With a small number of threads, things work as intended in the code
> below. But with 1000, on my machine it either crashes or throws an
> exception:
>
> import std.stdio;
> import std.parallelism;
> import std.range;
>
> void main() {
>     stdout = File("/dev/null", "w");
>     foreach(t; 1000.iota.parallel) {
>         writeln("Oops");
>     }
> }

Note that `1000.iota.parallel` does *not* run 1000 threads. `parallel` just splits the work of the range up between the worker threads (likely 2, 4, or 8, depending on your CPU). I see the effect you describe with any parallel workload. Smaller numbers in place of 1000 aren't necessarily splitting things off to additional threads, which is why smaller numbers avoid the multi-threaded problems you are encountering.

> I get, depending on the run, "Bad file descriptor", "Attempting to
> write to a closed file", or segfaults. What am I doing wrong?
>
> Atila

`File` uses ref-counting internally to allow it to auto-close. `stdout` and friends are initialized in a special way such that they have a high initial ref-count. When you assign a new file to stdout, the ref count becomes one. As soon as one of your threads exits, this will cause stdout to close, producing the odd errors you are encountering on all the other threads.

I would avoid reassigning `stdout` and friends in favor of using a logger or manually specifying the file to write to if I were you.
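[A sketch of the fixed-pool behaviour described above. This snippet is illustrative, not from the thread; it assumes std.parallelism's default task pool, which by default has `totalCPUs - 1` worker threads, with the main thread also taking part:]

```d
import std.parallelism : parallel, totalCPUs;
import std.range : iota;
import std.stdio : writeln;
import core.thread : Thread;

void main() {
    bool[Thread] seen;  // records which threads actually ran iterations

    foreach (t; 1000.iota.parallel) {
        // guard the associative array against concurrent modification
        synchronized seen[Thread.getThis()] = true;
    }

    // 1000 iterations, but only the pool's workers (plus the main thread)
    // ever execute the loop body -- far fewer than 1000 threads.
    writeln("CPUs: ", totalCPUs, ", distinct threads used: ", seen.length);
}
```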
Mar 05 2016
Am Sun, 06 Mar 2016 01:10:58 +0000
schrieb Anon <anon anon.anon>:

> I would avoid reassigning `stdout` and friends in favor of using a
> logger or manually specifying the file to write to if I were you.

Meh. Too little drama. :p

--
Marco
Mar 05 2016
On Sunday, 6 March 2016 at 01:10:58 UTC, Anon wrote:
> Note that `1000.iota.parallel` does *not* run 1000 threads. `parallel`
> just splits the work of the range up between the worker threads
> (likely 2, 4, or 8, depending on your CPU). [...]

Err, right.

> `File` uses ref-counting internally to allow it to auto-close.
> `stdout` and friends are initialized in a special way such that they
> have a high initial ref-count. When you assign a new file to stdout,
> the ref count becomes one. As soon as one of your threads exits, this
> will cause stdout to close, producing the odd errors you are
> encountering on all the other threads.
>
> I would avoid reassigning `stdout` and friends in favor of using a
> logger or manually specifying the file to write to if I were you.

I see. Here's my problem: I want to make it so code not under my control doesn't get to write to stdout and stderr. I don't see any other way but to reassign stdout. Maybe I can manually bump up the ref count?

Atila
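[One possible workaround for the use case above -- a sketch, not from the thread: redirect the underlying C stream with `freopen` instead of reassigning the D-level `stdout`. The global `File` object is never replaced, so it keeps its high initial ref-count and copies made by `writeln()` can never drop it to zero. The cast is there because druntime declares the C `stdout` as `shared`:]

```d
import std.stdio : writeln;

void main() {
    import core.stdc.stdio : FILE, freopen, cstdout = stdout;

    // Swap the C FILE* in place, below std.stdio's ref-counting.
    // The D-level stdout File struct is untouched.
    freopen("/dev/null", "w", cast(FILE*) cstdout);

    writeln("Oops");  // discarded; no File copy is ever closed
}
```

This silences direct writes through `stdout`, though code that holds its own file descriptor 1 would need an OS-level `dup2` instead.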
Mar 06 2016
Got it now: https://issues.dlang.org/show_bug.cgi?id=15768

writeln() creates a copy of the stdout struct in a non-thread-safe way. If stdout has been assigned a File struct created from a file name, this copy includes a "racy" increment/decrement of a reference count on the underlying C-library FILE*. When the reference count erroneously reaches 0, the file is closed prematurely, and when glibc then tries to access its internal data the result is the observed SIGSEGV.

--
Marco
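[The race described above is an unsynchronized read-modify-write on the ref-count during struct copy and destruction. A simplified, hypothetical sketch of the difference -- this is not Phobos's actual code, just the shape of the bug and of an atomic fix:]

```d
import core.atomic : atomicOp;

struct RacyCounted {
    uint* refs;

    this(this) { ++*refs; }  // two threads can both read N and store N+1,
                             // losing an increment, so the count hits 0 early

    ~this() {
        if (refs !is null && --*refs == 0) {
            // close the underlying FILE* -- prematurely, under the race
        }
    }
}

struct SafeCounted {
    shared(uint)* refs;

    this(this) { atomicOp!"+="(*refs, 1); }  // atomic increment

    ~this() {
        if (refs !is null && atomicOp!"-="(*refs, 1) == 0) {
            // provably the last owner: safe to close
        }
    }
}
```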
Mar 05 2016
On Sunday, 6 March 2016 at 01:28:52 UTC, Marco Leise wrote:
> Got it now: https://issues.dlang.org/show_bug.cgi?id=15768
>
> writeln() creates a copy of the stdout struct in a non thread-safe
> way. [...]

Nice, good work!

Atila
Mar 06 2016