
digitalmars.D.learn - Determining @trusted-status

reply Clarice <cl ar.ice> writes:
It seems that @safe will be de jure, whether by the current state 
of DIP1028 or otherwise. However, I'm unsure how to responsibly 
determine whether an FFI may be @trusted: the type signature and 
the body. Should I run, for example, a C library through valgrind 
to observe any memory leaks/corruption? Is it enough to trust the 
authors of a library (e.g. SDL and OpenAL) such that applying 
@trusted is acceptable?
There's probably no one right answer, but I'd be very thankful 
for some clarity, regardless.
May 28 2020
next sibling parent Johannes Loher <johannes.loher fg4f.de> writes:
On Friday, 29 May 2020 at 00:09:56 UTC, Clarice wrote:
 It seems that @safe will be de jure, whether by the current 
 state of DIP1028 or otherwise. However, I'm unsure how to 
 responsibly determine whether an FFI may be @trusted: the type 
 signature and the body. Should I run, for example, a C library 
 through valgrind to observe any memory leaks/corruption? Is it 
 enough to trust the authors of a library (e.g. SDL and OpenAL) 
 such that applying @trusted is acceptable?
 There's probably no one right answer, but I'd be very thankful 
 for some clarity, regardless.
In theory, you should verify the code of the library you are using by whatever means are available. That can be very broad, ranging from reading the code, through static analysis tools and valgrind, to fuzzing. In practice, it really depends on how certain you need to be that your code is free of memory corruption errors, and on how much you trust the authors of the library (however, if they don't claim to have a safe interface, don't assume anything ;)).
May 28 2020
prev sibling next sibling parent ag0aep6g <anonymous example.com> writes:
On 29.05.20 02:09, Clarice wrote:
 It seems that @safe will be de jure, whether by the current state of 
 DIP1028 or otherwise. However, I'm unsure how to responsibly determine 
 whether an FFI may be @trusted: the type signature and the body. Should I 
 run, for example, a C library through valgrind to observe any memory 
 leaks/corruption? Is it enough to trust the authors of a library (e.g. 
 SDL and OpenAL) such that applying @trusted is acceptable?
 There's probably no one right answer, but I'd be very thankful for some 
 clarity, regardless.
There are two ways in which a function can be unsafe:

1) The function has a bug and doesn't behave as intended.
2) The function doesn't have a safe interface [1].

You're not expected to rule out the first kind; bugs can always 
happen. You are allowed to trust that the author of the library 
made no safety-critical mistakes.

But if the function is unsafe in the second way, i.e. it has special 
requirements for how to call it, and calling it incorrectly can 
lead to undefined behavior / memory corruption, then it cannot be 
@trusted. It can only be @system. In order to use the function in 
@safe code, you need to write an @trusted wrapper that provides a 
safe interface and makes sure that the @system function is called 
correctly.

[1] https://dlang.org/spec/function.html#safe-interfaces
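For illustration, here is a minimal sketch of that wrapper pattern (my own example, not from the spec; `safeLength` is a hypothetical name). C's `strlen` has an unsafe interface because it requires a null-terminated string, so the raw binding stays @system; the wrapper constructs the null-terminated buffer itself, so no input from @safe code can make the call unsafe:

```d
// Raw binding: calling this with a non-null-terminated string is
// undefined behavior, so it can only ever be @system.
extern (C) @system size_t strlen(scope const(char)* s);

// Hypothetical @trusted wrapper with a safe interface: it copies the
// slice and appends the '\0' itself, so its safety does not depend on
// what the caller passes in.
@trusted size_t safeLength(const(char)[] s)
{
    import std.string : toStringz; // copies and null-terminates
    return strlen(s.toStringz);
}
```

The key property is that the wrapper's safety is unconditional: every possible argument from @safe code is handled safely, which is what "safe interface" means in the spec section linked above.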
May 28 2020
prev sibling next sibling parent reply JN <666total wp.pl> writes:
On Friday, 29 May 2020 at 00:09:56 UTC, Clarice wrote:
 It seems that @safe will be de jure, whether by the current 
 state of DIP1028 or otherwise. However, I'm unsure how to 
 responsibly determine whether an FFI may be @trusted: the type 
 signature and the body. Should I run, for example, a C library 
 through valgrind to observe any memory leaks/corruption? Is it 
 enough to trust the authors of a library (e.g. SDL and OpenAL) 
 such that applying @trusted is acceptable?
 There's probably no one right answer, but I'd be very thankful 
 for some clarity, regardless.
I think most C FFI should be @system, even if it's for popular 
libraries like SDL. Whenever you have an API that takes a pointer 
and a size of array, you are risking buffer overflows and similar 
issues. It's very easy to mess up and pass the array length 
instead of array length * element.sizeof. A @trusted API would 
only accept a slice, which is much safer than raw pointers.

Alternatively you could just use @trusted blocks. Unsafe blocks 
limit the unsafe surface to the few lines that actually call 
unsafe code. @safe isn't about 100% bulletproof safety. @safe is 
(should be) about not having memory related errors outside of 
@trusted code, minimizing the surface area for errors.
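A small sketch of the slice-based approach (hypothetical names; `fill_buffer` stands in for any C function taking a pointer and a byte count): the wrapper computes the byte count from the slice itself, so the length * element.sizeof mistake described above becomes impossible at the call site.

```d
// Hypothetical C API: easy to misuse, so the raw binding stays @system.
extern (C) @system void fill_buffer(void* buf, size_t numBytes);

// Slice-taking wrapper: the byte count is derived from the slice, so a
// caller cannot pass a mismatched length or a dangling raw pointer.
// Marking it @trusted still assumes fill_buffer honors its documented
// contract and writes no more than numBytes.
@trusted void fillBuffer(T)(T[] buf)
{
    fill_buffer(buf.ptr, buf.length * T.sizeof);
}
```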
May 28 2020
parent ag0aep6g <anonymous example.com> writes:
On 29.05.20 08:28, JN wrote:
 Alternatively you could just use @trusted blocks. Unsafe blocks 
 limit the unsafe surface to the few lines that actually call 
 unsafe code. @safe isn't about 100% bulletproof safety. @safe is 
 (should be) about not having memory related errors outside of 
 @trusted code, minimizing the surface area for errors.
Note that an "@trusted block" is really a nested @trusted function 
being called immediately. Being an @trusted function, the "block" 
must have a safe interface. I.e., its safety cannot depend on its 
inputs. The inputs of a nested function include the variables of 
the surrounding function.

@trusted blocks often violate the letter of @trusted law, because 
people forget/ignore that. For example, the second @trusted block 
here is strictly speaking not allowed, because its safety depends 
on `p`:

void main() @safe
{
    import core.stdc.stdlib : free, malloc;

    int* p = () @trusted { return cast(int*) malloc(int.sizeof); } ();
    if (p is null) return;
    /* ... else: do something with p ... */
    () @trusted { free(p); } ();
}
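One way to satisfy the letter of the law (my own sketch, not from the post above; `withTempInt` is a hypothetical name) is to keep both the allocation and the free inside a single @trusted function whose interface is safe regardless of its inputs:

```d
// The whole malloc/free lifetime lives inside one @trusted function,
// so no @trusted block's safety depends on a variable it doesn't own.
// The scope annotations are meant to stop the delegate from escaping
// the pointer; note the compiler only enforces that with -dip1000.
@trusted void withTempInt(scope void delegate(scope int*) @safe dg)
{
    import core.stdc.stdlib : free, malloc;

    auto p = cast(int*) malloc(int.sizeof);
    if (p is null) return;
    scope (exit) free(p); // freed on every path out of this function
    dg(p);                // @safe callers never touch malloc/free
}

void main() @safe
{
    withTempInt((scope int* p) { *p = 42; });
}
```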
May 28 2020
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/28/20 8:09 PM, Clarice wrote:
 It seems that @safe will be de jure, whether by the current state of 
 DIP1028 or otherwise. However, I'm unsure how to responsibly determine 
 whether an FFI may be @trusted: the type signature and the body. Should I 
 run, for example, a C library through valgrind to observe any memory 
 leaks/corruption? Is it enough to trust the authors of a library (e.g. 
 SDL and OpenAL) such that applying @trusted is acceptable?
 There's probably no one right answer, but I'd be very thankful for some 
 clarity, regardless.
@trusted doesn't necessarily mean "bug free"; what it means is that, 
given the parameters to the function, it certifies that it will only 
do safe things with those parameters. Note that it can do whatever it 
wants elsewhere, as long as it doesn't violate the constraints of 
@safe that the caller needs. So whether you mark something @trusted 
or @system highly depends on the behavior of the function.

A classic example is in the documentation of @safe [1]: memcpy. 
memcpy does not provide a safe interface, because @safe code is 
allowed to use a pointer as long as it only accesses the one item the 
pointer refers to. However, memcpy will access a provided number of 
bytes *beyond* the item. Therefore, C's memcpy should be marked 
@system, not @trusted. But you can provide a @trusted interface to 
memcpy because you know the semantic guarantees for memcpy (i.e. what 
it is specified to do):

@trusted void safeMemcpy(T)(T[] dst, T[] src)
{
    // must be same length
    enforce(dst.length == src.length);
    // no overlap (undefined behavior otherwise)
    enforce(dst.ptr >= src.ptr + src.length ||
            src.ptr >= dst.ptr + dst.length);
    import core.stdc.string : memcpy;
    memcpy(dst.ptr, src.ptr, dst.length * T.sizeof);
}

There is no way to call this function and violate memory safety.

One has to be extra cautious, though, when passing in templated 
parameters. Hidden calls such as postblits and destructors can easily 
not be @safe, so in those cases it's wise to wrap the unsafe parts in 
@trusted lambda functions (as others have mentioned). In my example 
above, I know that arrays do not have these hidden calls, and I'm 
never directly using any of the elements, just pointers.

-Steve

[1] https://dlang.org/spec/function.html#safe-interfaces
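For completeness, a self-contained usage sketch of that wrapper idea (the `enforce` calls need `std.exception` in scope, which the snippet above leaves implicit; the wrapper body here restates it only so the example compiles on its own):

```d
import std.exception : enforce;

@trusted void safeMemcpy(T)(T[] dst, T[] src)
{
    enforce(dst.length == src.length);          // must be same length
    enforce(dst.ptr >= src.ptr + src.length ||  // no overlap
            src.ptr >= dst.ptr + dst.length);
    import core.stdc.string : memcpy;
    memcpy(dst.ptr, src.ptr, dst.length * T.sizeof);
}

void main() @safe
{
    int[4] a = [1, 2, 3, 4];
    int[4] b;
    safeMemcpy(b[], a[]); // callable from @safe code
    assert(b == [1, 2, 3, 4]);
}
```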
May 29 2020
prev sibling parent Clarice <cl ar.ice> writes:
I didn't know the spec was changed to include a section on 
@safe/@trusted/@system interfaces, because otherwise I wouldn't 
have made this thread. But regardless, thank you everyone for 
your time: your posts are very helpful.
May 29 2020