
digitalmars.dip.development - third draft: add bitfields to D

reply Walter Bright <newshound2 digitalmars.com> writes:
https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md

Adds introspection capabilities.

https://github.com/dlang/dmd/pull/16444
May 05
next sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md

 Adds introspection capabilities.

 https://github.com/dlang/dmd/pull/16444
Does it support the `:0` syntax of C fields?

```d
struct bf_t { int a:12; int :0; int b:10; }
bf_t f;
assert(f.b.bitoffset == 16);
```
May 21
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/21/2024 9:44 AM, Patrick Schluter wrote:
 Does it support the :0 syntax of C fields?
yes
May 24
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 5/6/24 01:08, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md
 
 Adds introspection capabilities.
 
 https://github.com/dlang/dmd/pull/16444
Thanks, this is an improvement (though I think there would be ways to not require complex special cases in introspective code that does not deeply care about bitfields). It seems these sentences are stale and should be deleted:
 There isn't a specific trait for "is this a bitfield". However, a bitfield can
be detected by taking the address of it being a compile time error.
 A bitfield can be detected by not being able to take the address of it.
This is still wrong and should be fixed:
 The value returned by .sizeof is the same as typeof(bitfield).sizeof.
(Bitfields may have a smaller memory footprint than their type. You can just move it to the `.offsetof` and `.alignof` section and use the same wording.) I am not sure about this:
 shared, const, immutable, __gshared, static, extern
 Are not allowed for bitfields.
Some elaboration is needed. For example: A `const` object can contain `const` bitfields even if the bitfields themselves were not declared `const`. What happens in this case? Regarding the "Controversy" section: The problem is not that there are no alternatives to bitfields (clearly there are), it is that they have the nicest syntax and semantic analysis support and yet they have pitfalls. It would be good to give more cumbersome syntax to those cases where pitfalls are in fact present.
 If one sticks with one field type (such as int), then the layout is
predictable in practice, although not in specification
I mostly care about practice. (E.g., D's specified floating-point semantics are completely insane, but LDC is mostly sane and portable by default, so I have not been forced to change languages.)
 A bit of testing will show what a particular layout is, after which it will be
reliable
No, absolutely not, which is the problem. If the behavior is not reliable, then testing on any one platform will be insufficient. Portability/deployability is a big selling point of D, and we should be wary of limiting that for no good reason.
 Note that the author would use a lot more bitfields if they were in the
language.
Which is why there should be basic protections in place against pitfalls.
May 24
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/24/2024 4:51 AM, Timon Gehr wrote:
 (E.g, D's specified floating-point semantics are 
 completely insane,
I presume you are talking about doing intermediate computations as 80 bit floats. This was inevitable with the x86 prior to XMM. With XMM, floats and doubles are evaluated with XMM instructions that do 32/64 bit respectively.
May 24
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 5/25/24 00:21, Walter Bright wrote:
 On 5/24/2024 4:51 AM, Timon Gehr wrote:
 (E.g, D's specified floating-point semantics are completely insane,
I presume you are talking about doing intermediate computations as 80 bit floats. This was inevitable with the x86 prior to XMM. With XMM, floats and doubles are evaluated with XMM instructions that do 32/64 bit respectively.
It was avoidable, as even the x87 allows controlling the mantissa size (which is insufficient but "more" correct) or flushing floats and doubles to memory. D compilers are allowed by the spec e.g. to use 64-bit XMM registers to compute with 32-bit float intermediates, with incorrect rounding behavior.
May 24
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/24/2024 11:52 PM, Timon Gehr wrote:
 It was avoidable, as even the x87 allows controlling the mantissa size (which
is 
 insufficient but "more" correct) or flushing floats and doubles to memory.
Flushing it to memory was correct but was terribly slow.
 D compilers are allowed by the spec e.g. to use 64-bit XMM registers to
compute 
 with 32-bit float intermediates, with incorrect rounding behavior.
Nobody implemented that.
May 25
parent reply Daniel N <no public.email> writes:
On Saturday, 25 May 2024 at 07:47:13 UTC, Walter Bright wrote:
 On 5/24/2024 11:52 PM, Timon Gehr wrote:
 It was avoidable, as even the x87 allows controlling the 
 mantissa size (which is insufficient but "more" correct) or 
 flushing floats and doubles to memory.
Flushing it to memory was correct but was terribly slow.
 D compilers are allowed by the spec e.g. to use 64-bit XMM 
 registers to compute with 32-bit float intermediates, with 
 incorrect rounding behavior.
Nobody implemented that.
Just saw "Excess precision support" in the gcc changelog, which might interest you. https://gcc.gnu.org/gcc-13/changes.html#cxx "-std=gnu++20 it defaults to -fexcess-precision=fast" I was aware of -ffast-math but not that the seemingly harmless enabling of gnu dialect extensions would change excess-precision!
May 26
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2024 1:05 AM, Daniel N wrote:
 I was aware of -ffast-math but not that the seemingly harmless enabling of gnu 
 dialect extensions would change excess-precision!
Supporting various kinds of fp math with a switch is just a bad idea, mainly because so few people understand fp math and its tradeoffs. It will also bork up code that is carefully tuned to the default settings.
Jun 09
parent reply Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Sunday, 9 June 2024 at 17:54:38 UTC, Walter Bright wrote:
 On 5/26/2024 1:05 AM, Daniel N wrote:
 I was aware of `-ffast-math` but not that the seemingly 
 harmless enabling of gnu dialect extensions would change 
 excess-precision!
Supporting various kinds of fp math with a switch is just a bad idea, mainly because so few people understand fp math and its tradeoffs. It will also bork up code that is carefully tuned to the default settings.
Every once in a while, I believe floating-point numbers are such delicate tools, they should be disabled by default and require a compiler switch to enable. Something like `--enable-floating-point-types--yes-i-know-what-i-am-doing-with-them--i-swear-i-read-the-docs`. And yeah, this is tongue in cheek, but I taught applied numerics and programming for mathematicians for 4 years, and even I get it wrong sometimes. In my work, we use arbitrary precision rational numbers because they’re fool-proof and we don’t need any transcendental functions.
Jun 12
parent reply Dukc <ajieskola gmail.com> writes:
Quirin Schroll kirjoitti 12.6.2024 klo 16.25:
 On Sunday, 9 June 2024 at 17:54:38 UTC, Walter Bright wrote:
 On 5/26/2024 1:05 AM, Daniel N wrote:
 I was aware of `-ffast-math` but not that the seemingly harmless 
 enabling of gnu dialect extensions would change excess-precision!
Supporting various kinds of fp math with a switch is just a bad idea, mainly because so few people understand fp math and its tradeoffs. It will also bork up code that is carefully tuned to the default settings.
Every once in a while, I believe floating-point numbers are such delicate tools, they should be disabled by default and require a compiler switch to enable. Something like `--enable-floating-point-types--yes-i-know-what-i-am-doing-with-them--i-swear-i-read-the-docs`. And yeah, this is tongue in cheek, but I taught applied numerics and programming for mathematicians for 4 years, and even I get it wrong sometimes. In my work, we use arbitrary precision rational numbers because they’re fool-proof and we don’t need any transcendental functions.
My rule of thumb - if I don't have a better idea - is to treat FP numbers like I'd treat readings from a physical instrument: there's always going to be a slight "random" factor in my calculations. I can't trust the `==` operator with an FP expression result, just like I can't trust that I'll get exactly the same result if I measure the length of the same metal rod twice. Would you agree that this is a good basic rule?
Jun 12
parent reply Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 12 June 2024 at 21:05:39 UTC, Dukc wrote:
 Quirin Schroll kirjoitti 12.6.2024 klo 16.25:
 On Sunday, 9 June 2024 at 17:54:38 UTC, Walter Bright wrote:
 On 5/26/2024 1:05 AM, Daniel N wrote:
 I was aware of `-ffast-math` but not that the seemingly 
 harmless enabling of gnu dialect extensions would change 
 excess-precision!
Supporting various kinds of fp math with a switch is just a bad idea, mainly because so few people understand fp math and its tradeoffs. It will also bork up code that is carefully tuned to the default settings.
Every once in a while, I believe floating-point numbers are such delicate tools, they should be disabled by default and require a compiler switch to enable. Something like `--enable-floating-point-types--yes-i-know-what-i-am-doing-with-them--i-swear-i-read-the-docs`. And yeah, this is tongue in cheek, but I taught applied numerics and programming for mathematicians for 4 years, and even I get it wrong sometimes. In my work, we use arbitrary precision rational numbers because they’re fool-proof and we don’t need any transcendental functions.
My rule of thumb - if I don't have a better idea - is to treat FP numbers like I'd treat readings from a physical instrument: there's always going to be a slight "random" factor in my calculations. I can't trust `==` operator with FP expression result just like I can't trust I'll get exactly the same result if I measure the length of the same metal rod twice. Would you agree this being a good basic rule?
That’s so fundamental, it’s not even a good first step. It’s a good first half-step. The problem isn’t simple rules such as “don’t use `==`.” The problem isn’t knowing there are rounding errors. If you know that, you might say: Obviously, that means I can’t trust every single digit printed. True, but that’s all just the beginning.

If you implement an algorithm, you have to take into account how rounding errors propagate through the calculations. The issue is that you can’t do that intuitively. You just can’t. You can intuit _some_ obvious problems. Generally speaking, if you implement a formula, you must extract from the algorithm what exactly you are doing and then calculate the so-called condition, which tells you if errors add up. While that sounds easy, it can be next to impossible for non-linear problems. (IIRC, for linear ones, it’s always possible, it may just be a lot of work in practice.)

Not to mention other quirks such as `==` not being an equivalence relation, `==` equal values not being substitutable, and lexically ordering a bunch of arrays of float values being huge fun. I haven’t touched FPs in years, and I’m not planning to do so in any professional form maybe ever. If my employer needed something like FPs from me, I’d suggest using rationals unless those are a proven bottleneck.
Jun 13
parent reply Dom DiSc <dominikus scherkl.de> writes:
On Thursday, 13 June 2024 at 19:50:34 UTC, Quirin Schroll wrote:
 [...] If you implement an algorithm, you have to take into 
 account how rounding errors propagate through the calculations. 
 The issue is that you can’t do that intuitively. You just can’t.
Best is to use an interval type that says the result is between the lower and the upper bound. This contains both the error and the accuracy information. E.g. sqrt(2) may be ]1.414213, 1.414214[, so you know the exact result is in this interval and you can't trust digits after the 6th. If you now square this, the result is ]1.9999984, 2.0000012[, which for sure contains the correct answer. The comparison operators on these types are ⊂, ⊃, ⊆ and ⊇ - which of course can be not only true or false, but also ND (not defined) if the two intervals intersect. So checks would ask something like `if(sqrt(2)^^2 ⊂ ]1.999, 2.001[)`; they don't ask only for the value but also for the allowed error.
Jun 13
parent reply Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Friday, 14 June 2024 at 06:39:59 UTC, Dom DiSc wrote:
 On Thursday, 13 June 2024 at 19:50:34 UTC, Quirin Schroll wrote:
 [...] If you implement an algorithm, you have to take into 
 account how rounding errors propagate through the 
 calculations. The issue is that you can’t do that intuitively. 
 You just can’t.
Best is to use an interval-type, that says the result is between the lower and the upper bound. This contains both the error and the accuracy information. e.g. sqrt(2) may be ]1.414213, 1.414214[, so you know the exact result is in this interval and you can't trust digits after the 6th. If now you square this, the result is ]1.9999984, 2.0000012[, which for sure contains the correct answer. The comparison operators on these types are ⊂, ⊃, ⊆ and ⊇ - which of course can not only be true or false, but also ND (not defined) if the two intervals intersect. so checks would ask something like "if(sqrt(2)^^2 ⊂ ]1.999, 2.001[)", so they don't ask only for the value but also for the allowed error.
[Note: The following contain fractions which only render nice on some fonts, _excluding_ the Dlang forum font.]

Interval arithmetic (even on precise numbers such as rationals) fails when the operations blow up the bound to infinity. This can happen for two reasons: the actual bounds blow up (i.e. the algorithm isn’t well-conditioned, evidenced by analysis), or analysis shows the bounds are actually bounded, but interval arithmetic doesn’t see it. An example of the latter case:

Let *x* ∈ [1⁄4, 3⁄4]. Set

 *y* = 1⁄2 − *x* ⋅ (1 − *x*) and
 *z* = 1 / *y*

Then, standard interval arithmetic gives

 *y* ∈ [1⁄2, 1⁄2] − [1⁄16, 9⁄16] = [−1⁄16, 7⁄16]
 *z* ∈ [−∞, −16] ∪ [16⁄7, +∞]

However, rather simple analysis shows:

 *y* ∈ [1⁄2, 1⁄2] − [3⁄16, 1⁄4] = [1⁄4, 5⁄16]
 *z* ∈ [16⁄5, 4]

Of course, interval arithmetic isn’t mathematically wrong; after all, [16⁄5, 4] ⊂ [−∞, −16] ∪ [16⁄7, +∞]. But the left-hand side interval is a useful bound (it’s optimal, in fact), whereas the right-hand side one is near-useless as it contains infinity.

The big-picture reason is that interval arithmetic doesn’t work well when a value interacts with itself: it counts all occurrences as independent values. In the example above, in *x* ⋅ (1 − *x*), the two factors aren’t independent. And as you can see, for the upper bound of *y*, that’s not even an issue: if interval arithmetic determined that *y* is at most 7⁄16 (instead of 5⁄16) and therefore *z* is at least 16⁄7 (instead of 16⁄5), that’s not too bad. The issue is the lower bound.

Essentially, what one has to do is make every expression containing a variable more than once into its own (primitive) operation, i.e. define *f*(*x*) = *x* ⋅ (1 − *x*), then determine how *f* acts on intervals, and use that. An interval type (implemented in D or any other programming language) probably can’t easily automate that.
I could maybe see a way using expression templates or something like that where it could determine that two variables are in fact the same variable, but that is a lot of work.
Jun 14
parent Dukc <ajieskola gmail.com> writes:
Quirin Schroll kirjoitti 14.6.2024 klo 15.15:
 Interval arithmetic (even on precise numbers such as rationals) fails 
 when the operations blow up the bound to infinity. This can happen 
 because of two reasons: The actual bounds blow up (i.e. the algorithm 
 isn’t well-conditioned, evidenced by analysis), or analysis shows the 
 bounds are actually bounded, but interval arithmetic doesn’t see it. An 
 example for the latter case:
We're drifting off-topic as this isn't related to bitfields anymore. It was largely my fault since I kinda started it by asking you the unrelated guestion about floats - sorry. Anyway, let's move this discussion to another thread or stop it here.
Jun 14
prev sibling next sibling parent reply Dave P. <dave287091 gmail.com> writes:
On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md

 Adds introspection capabilities.

 https://github.com/dlang/dmd/pull/16444
Wouldn’t a more D-like syntax be to have the number of bits following the type instead of following the variable name? This would be more consistent with how D arrays are different than C arrays. Example:

```d
struct Foo {
    uint:4 x;
    uint:5 y;
}
```
Jul 03
next sibling parent reply Dom DiSc <dominikus scherkl.de> writes:
On Wednesday, 3 July 2024 at 19:02:59 UTC, Dave P. wrote:
 Wouldn’t a more D-like syntax be to have the number of bits 
 following the type instead of following the variable name? This 
 would be more consistent with how D arrays are different than C 
 arrays.

 ```D
 struct Foo {
     uint:4 x;
     uint:5 y;
 }
 ```
I like this. As for arrays it's much more readable than the C syntax.
Jul 04
next sibling parent Daniel N <no public.email> writes:
On Thursday, 4 July 2024 at 12:20:52 UTC, Dom DiSc wrote:
 On Wednesday, 3 July 2024 at 19:02:59 UTC, Dave P. wrote:
 Wouldn’t a more D-like syntax be to have the number of bits 
 following the type instead of following the variable name? 
 This would be more consistent with how D arrays are different 
 than C arrays.

 ```D
 struct Foo {
     uint:4 x;
     uint:5 y;
 }
 ```
I like this. As for arrays it's much more readable than the C syntax.
Yes, good idea.
Jul 04
prev sibling parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Thursday, 4 July 2024 at 12:20:52 UTC, Dom DiSc wrote:
 On Wednesday, 3 July 2024 at 19:02:59 UTC, Dave P. wrote:
 Wouldn’t a more D-like syntax be to have the number of bits 
 following the type instead of following the variable name? 
 This would be more consistent with how D arrays are different 
 than C arrays.

 ```D
 struct Foo {
     uint:4 x;
     uint:5 y;
 }
 ```
I like this. As for arrays it's much more readable than the C syntax.
What I dislike about this is that `uint:4` and `ushort:4` would be different types, but that makes no sense. The only way this would make sense to me is if `uint` were fixed and not an arbitrary type, but a more natural approach to D would be using a `__traits` trait or adding new keywords e.g. `__bits` and `__ubits` with them producing `__bits(4)`.

To pack those in structs, we could be cheeky and extend the `align` concept with fractions. Alignment is in bytes, but two `__bits(4)` can be aligned so that they overlap, and `align(0.5)` could do that. Of course, the argument to `align` must be a multiple of `0.125` and be a power of 2. Or we just allow `align(0.125)` as a special exception to specify that `__bits(n)` fields of a struct are packed.

```D
struct Foo {
    __bits(3) x;
    __bits(5) y;
}
```

There, `x` and `y` have different addresses and `Foo.sizeof` is 2.

```D
struct Foo {
    align(0.125) __bits(3) x;
    align(0.125) __bits(5) y;
}
```

Here, `x` and `y` are stored in the same byte and `Foo.sizeof` is 1.
Jul 12
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On Wednesday, 3 July 2024 at 19:02:59 UTC, Dave P. wrote:
 On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md

 Adds introspection capabilities.

 https://github.com/dlang/dmd/pull/16444
Wouldn’t a more D-like syntax be to have the number of bits following the type instead of following the variable name? This would be more consistent with how D arrays are different than C arrays. Example:

```d
struct Foo {
    uint:4 x;
    uint:5 y;
}
```
This is a great idea. And then you can extract this as a type as well!

I see potential for some nice things here. For instance VRP:

```d
uint:4 value(){
   // return 200; // error
   return 7; // ok
}

int[16] arr;
arr[value()] == 5; // VRP makes this valid
```

Walter, is this possible to do? This would make a lot of the traits parts unnecessary -- you could introspect the bits just from the type. e.g.

```d
enum bitsForType(T : V:N, V, size_t N) = N;
```

-Steve
Jul 04
next sibling parent Daniel N <no public.email> writes:
On Thursday, 4 July 2024 at 18:34:00 UTC, Steven Schveighoffer 
wrote:
 This is a great idea. And then you can extract this as a type 
 as well!

 I see potential for some nice things here. For instance VRP:

 ```d
 uint:4 value(){
    // return 200; // error
    return 7; // ok
 }

 int[16] arr;
 arr[value()] == 5; // VRP makes this valid
 ```

 Walter, is this possible to do? This would make a lot of the 
 traits parts unnecessary -- you could introspect the bits just 
 from the type. e.g.

 ```d
 enum bitsForType(T : V:N, V, size_t N) = N;
 ```

 -Steve
Fits nicely with C23 _BitInt. https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2709.pdf
Jul 05
prev sibling next sibling parent Dom DiSc <dominikus scherkl.de> writes:
On Thursday, 4 July 2024 at 18:34:00 UTC, Steven Schveighoffer 
wrote:

 ```d
 uint:4 value(){
    // return 200; // error
    return 7; // ok
 }
 ```
Yes, super! This would also allow us to define

```d
alias uint:1 bool;
```

and be sure that assigning a value larger than 1 is an error!
Jul 05
prev sibling next sibling parent reply Nick Treleaven <nick geany.org> writes:
On Thursday, 4 July 2024 at 18:34:00 UTC, Steven Schveighoffer 
wrote:
 On Wednesday, 3 July 2024 at 19:02:59 UTC, Dave P. wrote:
 ```
 struct Foo {
     uint:4 x;
     uint:5 y;
 }
 ```
This is a great idea. And then you can extract this as a type as well!
A weird restricted type. Didn't dmd use to have a `bit` type that was abandoned?
 I see potential for some nice things here. For instance VRP:

 ```d
 uint:4 value(){
    // return 200; // error
    return 7; // ok
 }

 int[16] arr;
 arr[value()] == 5; // VRP makes this valid
 ```
Did you mean `= 5`? If so the assignment already compiles, so VRP would make no difference for that code.
Jul 05
next sibling parent Nick Treleaven <nick geany.org> writes:
On Friday, 5 July 2024 at 10:19:56 UTC, Nick Treleaven wrote:
 On Thursday, 4 July 2024 at 18:34:00 UTC, Steven Schveighoffer 
 wrote:
 int[16] arr;
 arr[value()] == 5; // VRP makes this valid
 ```
Did you mean `= 5`? If so the assignment already compiles, so VRP would make no difference for that code.
Well, it would allow skipping the bounds check.
Jul 05
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/5/2024 3:19 AM, Nick Treleaven wrote:
 dmd used to have a `bit` type that was abandoned?
Yes.
Jul 16
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/4/2024 11:34 AM, Steven Schveighoffer wrote:
 Walter, is this possible to do?
D originally had the "bit" data type, which represented a single bit. It was a great idea. But I ran into endless problems with it. It turned the type system on its ear. You couldn't take a pointer/reference to it. It had problems being the subject of a type constructor. For example, what does it mean to have one bit be const and the adjacent bit be mutable? I spent so much time trying to work it in that the benefits of it became swamped by the complexity. (Sort of like how "volatile" infects everything in C++, way beyond the scope of what volatile actually is.) So I gave up and removed it all. Haven't missed it :-/
Jul 16
next sibling parent reply Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 17 July 2024 at 03:24:28 UTC, Walter Bright wrote:
 On 7/4/2024 11:34 AM, Steven Schveighoffer wrote:
 Walter, is this possible to do?
D originally had the "bit" data type, which represented a single bit.
I would honestly love a `bit` type. Make today’s `bool` the bit type and re-introduce real booleans that interact with integer types similar to how pointer types do: You can cast pointers to integers and vice versa, but not implicitly. Conceptually, a `bit` is a number that’s 0 or 1. It’s e.g. ideal to represent numerical carry. `bool` could be an enum type with backing of a `bit`. The wrong turn on the original `bit` type probably was the idea to be able to point to individual bits.
Jul 17
parent reply Dom DiSc <dominikus scherkl.de> writes:
On Thursday, 18 July 2024 at 00:02:15 UTC, Quirin Schroll wrote:
 The wrong turn on the original `bit` type probably was the idea 
 to be able to point to individual bits.
Why? As pointers are 64-bit, the highest three bits will never be used anyway (at least not for the next 20 years). Simply shift all pointers 3 bits to the right and use the lower 3 bits to indicate the bit offset. So it's easy to get the address of a bit: it's the byte address + the bit offset, so all the info is simply in the pointer. In addition, make the length also a bit length, so we could define 2-bit or 4-bit types without any problem. In the age of 64-bit, byte- or word-wise addressing makes no sense anymore.
Jul 18
parent reply IchorDev <zxinsworld gmail.com> writes:
On Thursday, 18 July 2024 at 08:52:35 UTC, Dom DiSc wrote:
 Why? As pointers are 64bit, the highest three bits will never 
 be used anyway (at least not for the next 20 years).
This is a stupidly unreliable and highly system-dependent hack: https://stackoverflow.com/a/18426582 How will you make this work on 32-bit systems? How will you make this work on iOS—a 64-bit system where pointers are 32-bit? How will you make this work on systems with 64-bit pointers that have all of their bits reserved?
 Simply shift all pointers 3bits to the right and use the lower 
 3 bits to indicate the bitoffset.
That will only work if they are aligned to 8-byte boundaries.
Jul 20
parent reply Dom DiSc <dominikus scherkl.de> writes:
On Saturday, 20 July 2024 at 17:32:40 UTC, IchorDev wrote:
 On Thursday, 18 July 2024 at 08:52:35 UTC, Dom DiSc wrote:
 Why? As pointers are 64bit, the highest three bits will never 
 be used anyway (at least not for the next 20 years).
This is a stupidly unreliable and highly system-dependant hack: https://stackoverflow.com/a/18426582 How will you make this work on 32-bit systems?
Use 64-bit pointers anyway and translate them before handing them over to the system. How is memory above 4GB handled on such systems? The answer is: there is already a translation from process address space to hardware addresses. This needs only a little adjustment.
 How will you make this work on iOS—a 64-bit system where 
 pointers are 32-bit?
Same of course.
 How will you make this work on systems with 64-bit pointers 
 that have all of their bits reserved?
Not a problem unless those bits are used for other purposes by the system.
 Simply shift all pointers 3bits to the right and use the lower 
 3 bits to indicate the bitoffset.
That will only work if they are aligned to 8-byte boundaries.
?!? Why? I mean, alignment may be required by other means, but that translates only to ignoring the lower bits and fiddling out the requested bits with shifts. Of course all this is just a workaround unless processors exist that support bit addresses directly. But I see no reason why that should be problematic at all. The low address lines don't even need to exist. On the 68000 there was also no line for odd addresses, so you needed to fetch 16 bits and then shift if access to an odd byte was necessary. And it worked.
Jul 20
parent reply IchorDev <zxinsworld gmail.com> writes:
On Saturday, 20 July 2024 at 20:18:32 UTC, Dom DiSc wrote:
 Use 64bit pointers anyway and translate them before handing it 
 over to the system.
Which will take a whole extra register, defeating the point of trying to cram data into pointers. Just using a different register is a much better solution in general; it's just a bit wasteful. If you care so much about waste, though, you shouldn't be using a language with slices—in D they take **2 times** more space than a pointer when you could just be using null-terminated arrays instead.
 [...] there is already a translation from process address space 
 to hardware addresses.
And this is done by the kernel, so I don't see how it's relevant to this conversation unless we are re-writing iOS' kernel?
 How will you make this work on systems with 64-bit pointers 
 that have all of their bits reserved?
Not a problem unless those bits are used for other purposes by the system.
So what you are proposing will not work for those systems.
 Simply shift all pointers 3bits to the right and use the 
 lower 3 bits to indicate the bitoffset.
That will only work if they are aligned to 8-byte boundaries.
?!? Why? I mean, alignment maybe required also by other means, but that translate only to ignoring the lower bits and fiddling out the requested bits by shifts.
Let's say we have a 64 bit pointer (U='unused'; P=pointer bits; X=zeroed bits; C=our data):

`UUUUUUUU UUUUUUUU PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP`

Now we shift 3 bits to the right:

`XXXUUUUU UUUUUUUU UUUPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP`

We just lost the 3 least significant bits of the original pointer, so unless it was storing a multiple of 8 we just lost important data. Now we will use the lower 3 bits to indicate the bit offset:

`XXXUUUUU UUUUUUUU UUUPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPCCC`

We just overwrote more of the existing pointer… so actually it would have to be aligned to a multiple of 64, or we just lost important data.

Also the kernel often uses these 'unused' bits in the pointer, so we would have to tread lightly, and compiled code would have to know not just what OS it's targeting, but also what *kernel version* it's targeting in case of changes to this arrangement. In practice it's not as bad as it sounds, but it could be a frustrating headache to debug. Lastly, in general bit-packing optimisations like this are terribly slow. It's probably just not worth the tiny memory gain.
Jul 20
parent reply Dom DiSc <dominikus scherkl.de> writes:
On Sunday, 21 July 2024 at 06:14:05 UTC, IchorDev wrote:
 Let's say we have a 64 bit pointer (U='unused'; P=pointer bits; 
 X=zeroed bits ; C=our data):
 `UUUUUUUU UUUUUUUU PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP 
 PPPPPPPP`
 Now we shift 3 bits to the right:
 `XXXUUUUU UUUUUUUU UUUPPPPP PPPPPPPP PPPPPPPP PPPPPPPP PPPPPPPP 
 PPPPPPPP`
 We just lost the 3 least significant bits of the original 
 pointer, so unless it was storing a multiple of 8 we just lost 
 important data.
No. This is a misunderstanding. My idea is to store all pointers as a "61-bit address" + "3-bit bit offset" (0..7 within a byte). No lower bits will be lost. To read, we give the system only the address (so p>>3) and then mask out the requested bit. Of course this address will contain 3 leading (unused) zeroes. As long as the system doesn't abuse these unused bits, everything is fine.

Also this is no "extra data". It's just adding the 3 missing address lines that are necessary to address single bits instead of blocks of 8 bits (yes, bytes are just an 8-bit alignment; they shouldn't be fundamental, but as long as these 3 lines are missing, we need a workaround like in the 68000 times, where another line was missing).

And yes, the workaround is time consuming, but really not that bad. And less bad if you require "even" types to be aligned to their size, as is also required for bytes and shorts or even for ints on 64-bit systems.
Jul 21
parent reply IchorDev <zxinsworld gmail.com> writes:
On Sunday, 21 July 2024 at 10:25:29 UTC, Dom DiSc wrote:
 bytes are just a 8bit alignment, they shouldn't be fundamental 
 - but as long as these 3 lines are missing, we need a 
 workaround like in the 68000er times
Multi-bit alignment (e.g. 8-bit) is a means of simplifying and therefore speeding up the CPU architecture. Having 3 lines for individual bits would be a huge waste of electricity. As it is, I think being able to address individual bytes is a bit antiquated, especially with new CPUs heavily favouring pointer-size alignment.
Jul 21
next sibling parent reply Dom DiSc <dominikus scherkl.de> writes:
On Sunday, 21 July 2024 at 16:19:04 UTC, IchorDev wrote:
 Multi-bit alignment (e.g. 8-bit) is a means of simplifying and 
 therefore speeding up the CPU architecture. Having 3 lines for 
 individual bits would be a huge waste of electricity. As it is, 
 I think being able to address individual bytes is a bit 
 antiquated, especially with new CPUs heavily favouring 
 pointer-size alignment.
Vice Versa. The address space of 64bit is so huge, having extra lines for all the unused high bits is a waste of electricity. Using them to address individual bits enables us to write consistent, less complex code, e.g. no invalid patterns for basic types with gaps (e.g. boolean). And this code need not be slower - alignment and shifting can be optimized away in the background / hardware, the compiler can arrange variables to minimize gaps and speed up access by aligning.
Jul 21
parent claptrap <clap trap.com> writes:
On Sunday, 21 July 2024 at 20:56:00 UTC, Dom DiSc wrote:

 Vice Versa. The address space of 64bit is so huge, having extra 
 lines for all the unused high bits is a waste of electricity. 
 Using them to address individual bits enables us to write 
 consistent, less complex code, e.g. no invalid patterns for 
 basic types with gaps (e.g. boolean). And this code need not be 
 slower - alignment and shifting can be optimized away in the 
 background / hardware, the compiler can arrange variables to 
 minimize gaps and speed up access by aligning.
Neither x64 nor ARM uses all 64 bits, AFAIK they only use the bottom 48 bits, the upper 16 should be zeroed.
Jul 22
prev sibling parent reply claptrap <clap trap.com> writes:
On Sunday, 21 July 2024 at 16:19:04 UTC, IchorDev wrote:
 On Sunday, 21 July 2024 at 10:25:29 UTC, Dom DiSc wrote:
 bytes are just a 8bit alignment, they shouldn't be fundamental 
 - but as long as these 3 lines are missing, we need a 
 workaround like in the 68000er times
Multi-bit alignment (e.g. 8-bit) is a means of simplifying and therefore speeding up the CPU architecture. Having 3 lines for individual bits would be a huge waste of electricity. As it is, I think being able to address individual bytes is a bit antiquated, especially with new CPUs heavily favouring pointer-size alignment.
x86 CPUs favour type alignment not pointer size alignment, bytes 1 align, shorts 2 align and so on. It's mostly to avoid buggering up the store to load forwarding IIRC. Which is basically a way of making recent writes faster to retrieve when they are loaded again. And as far as I know ARM is the same.
Jul 22
parent IchorDev <zxinsworld gmail.com> writes:
On Monday, 22 July 2024 at 21:23:04 UTC, claptrap wrote:
 x86 CPUs favour type alignment not pointer size alignment, 
 bytes 1 align, shorts 2 align and so on.

 It's mostly to avoid buggering up the store to load forwarding 
 IIRC. Which is basically a way of making recent writes faster 
 to retrieve when they are loaded again.
Pointer size was me simplifying for the sake of brevity, but I was taught that i386-based processors are generally fastest with 4-byte memory alignment, and AMD64-based processors are generally fastest with 8-byte memory alignment.
Jul 23
prev sibling parent Dave P. <dave287091 gmail.com> writes:
On Wednesday, 17 July 2024 at 03:24:28 UTC, Walter Bright wrote:
 On 7/4/2024 11:34 AM, Steven Schveighoffer wrote:
 Walter, is this possible to do?
D originally had the "bit" data type, which represented a single bit. It was a great idea. But I ran into endless problems with it. It turned the type system on its ear. You couldn't take a pointer/reference to it. It had problems being the subject of a type constructor. For example, what does it mean to have one bit be const and the adjacent bit be mutable?

I spent so much time trying to work it in that the benefits of it became swamped by the complexity. (Sort of like how "volatile" infects everything in C++, way beyond the scope of what volatile actually is.) So I gave up and removed it all. Haven't missed it :-/
This is a misunderstanding of my post. I didn’t intend to introduce a new type, just that the syntax changes. D prefers modifiers of types to be on the left-hand side instead of C’s weird postfix array notation. So D bitfields would similarly be next to the type instead of after the field name.
Jul 20
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md

 Adds introspection capabilities.

 https://github.com/dlang/dmd/pull/16444
"In practice, however, if one sticks to int, uint, long and ulong bitfields, they are laid out the same."

This has been proven false: https://forum.dlang.org/post/gsofogcdmcvrmcfxhusy forum.dlang.org

I no longer agree that this is a good feature to add, at least with the deferral of specification to the incumbent C compiler.

-Steve
Jul 05
prev sibling next sibling parent Dave P. <dave287091 gmail.com> writes:
On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md

 Adds introspection capabilities.

 https://github.com/dlang/dmd/pull/16444
Is it possible to make `bool` bitfields portable? Most of the time I want to use bitfields in C is to optimize boolean fields.
Jul 20
prev sibling next sibling parent Dave P. <dave287091 gmail.com> writes:
On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md

 Adds introspection capabilities.

 https://github.com/dlang/dmd/pull/16444
C23 introduces `_BitInt(N)` which adds integers of a specific bit width that don’t promote. Should D bitfields be implemented in terms of this new type instead of the legacy bitfields with all their problems? See https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3096.pdf for a draft version of the C23 standard (possibly there’s a newer version but I gave up digging through their archives).
Jul 20
prev sibling next sibling parent reply IchorDev <zxinsworld gmail.com> writes:
On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md
I’ve spent a long time observing the debate about bitfields in D, so now it’s time for me to give my feedback.

Bitfields are an incredibly bare-bones feature that address only a small subset of the difficulties of managing bit-packed data, are difficult to expand upon, and are arbitrarily relegated to a field of an aggregate for maximum inconvenience. The DIP itself even points out that ‘many alternatives are available’ for situations where bitfields aren’t an appropriate solution. This serves as an admission that bitfields are not a very useful feature outside of C interoperability, because programmers expect and want structs to be laid out how they choose, not in some arbitrary way that’s compatible with the conventions of C compilers. You cannot choose how under/overflow are handled, have different types for different collections of bits in the field safely, or even construct a bitfield on its own unless it is wrapped in a dummy struct.

I think if we add this version of bitfields to mainline D, then it should only be as a C interoperability feature. If we want to add a bitfield equivalent to D, let’s make it better than a bitfield in every possible way: let’s make it an aggregate type. I’ll call it ‘bitwise’ as an example:
```
bitwise Flavour: 4{
	bool: 1 sweet, savoury, bitter;
}

bitwise Col16: ushort{
	uint: 5 red;
	uint: 6 green;
	uint: 5 blue;
}
```
Here it is slightly modified with some comments so you can understand what’s going on:
```d
bitwise Flavour: 4{ //size is 4 bytes. Without specifying this, the type would be 1 byte because its contents only take 1 byte
	bool: 1 sweet; //1 bit that is interpreted as bool
	//default values for fields, and listing multiple comma-separated fields:
	bool: 1 savoury = true, bitter;
}

bitwise Col16: ushort{ //You can implicitly cast this type to a ushort, so it should be 2 bytes at most
	uint: 5 red; //5 bits that are interpreted as uint
	uint: 6 green;
	uint: 5 blue;
}
```
Because it’s a type it can be easily referenced, passed to functions without a dummy struct, we can have template pattern matching for them, they can be re-used across structs, can be given constructors & operator overloads (e.g. for custom floats), and can have different ways of handling overflow:
```d
bitwise Example{
	ubyte: 1 a;
	//assigning 10 to a: 10 & a.max
	//(where a.max is 1 in this case)
	clamped byte: 2 b;
	//assigning 10 to b: clamp(10, signExtend!(b.bitwidth)(b.min), b.max);
	//(where b.min/max would be -2/1 in this case)
}
```
But what about C interoperability? Okay, add `extern(C)` to an anonymous bitwise and it’ll become a C-interoperable bitfield:
```d
struct S{
	extern(C) bitwise: uint{
		bool: 1 isnothrow, isnogc, isproperty, isref, isreturn, isscope, isreturninferred, Isscopeinferred, inference, islive, incomplete, inoutParam, inoutQual;
		uint: 5 a;
		uint: 3 flags;
	}
}
```
I think this approach gives us much more room to make this a useful & versatile feature that can be expanded to meet various needs and fit various use-cases.
Jul 24
parent reply Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 24 July 2024 at 08:57:40 UTC, IchorDev wrote:
 On Sunday, 5 May 2024 at 23:08:29 UTC, Walter Bright wrote:
 https://github.com/WalterBright/documents/blob/2ec9c5966dccc423a2c4c736a6783d77c255403a/bitfields.md
 I’ve spent a long time observing the debate about bitfields in 
 D, so now it’s time for me to give my feedback.
 [...]
 I think this approach gives us much more room to make this a 
 useful & versatile feature that can be expanded to meet various 
 needs and fit various use-cases.
I like it. The only thing that’s odd to me is `int: 21 foo, bar`. It looks much more like `21` is in line with `foo` and `bar`, but it’s to be read as `int: 21` `foo` `bar`. We could agree to use no space, i.e. `int:21`, or use something other than `:`, e.g. `int{21}`. That looks much more like a type, and `int{21} foo, bar` looks much more like a list of variables declared to be of some type. Essentially, `int[n]` is a static array of `n` values of type `int`, whereas `int{n}` is a single `n`-bit signed value. As per `int`, `n` is 32 or lower. For `bool` only `bool{1}` would be possible (maybe also `bool{0}`, if we allow 0-length bitfields).

In general, `extern(D)` bitfields should be allowed to be optimized. An attribute like ` packed` could indicate that the layout be exactly as specified. And `extern(C)` can give C compatibility.
Jul 25
parent reply IchorDev <zxinsworld gmail.com> writes:
On Friday, 26 July 2024 at 00:08:12 UTC, Quirin Schroll wrote:
 I like it. The only thing that’s odd to me is `int: 21 foo, 
 bar`. It looks much more like `21` is in line with `foo` and 
 `bar`, but it’s to be read as `int: 21` `foo` `bar`. We could 
 agree to use no space, i.e. `int:21`, or use something other 
 than `:`, e.g. `int{21}`. That looks much more like a type and 
 `int{21} foo, bar` looks much more like a list of variables 
 declared to be of some type. Essentially, `int[n]` is a static  
 array of `n` values of type `int`, whereas `int{n}` is a single 
 `n`-bit signed value. As per `int`, `n` is 32 or lower. For 
 `bool` only `bool{1}` would be possible
Good idea about `int{21}`! The `int: 21` syntax was a bit of a cop-out because I couldn’t think of anything better.
 (maybe also `bool{0}`, if we allow 0-length bitfields.
No harm in doing so I suppose. Even better if they’re actually 0 bytes, heh.
 In general, `extern(D)` bitfields should be allowed to be 
 optimized.
If you mean field reordering, then that did *not* work with structs, so I’m inclined to view this idea with some healthy skepticism.

First off, we don’t want libraries where every change to a bitwise/bitfield is a breaking change. You might say that the obvious answer to this is to use the ‘don’t ruin my binary compatibility’ ` packed` attribute, but this is not how the other PoD aggregates work in D, so it might get largely overlooked.

Second, I think we should trust that devs are smart enough to lay their data out themselves. Field reordering should *at least* be opt-in, but I’d argue that it’s not even a necessary feature in the first place. It’s something I can’t see myself ever wanting.
Jul 29
parent reply Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Monday, 29 July 2024 at 07:53:09 UTC, IchorDev wrote:
 On Friday, 26 July 2024 at 00:08:12 UTC, Quirin Schroll wrote:
 I like it. The only thing that’s odd to me is `int: 21 foo, 
 bar`. It looks much more like `21` is in line with `foo` and 
 `bar`, but it’s to be read as `int: 21` `foo` `bar`. We could 
 agree to use no space, i.e. `int:21`, or use something other 
 than `:`, e.g. `int{21}`. That looks much more like a type and 
 `int{21} foo, bar` looks much more like a list of variables 
 declared to be of some type. Essentially, `int[n]` is a static 
  array of `n` values of type `int`, whereas `int{n}` is a 
 single `n`-bit signed value. As per `int`, `n` is 32 or lower. 
 For `bool` only `bool{1}` would be possible
Good idea about `int{21}`! The `int: 21` syntax was a bit of a cop-out because I couldn’t think of anything better.
 (maybe also `bool{0}`, if we allow 0-length bitfields.
No harm in doing so I suppose. Even better if they’re actually 0 bytes, heh.
 In general, `extern(D)` bitfields should be allowed to be 
 optimized.
If you mean field reordering, then that did *not* work with structs, so I’m inclined to view this idea with some healthy skepticism.
I don’t; I rather meant specification-level optimization, i.e. D should find its own layout which would be fully specified and need not be compatible with C. In that regard, e.g. if you had `ubyte{7} a, b` (or `ubyte{7}[2]`), the language could require the bits to be laid out as:
```
AAAA AAAB BBBB BB00
```
or
```
AAAA AAA0 BBBB BBB0
```
(`A`: bit part of `a`; `B`: bit part of `b`; `0`: padding)

The specification could say: “A block of bit fields occupies exactly the sum number of bits rounded up to the next multiple of 8. The difference in those numbers is called *padding bits.* If after a bit field a byte boundary is not reached and the next bit field would overstep the next byte boundary and enough padding bits are left to align the next bit field with that byte boundary, exactly that many padding bits are inserted here. Otherwise, bit fields are laid out directly next to each other. Remaining padding bits are inserted at the end of a bit field block.” This is not a suggestion, but a demonstration how a spec can optimize; layout would be predictable.
Jul 29
parent IchorDev <zxinsworld gmail.com> writes:
On Monday, 29 July 2024 at 13:26:29 UTC, Quirin Schroll wrote:
 I don’t; I rather meant specification-level optimization, i.e. 
 D should find its own layout which would be fully specified and 
 need not be compatible with C. In that regard, e.g. if you had 
 `ubyte{7} a, b` (or `ubyte{7}[2]`), the language could require 
 the bits to be laid out as:
 ```
 AAAA AAAB BBBB BB00
 ```
 or
 ```
 AAAA AAA0 BBBB BBB0
 ```
 (`A`: bit part of `a`; `B`: bit part of `b`; `0`: padding)

 The specification could say: “A block of bit fields occupies 
 exactly the sum number of bits rounded up to the next multiple 
 of 8. The difference in those numbers is called *padding bits.* 
 If after a bit field a byte boundary is not reached and the 
 next bit field would overstep the next byte boundary and enough 
 padding bits are left to align the next bit field with that 
 byte boundary, exactly that many padding bits are inserted 
 here. Otherwise, bit fields are laid out directly next to each 
 other. Remaining padding bits are inserted at the end of a bit 
 field block.” This is not a suggestion, but a demonstration how 
 a spec can optimize; layout would be predictable.
Ah, thanks for explaining! I always find padding rules to be confusing, but as long as there’s a way to disable padding (w/ `align(1)`?) I don’t mind. Also about reserving a new keyword, I was thinking something with an underscore like `bits_struct` is unlikely to break any existing code, since snake case is not conventionally used for D identifiers.
Jul 29
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Added asked-for introspection:

https://github.com/dlang/dmd/pull/17043
https://github.com/dlang/dlang.org/pull/3921
Oct 31