digitalmars.D.learn - int | missing | absent
- Antonio (13/13) Jun 02 2022 JSON properties can be
- bauss (38/51) Jun 02 2022 null and absent should be treated the same in the code, it's only
- Antonio (65/90) Jun 21 2022 The main problem is when you need to use DTO struct that
- Steven Schveighoffer (18/80) Jun 21 2022 There are 3 situations:
- Jesse Phillips (12/16) Jun 23 2022 I do a lot of reading JSON data in C#, and I heavily lean on
- Steven Schveighoffer (18/37) Jun 23 2022 Well, my json parser is slightly different -- you don't need to
- bauss (8/25) Jun 23 2022 I'm in a similar boat as you, except for that I read a lot of big
- Antonio (24/29) Jun 27 2022 May be for your case Steve.
- Steven Schveighoffer (22/51) Jun 27 2022 I see what you are saying. What needs to happen is first, you need a
JSON properties can be

- a value
- null
- absent

What's the standard way to define serializable/deserializable structs supporting properties of any of these 4 kinds?

* int
* int | null
* int | absent
* int | null | absent

What's the best library for managing these JSON requirements (all 4 cases)?

Thanks
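For reference, a minimal sketch with std.json (no third-party library) of how the three states of a property can be told apart when reading; the key name `age` is just for illustration:

```d
import std.json : parseJSON, JSONType;
import std.stdio : writeln;

void main()
{
    foreach (text; [`{"age":18}`, `{"age":null}`, `{}`])
    {
        auto obj = parseJSON(text);

        if (auto p = "age" in obj)           // key present?
        {
            if (p.type == JSONType.null_)
                writeln("age: null");        // present, but explicitly null
            else
                writeln("age: ", p.integer); // present with a value
        }
        else
        {
            writeln("age: absent");          // key missing entirely
        }
    }
}
```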
Jun 02 2022
On Thursday, 2 June 2022 at 08:27:32 UTC, Antonio wrote:

> JSON properties can be
>
> - a value
> - null
> - absent
>
> [...]

null and absent should be treated the same in the code; it's only when serializing that you should define one or the other. If you need to have null values AND absent values, then attributing accordingly is the solution.

Which means deserialization only has value/null, and serialization has value/null by default but can opt in to also have absent.

One common mistake I've seen with parsers in D is that fields are often opt-out instead of opt-in, which means you always have to declare all fields in a JSON object, but that's a bad implementation. All fields should be optional and only required when attributed.

An example:

```
struct A
{
    int x;
}
```

Should be able to be deserialized from this JSON:

```
{"x":100,"y":200}
```

However, a lot of parsers in D do not support that. Instead you must declare the y member as well, like:

```
struct A
{
    int x;
    int y;
}
```

Any decent parser should not have that problem.

If a field is required, then it should be determined by an attribute like:

```
struct A
{
    @JsonRequired int x;
    @JsonRequired int y;
}
```

If that attribute isn't present, then the field can be absent during deserialization.

Sorry I got a little off-track, but I felt like pointing these things out was important as well.
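For illustration, here is a hand-rolled sketch (std.json plus compile-time reflection, not any existing serializer) of the opt-in behaviour described above: unknown JSON keys are ignored, struct fields are optional by default, and a hypothetical `@JsonRequired` marker makes a field mandatory. The names `fromJson` and `JsonRequired` are invented for the sketch.

```d
import std.exception : enforce;
import std.json : JSONValue, parseJSON;
import std.traits : FieldNameTuple, hasUDA;

enum JsonRequired; // hypothetical marker UDA, not from an existing library

T fromJson(T)(JSONValue j)
{
    T result;
    static foreach (name; FieldNameTuple!T)
    {
        if (auto p = name in j)        // field present in the JSON: fill it
        {
            // conversion kept deliberately naive for the sketch
            static if (is(typeof(__traits(getMember, result, name)) == int))
                __traits(getMember, result, name) = cast(int) p.integer;
            else static if (is(typeof(__traits(getMember, result, name)) == string))
                __traits(getMember, result, name) = p.str;
        }
        else                           // absent: only an error if marked required
        {
            enforce(!hasUDA!(__traits(getMember, T, name), JsonRequired),
                    "missing required field: " ~ name);
        }
    }
    return result;
}

struct A
{
    @JsonRequired int x;
    int y;
}

void main()
{
    auto a = fromJson!A(parseJSON(`{"x":100,"z":300}`)); // z is ignored, y stays at .init
    assert(a.x == 100 && a.y == 0);
}
```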
Jun 02 2022
On Thursday, 2 June 2022 at 13:24:08 UTC, bauss wrote:

> On Thursday, 2 June 2022 at 08:27:32 UTC, Antonio wrote:
>
>> JSON properties can be - a value - null - absent [...]
>
> null and absent should be treated the same in the code; it's only when serializing that you should define one or the other. If you need to have null values AND absent values, then attributing accordingly is the solution.

The main problem is when you need to use a DTO struct that "patches" data (not all the data), and absent vs null discrimination is really mandatory.

A good approximation could be using SumType (what TypeScript or Scala call union types...), an incredible example of D template power that could be used in JSON serialization/deserialization without the need for custom property attributes.

Here is an example of how to define DTOs discriminating absent (undefined in JavaScript) from null:

```d
import std.sumtype : SumType, match;
import std.datetime.date : Date;

void main()
{
    struct Undefined {}
    struct Null {}

    struct PersonPatchDTO
    {
        SumType!(long) id;
        SumType!(Undefined, string) name;
        SumType!(Undefined, string, string[]) surname;
        SumType!(Undefined, Null, Date) birthday;
        SumType!(Undefined, Null, long) partner_id;
    }

    auto patchPerson(PersonPatchDTO patch)
    {
        import std.stdio : writeln;
        writeln("Patching person in database ", patch);
    }

    // This should come from a JSON deserialization;
    PersonPatchDTO patch;
    patch.id = 12334;
    patch.partner_id = Null();
    patchPerson(patch);
}
```

Or the typical upsert operation some people love to do:

```d
void main()
{
    ...
    struct PersonUpsertDTO
    {
        SumType!(Undefined, long) id;
        SumType!(Undefined, string) name;
        SumType!(Undefined, string, string[]) surname;
        SumType!(Undefined, Null, Date) birthday;
        SumType!(Undefined, Null, long) partner_id;
    }

    auto upsertPerson(PersonUpsertDTO patch)
    {
        import std.stdio : writeln;
        patch.id.match!(
            (long l) => writeln("Updating person with id ", l),
            (_) => writeln("Creating a new person")
        );
    }
    ...
}
```

D has no native "union types" support, but SumType is a nice substitute (with some overhead in the generated code).

**Problems?**

- It is not the "standard" way expected by D JSON serializers/deserializers. It requires a custom one.
- Maybe it's hard to inspect with a debugger (I haven't tried yet).
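To make the "requires a custom serializer" point concrete, here is a small hand-rolled sketch with std.json (not vibe.d or any existing serializer) that writes such a tri-state field: Undefined is skipped entirely, Null becomes a JSON null, and a plain value is written as-is. The `put` helper and the field names are invented for the sketch.

```d
import std.json : JSONValue, parseJSON;
import std.stdio : writeln;
import std.sumtype : SumType, match;

struct Undefined {}
struct Null {}

void main()
{
    SumType!(Undefined, Null, long) partner_id = Null();
    SumType!(Undefined, Null, long) age; // SumType defaults to its first member: Undefined

    JSONValue obj = parseJSON("{}");

    // Write one tri-state field into the JSON object: skip it, emit null, or emit the value.
    void put(T)(string name, SumType!(Undefined, Null, T) field)
    {
        field.match!(
            (Undefined _) {},                               // absent: emit nothing
            (Null _)      { obj[name] = JSONValue(null); }, // explicit null
            (T value)     { obj[name] = JSONValue(value); } // plain value
        );
    }

    put("partner_id", partner_id);
    put("age", age);

    writeln(obj.toString()); // {"partner_id":null}
}
```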
Jun 21 2022
On 6/2/22 9:24 AM, bauss wrote:

> One common mistake I've seen with parsers in D is that fields are often opt-out instead of opt-in, which means you always have to declare all fields in a JSON object, but that's a bad implementation. All fields should be optional and only required when attributed.
>
> [...]

There are 3 situations:

1. field in json and struct. Obvious result.
2. field in json but not in struct.
3. field in struct but not in json.

In jsoniopipe, I handle 2 by requiring a UDA on the struct to ignore the members: https://github.com/schveiguy/jsoniopipe/blob/4fa5350ed97786e34612a755f7e857544c6f9512/source/iopipe/json/serialize.d#L50-L55

Or, you can provide a `JSONValue` member, which will contain all unexpected members, that you can attribute with `@extras`: https://github.com/schveiguy/jsoniopipe/blob/4fa5350ed97786e34612a755f7e857544c6f9512/source/iopipe/json/serialize.d#L38-L43

I handle 3 by requiring a UDA on the field: https://github.com/schveiguy/jsoniopipe/blob/4fa5350ed97786e34612a755f7e857544c6f9512/source/iopipe/json/serialize.d#L26-L30

Otherwise it's an error.

I feel it's too loose to make a best effort and leave the rest up to initial values, or to just ignore possibly important information during parsing.

-Steve
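This is not jsoniopipe's actual implementation, just a sketch of the catch-all idea with std.json and a hypothetical `@Extras` marker: any key that doesn't match a struct field lands in the attributed `JSONValue` member instead of being dropped. The names `Extras` and `fromJsonKeepingExtras` are invented for the sketch.

```d
import std.json : JSONValue, parseJSON;
import std.traits : FieldNameTuple, hasUDA;

enum Extras; // hypothetical marker UDA, standing in for the real library attribute

T fromJsonKeepingExtras(T)(JSONValue j)
{
    T result;
    foreach (key, value; j.object)
    {
        bool known = false;
        static foreach (name; FieldNameTuple!T)
        {
            if (!known && key == name)
            {
                known = true;
                // conversion kept deliberately naive for the sketch
                static if (is(typeof(__traits(getMember, result, name)) == int))
                    __traits(getMember, result, name) = cast(int) value.integer;
            }
        }
        if (!known)
        {
            // unexpected member: route it into the catch-all field, if any
            static foreach (name; FieldNameTuple!T)
                static if (hasUDA!(__traits(getMember, T, name), Extras))
                    __traits(getMember, result, name)[key] = value;
        }
    }
    return result;
}

struct A
{
    int x;
    @Extras JSONValue leftovers;
}

void main()
{
    auto a = fromJsonKeepingExtras!A(parseJSON(`{"x":1,"y":2}`));
    assert(a.x == 1 && a.leftovers["y"].integer == 2);
}
```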
Jun 21 2022
On Wednesday, 22 June 2022 at 01:09:22 UTC, Steven Schveighoffer wrote:

> There are 3 situations:
>
> 1. field in json and struct. Obvious result.
> 2. field in json but not in struct.
> 3. field in struct but not in json.

I do a lot of reading JSON data in C#, and I heavily lean on optional over required. The reason optional is so beneficial is that I'm looking to pull out specific data points from the JSON; I have no use for, nor care about, any other field. If I had to specify every field being provided, every time something changes, the JSON parser would be completely unusable for me.

I do like the @extras, assuming it allows reserializing the entire JSON object. But many times that data just isn't needed and I'd like my type to trim it.
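As a small illustration of that "pull out only the data points you need" style, using std.json (document shape and key names invented for the example):

```d
import std.json : parseJSON;
import std.stdio : writeln;

void main()
{
    // Imagine this came from a much larger API response.
    auto doc = parseJSON(`{"build":{"id":42,"status":"ok"},"lots":"of","other":"noise"}`);

    // Only these two data points matter; every other field is never looked at.
    auto id     = doc["build"]["id"].integer;
    auto status = doc["build"]["status"].str;

    writeln(id, " ", status); // 42 ok
}
```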
Jun 23 2022
On 6/23/22 11:20 AM, Jesse Phillips wrote:

> The reason optional is so beneficial is that I'm looking to pull out specific data points from the JSON; I have no use for, nor care about, any other field. If I had to specify every field being provided, every time something changes, the JSON parser would be completely unusable for me.

Well, my json parser is slightly different -- you don't need to serialize anything. The parser provides mechanisms to jump to some specific node in the json tree, and then you can deal with the data at that point. It also provides a way to parse ahead and then rewind to a previous spot. All possible because iopipe has an expandable buffer. This is how I serialize classes where the class type is specified in an internal field.

The point of my philosophy on the UDA system is that you should have to opt in to discrepancies in the data. Yes, it can be a pain, but I'd rather it be explicit.

> I do like the @extras, assuming it allows reserializing the entire JSON object. But many times that data just isn't needed and I'd like my type to trim it.

I could probably add a specialized type that just throws everything away, and tag that as @extras. Like:

```d
@extras Trash _;
```

Or something similar, to make it easier.

-Steve
Jun 23 2022
On Thursday, 23 June 2022 at 15:20:02 UTC, Jesse Phillips wrote:

> The reason optional is so beneficial is that I'm looking to pull out specific data points from the JSON; I have no use for, nor care about, any other field. If I had to specify every field being provided, every time something changes, the JSON parser would be completely unusable for me.
>
> [...]

I'm in a similar boat as you, except that I read a lot of big JSON files and I absolutely cannot read everything in the JSON and hold it in memory, so I must be selective in what I read from the JSON files, since they're read on a server and are several GB.

I would be wasting a lot of RAM by having every field in the JSON file stored in memory. RAM is expensive, disk space is not.
Jun 23 2022
On Wednesday, 22 June 2022 at 01:09:22 UTC, Steven Schveighoffer wrote:

> I feel it's too loose to make a best effort and leave the rest up to initial values, or to just ignore possibly important information during parsing.
>
> -Steve

Maybe for your case, Steve.

I need to represent, in a "typed" way, complex structures where some properties can be "undefined" (not present in the JSON) and where null is a valid value (and not the same as an "undefined" one)... basically, the algebraic type Undefined | Null | T.

It is inefficient in memory terms (because D offers only "structs", not the TypeScript object equivalent where properties can natively be absent as part of the type definition).

But it is, in my opinion, a needed feature: what if you want to "patch" an entity, or query an entity requiring only some of the properties?... I don't want to build results using Json objects... D is a typed language and results should be built in a rich typed structure (like {id:"1324123", name:"peter", age:18} or {id:"1234123", age:18}) **validated by the compiler**.

When implementing rich REST services (with some complexity), native undefined management is a must (and not the same as null management).

I am currently using vibe-d's Json, and the way it manages `@optional` forces me to write custom serialization for all entities (to properly manage undefined vs null)... it is annoying!!!
Jun 27 2022
On 6/27/22 9:03 AM, Antonio wrote:

> Maybe for your case, Steve.
>
> I need to represent, in a "typed" way, complex structures where some properties can be "undefined" (not present in the JSON) and where null is a valid value (and not the same as an "undefined" one)... basically, the algebraic type Undefined | Null | T.

I see what you are saying. What needs to happen is, first, you need a type wrapper that does this, which defaults to undefined. Then mark it optional so it's OK if it doesn't appear. Then only if the field is not present will it be marked as undefined. It may even be useful to make the type wrapper itself always optional, rather than having to mark it optional.

> It is inefficient in memory terms (because D offers only "structs", not the TypeScript object equivalent where properties can natively be absent as part of the type definition).

Well, in D you need to reserve a place to hold it if present, or provide a bucket to put it in (i.e. a JSON type).

> But it is, in my opinion, a needed feature: what if you want to "patch" an entity, or query an entity requiring only some of the properties?... I don't want to build results using Json objects... D is a typed language and results should be built in a rich typed structure (like {id:"1324123", name:"peter", age:18} or {id:"1234123", age:18}) **validated by the compiler**.

This is, in fact, the JSON algebraic type, which is present in most JSON libraries.

> When implementing rich REST services (with some complexity), native undefined management is a must (and not the same as null management).

Yes, `null` in the stream is different than missing. But again, most people would rather deal with concrete types than a dynamic JSON type. The jsoniopipe library does not really have a type for this, but it would be a useful addition.

> I am currently using vibe-d's Json, and the way it manages `@optional` forces me to write custom serialization for all entities (to properly manage undefined vs null)... it is annoying!!!

Yeah, I can see that you would have to make a custom serialization option. Though doing this via a type is possible, without having to write custom serialization for all entities.

Maybe you can provide an example, and there may be a solution that you haven't thought of.

-Steve
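A minimal, library-agnostic sketch of the kind of type wrapper described here: a tri-state type that defaults to undefined, can be set to an explicit null or to a value, and can be queried afterwards. The names `Tristate` and `PersonPatch` are invented; nothing here comes from vibe.d or jsoniopipe, and wiring such a type into a real serializer (skip undefined on output, distinguish a missing key from an explicit null on input) is exactly the open question in the rest of the thread.

```d
import std.stdio : writeln;

/// Tri-state wrapper: a field is either undefined (absent), an explicit null, or holds a T.
struct Tristate(T)
{
    private enum State { undefined_, null_, value }
    private State state = State.undefined_; // defaults to undefined, as described above
    private T payload;

    static Tristate undefined() { return Tristate.init; }
    static Tristate asNull()    { Tristate t; t.state = State.null_; return t; }

    this(T value) { state = State.value; payload = value; }
    void opAssign(T value) { state = State.value; payload = value; }

    bool isUndefined() const { return state == State.undefined_; }
    bool isNull()      const { return state == State.null_; }
    bool hasValue()    const { return state == State.value; }

    ref inout(T) get() inout
    {
        assert(state == State.value, "no value present");
        return payload;
    }
}

struct PersonPatch
{
    Tristate!long   id;
    Tristate!string name;
    Tristate!long   partner_id;
}

void main()
{
    PersonPatch p;
    p.id = 42;                             // a concrete value
    p.partner_id = Tristate!long.asNull(); // explicit null: "clear this field"
    // p.name is left untouched: undefined, i.e. "don't patch it"

    writeln(p.id.hasValue, " ", p.partner_id.isNull, " ", p.name.isUndefined); // true true true
}
```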
Jun 27 2022
On Monday, 27 June 2022 at 23:05:46 UTC, Steven Schveighoffer wrote:

> ... Maybe you can provide an example, and there may be a solution that you haven't thought of.
>
> -Steve

I first posted this "issue" to vibe-d: https://github.com/vibe-d/vibe.d/issues/2673
Jun 27 2022
On Monday, 27 June 2022 at 23:05:46 UTC, Steven Schveighoffer wrote:

> I see what you are saying. What needs to happen is, first, you need a type wrapper that does this, which defaults to undefined. Then mark it optional so it's OK if it doesn't appear. Then only if the field is not present will it be marked as undefined. It may even be useful to make the type wrapper itself always optional, rather than having to mark it optional.

Exactly.

This issue/example in vibe-d is about this solution, and about the annoying change of behavior in how "null" is treated when the optional attribute is present: https://github.com/vibe-d/vibe.d/issues/2673

The code is a "simplification" of something more complex (special wrappers for **Null | T**, **Undefined | T** and **Undefined | Null | T**, with some functional stuff for match! and null-safe access). Well, trying to: it's really complex and I'm not experienced enough.

I tried to base my solution on SumType, but I didn't know how to add the required fromRepresentation/toRepresentation methods for custom serialization/deserialization...
Jun 27 2022