
digitalmars.D.learn - Compilation memory use

Anonymouse <zorael gmail.com> writes:
TL;DR: Is there a way to tell what module or other section of a 
codebase is eating memory when compiling?

I'm keeping track of compilation memory use with zsh's `time` builtin and 
some environment variables. It typically looks like this (the `TIMEFMT` 
behind the report is sketched after the output).

```
$ time dub build -c dev
Performing "debug" build using /usr/bin/dmd for x86_64.
[...]
Linking...
To force a rebuild of up-to-date targets, run again with --force.
dub build -c dev   9.47s  user 1.53s system 105% cpu 10.438 total
avg shared (code):         0 KB
avg unshared (data/stack): 0 KB
total (sum):               0 KB
max memory:                4533 MB
page faults from disk:     1
other page faults:         1237356
```
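
For reference, that report layout comes from a `TIMEFMT` roughly like this 
(a sketch; the format specifiers are the ones documented in zshparam(1)):

```
# zsh: make the `time` report include memory and page-fault statistics
TIMEFMT='%J   %U  user %S system %P cpu %*E total'$'\n'\
'avg shared (code):         %X KB'$'\n'\
'avg unshared (data/stack): %D KB'$'\n'\
'total (sum):               %K KB'$'\n'\
'max memory:                %M MB'$'\n'\
'page faults from disk:     %F'$'\n'\
'other page faults:         %R'
```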

So it tells me the maximum memory that was required to compile it all. 
However, that's all it tells me; there's no way to know which parts of 
the code are expensive and which aren't.

I can copy the dmd command dub runs, re-run it with `-v`, and try to 
infer that the modules that are slow to pass semantic3 are also the 
memory-hungry ones. But are they?
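
Concretely, that looks something like this (a sketch; `$COPIED_FLAGS` 
stands for the long command line dub prints, and `ts` is from moreutils):

```
# one way to see the exact dmd command line dub uses
$ dub build -c dev --force --verbose
# re-run it with -v and timestamp each line; slow semantic3 lines stand out
$ /usr/bin/dmd $COPIED_FLAGS -v 2>&1 | ts -i '%.s'
```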

Is there a better metric?
May 04 2020
Stefan Koch <uplink.coder googlemail.com> writes:
On Monday, 4 May 2020 at 17:00:21 UTC, Anonymouse wrote:
 TL;DR: Is there a way to tell what module or other section of a 
 codebase is eating memory when compiling?

 [...]
I do have a custom dmd build with tracing functionality, but the profiles 
are not very user-friendly and woefully under-documented.

https://github.com/UplinkCoder/dmd/tree/tracing_dmd

You can use the source of the file `src/printTraceHeader.d` to see how the 
profile is written, and by extension how to read it. The actual trace file 
format is in `src/dmd/trace_file.di`. You have to pass the 
`-trace=$yourfilename` switch when compiling.

I am happy to assist with interpreting the results, though for big 
projects it's usually too much of a mess to really figure out.
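
Roughly, from memory (untested; the build steps and output path assume a 
standard dmd checkout on Linux):

```
# build the branch the same way as regular dmd
$ git clone -b tracing_dmd https://github.com/UplinkCoder/dmd.git
$ cd dmd && make -f posix.mak -j$(nproc)
# compile with tracing enabled; -trace= names the output profile
$ ./generated/linux/release/64/dmd -trace=build.trace -c yourmodule.d
```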
May 04 2020
Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Monday, 4 May 2020 at 17:00:21 UTC, Anonymouse wrote:
 TL;DR: Is there a way to tell what module or other section of a 
 codebase is eating memory when compiling?

 [...]
Maybe with Valgrind's massif tool?
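
Something like this, perhaps (untested sketch; the dmd flags here are 
placeholders for the real command line dub runs):

```
# record the compiler's heap profile over time with massif
$ valgrind --tool=massif --massif-out-file=dmd.massif \
      /usr/bin/dmd -c source/app.d -Isource
# print the recorded snapshots; the peak shows where the memory goes
$ ms_print dmd.massif
```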
May 05 2020