digitalmars.D - "BareBones" VersionCondition identifier for druntime
- Kevin Lamonte (53/53) Oct 17 2014 Hi all,
- Walter Bright (11/14) Oct 18 2014 It's a good idea, but having a bunch of versions quickly devolves to an
- Paulo Pinto (14/29) Oct 19 2014 In the Microsoft Office talk at CppCon about writing portable code,
- Kevin Lamonte (10/22) Oct 19 2014 I'm starting to see how that works now. For example, even though
- Walter Bright (3/6) Oct 19 2014 An easier way is use the paths to have the same import statement import ...
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (37/43) Oct 19 2014 Sounds to me like the "nogc" check should be generalized so that
Hi all,

Following up on the very fun work of Adam Ruppe, Mike (JinShil)'s Cortex M howto, XomB, and of course the "D Bare Bones" tutorial on wiki.osdev.org, I have started a brand-new kernel in D over at:

https://github.com/klamonte/cycle

At the moment I am running on i386 (qemu), but aiming for arch independence. I started on GDC but am using DMD now. The current kernel image does little more than allocate/free memory and emit strings to VGA memory, BUT it can use D-language templates, structs, enums, and (some) classes, all in less than 50k overhead. For the runtime, I am using a very minimal 2.065 druntime in the "kdruntime" directory (k is for kernel, not my name :) ).

I think that kdruntime is (seriously) less than 100 functions away from being capable of supporting almost all of the functionality described in the D Language Reference, except for: Garbage Collection, Vector Extensions, and Interfacing to C++. I further think that the deviations from druntime are in total not that large, and that at some future date druntime could be de-coupled from libc/posix/windows (if one is willing to re-implement threads/synchronized).

*** I am not asking anyone else to do any new work or change course on anything right now. Rather, I would like to ask the druntime devs if there is interest in defining a VersionCondition identifier, what I am currently calling "BareBones". ***

"BareBones" loosely means "druntime without posix or windows". Other words that could be used to capture similar ideas:

* "Affirmative" definitions: BareMetal, Baseline -- a short list of "these features/functions are the absolute minimum required to support D2 syntax, after which one could build the rest".

* "Negative" definitions: NoLibC, NoThreads, NoGC -- "everything currently in druntime minus these things".

If we had a word like BareBones now, then later on people working on kernels (or embedded, or games, or ...) could start offering pull requests to druntime to make the job of new mini-runtimes much easier. druntime itself would NOT be expected to compile or link with BareBones set. Most of druntime could be protected from invasive changes with just static asserts in modules like core.stdc.*, to prevent accidental inclusion in BareBones systems. Other spots that handle platform-specific stuff (like sections.d importing sections_linux.d) could import the not-yet-existing sections_barebones.d.

New runtimes could be born simply by cloning druntime, setting BareBones, removing all the files that fail the static asserts, and then implementing all the functions that aren't found during link. In the near-to-mid term, very little would change for druntime. In the longer term, one or more BareBones-version runtimes could emerge to put D in new places.

What do y'all think? Would you be comfortable with saying to people implementing new runtimes, "please version your differences from druntime in this particular way"?
Oct 17 2014
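The static-assert protection Kevin proposes for modules like core.stdc.* could look something like this; a minimal sketch only, since no "BareBones" identifier exists in druntime today:

```d
// Hypothetical guard at the top of a libc-dependent druntime module
// (e.g. core/stdc/stdio.d): fail at compile time, not link time, if a
// BareBones build accidentally pulls it in.
module core.stdc.stdio;

version (BareBones)
    static assert(0, "core.stdc.stdio is unavailable in BareBones builds");
```

Because the check fires during semantic analysis, the offending import chain is reported immediately instead of surfacing later as a missing libc symbol at link time.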
On 10/17/2014 12:04 PM, Kevin Lamonte wrote:
> What do y'all think? Would you be comfortable with saying to people implementing new runtimes, "please version your differences from druntime in this particular way"?

It's a good idea, but having a bunch of versions quickly devolves into an unmaintainable mess, in my experience. For one issue, when one adds a new piece of code, which versions apply in what ways? Once the number of versions exceeds a certain level, I've never seen it done right.

A better solution is to have modules that "plug in" or not. The gc is designed this way. dmd's source code is also (largely) done this way. Stuff that would normally be #ifdef'd is instead abstracted away behind an interface. My experience with such techniques is that they work well, are relatively problem free, and are much easier on the eyes.
Oct 18 2014
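The plug-in style Walter describes can be sketched in D roughly as follows; all names here are invented for illustration and are not druntime APIs:

```d
// Platform-specific behavior hidden behind one interface; each port
// supplies its own implementation and no call site needs version().
interface ConsoleBackend
{
    void putString(const(char)[] s);
}

// Host-side implementation, handy for testing; a kernel build would
// plug in a VGA-text-memory-backed class instead.
class BufferConsole : ConsoleBackend
{
    char[] buf;
    void putString(const(char)[] s) { buf ~= s; }
}

void main()
{
    auto console = new BufferConsole;
    ConsoleBackend backend = console;  // the rest of the code sees only this
    backend.putString("hello, ");
    backend.putString("kernel");
    assert(console.buf == "hello, kernel");
}
```

Swapping targets then means compiling in a different implementation class, not sprinkling version blocks through every caller.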
On 19.10.2014 at 07:34, Walter Bright wrote:
> On 10/17/2014 12:04 PM, Kevin Lamonte wrote:
>> What do y'all think? Would you be comfortable with saying to people implementing new runtimes, "please version your differences from druntime in this particular way"?
> It's a good idea, but having a bunch of versions quickly devolves to an unmaintainable mess, in my experience. For one issue, when one adds a new piece of code, which versions apply in what ways? Once the number of versions exceeds a certain level, I've never seen it done right. A better solution is to have modules that "plug in" or not. The gc is designed this way. dmd's source code is also (largely) done this way. Stuff that would normally be #ifdef'd is instead abstracted away to an interface. My experience with such techniques is they work well, are relatively problem free, and are much easier on the eyes.

In the Microsoft Office talk at CppCon about writing portable code, they state that "#ifdef OS" is forbidden in the Office team, exactly because of this. It starts out in just one or two places, and eventually grows into a mess of pre-processor flow where no one can understand what the compiler is actually seeing. They went through a lot of pain when writing Word 6.0.

In C and C++, my approach is to have a common header file and implementation, with implementation-specific code named implementation_os.cpp.

Incidentally, this is how the Go toolchain handles OS- and CPU-specific code: Go source files are named name_os_architecture.go.

-- Paulo
Oct 19 2014
On Sunday, 19 October 2014 at 05:35:39 UTC, Walter Bright wrote:
> It's a good idea, but having a bunch of versions quickly devolves to an unmaintainable mess, in my experience. For one issue, when one adds a new piece of code, which versions apply in what ways? Once the number of versions exceeds a certain level, I've never seen it done right. A better solution is to have modules that "plug in" or not. The gc is designed this way. dmd's source code is also (largely) done this way. Stuff that would normally be #ifdef'd is instead abstracted away to an interface. My experience with such techniques is they work well, are relatively problem free, and are much easier on the eyes.

I'm starting to see how that works now. For example, even though I am implementing gc_malloc and friends as separate functions for now (which forces me to understand what they do), it's looking like I might be able to get to the same point with a proxy in the garbage collector (that never collects).

But how would this kind of modular design enable avoiding importing core.stdc.* at link time? Maybe the more general form of the question: can one avoid an import at compile and link time without "static if()" or "version()"?
Oct 19 2014
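The "proxy that never collects" Kevin mentions can be approximated with a bump allocator; this is a sketch under the assumption of a fixed arena, not the real gc_malloc signature:

```d
// The simplest allocator that can back gc_malloc-style requests:
// bump a pointer through a fixed arena and never free anything.
__gshared ubyte[1024 * 1024] arena;  // hypothetical 1 MiB heap
__gshared size_t arenaTop;

void* bumpAlloc(size_t size)
{
    // Round the current top up to 16-byte alignment.
    immutable start = (arenaTop + 15) & ~cast(size_t) 15;
    if (start + size > arena.length)
        return null;                 // arena exhausted
    arenaTop = start + size;
    return &arena[start];
}

void main()
{
    auto p = bumpAlloc(40);
    auto q = bumpAlloc(40);
    assert(p !is null && q !is null);
    assert(p !is q);                 // distinct, non-overlapping blocks
}
```

A never-collecting proxy built on this keeps allocation-heavy D features working in a kernel before any real collector exists, at the cost of a bounded heap.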
On 10/19/2014 3:46 PM, Kevin Lamonte wrote:
> But how would this kind of modular design enable avoiding importing core.stdc.* at link time? Maybe the more general form of the question: can one avoid an import at compile and link time without "static if()" or "version()"?

An easier way is to use the paths to have the same import statement import a different file.
Oct 19 2014
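Concretely, Walter's path trick means keeping two files that declare the same module name and selecting one with the compiler's -I flag; the directory names here are hypothetical:

```d
// ports/linux/core/sections.d and ports/barebones/core/sections.d both
// begin with the same module declaration:
module core.sections;

void initSections()
{
    // platform-specific startup for this port
}

// Importing code just writes `import core.sections;` and the build
// chooses which implementation is compiled in, with no version() blocks:
//
//   dmd -Iports/linux     app.d   // hosted build
//   dmd -Iports/barebones app.d   // kernel build
```

The unused port's file is never parsed, so its dependencies (libc, posix, etc.) never enter the compile or the link.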
On Friday, 17 October 2014 at 19:04:59 UTC, Kevin Lamonte wrote:
> * "Affirmative" definitions: BareMetal, Baseline -- a short list of "these features/functions are the absolute minimum required to support D2 syntax, after which one could build the rest".
> * "Negative" definitions: NoLibC, NoThreads, NoGC -- "everything currently in druntime minus these things".

Sounds to me like the "nogc" check should be generalized so that libraries state what features they use or assume, per function or per module. I think the language should enforce it, so that at the module level you state what features you use, and then turn off features at a class/struct/function level. The runtime should provide a meta-level description of what it supports and also optionally contain version blocks that strip it down.

This gives the following process for whole-program compilation:

1. Compiler loads a manifest file for the project with additional configuration information and statistical information for function calls, memory allocation dynamics etc.

2. Compiler searches through the user code and records what features are being used (as it should do for nogc) and also additional probabilistic information. Compiler checks that the requirements from the manifest are satisfied.

3. Compiler compiles the runtime with the meta-information from (1) and (2), and the runtime CTFE can do whatever it wants with it.

4. Compiler compiles the rest of the program and can inline "reduced" runtime parts where they make sense.

It makes a lot of sense to provide non-application code with extra information that can be used to tune the code, and you are quite right that you need to define a set of meaningful descriptors. I don't think "BareBones" would be the best solution. I think libraries/runtime could deduce that from more fine-grained information, and you could have CTFE libraries that compute those values for you.
I also don't think this should be specific just to the runtime; it is relevant for all reusable code that is written independently from the application. It might also be nice to have a way to generate C header files with the same meta information so that you can configure C code through the same means. And in the long run, perhaps even modify an existing C compiler to provide the D compiler with information about the C code too (like escape analysis).
Oct 19 2014
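D's existing @nogc attribute is already a small, per-function instance of the feature enforcement Ola sketches: the compiler statically rejects GC allocations inside such functions (the manifest machinery itself remains hypothetical):

```d
// @nogc makes "this code never allocates on the GC heap" a checked
// guarantee rather than a convention.
@nogc int sum(const(int)[] a)
{
    int total;
    foreach (x; a)
        total += x;
    return total;
    // Something like `a ~ a` here would not compile: ~ allocates.
}

void main()
{
    static immutable data = [1, 2, 3];
    assert(sum(data) == 6);
}
```

Generalizing this to a declared feature set (no libc, no threads, no TLS, ...) is the leap the proposal above asks for.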