
D - [RT] Runtime Reflection and Compiled Languages

reply Berin Loritsch <bloritsch d-haven.org> writes:
For the uninitiated, RT means Random Thought.  It is merely intended to spark
discussion, and to determine what level of support an idea or concept would
have--if someone stepped up to the plate.

The basic misconception is that in order for runtime reflection to work, you
need a virtual machine.  Java and the CLR based languages both take
this approach with slightly different semantics.  Java directly interprets
the byte code, and once it has done some "hot spot" monitoring, it then
compiles the hot paths to native code.  The CLR based languages load
the byte code once and recompile it for the local machine--using the recompiled
library from then on.  Another key difference is that Java embeds the metadata
in each class file, while the CLR based languages embed
all the metadata in the library.

The key reasons for doing these things are to make cross-platform operation
a bit easier--and in the case of CLR based languages, to have one runtime
optimized for whatever Windows OS is running on the client.  However, neither
of these is a goal for D, and playing the devil's advocate I would say
that they should not be goals for D.

However, there are some things we can learn and leverage for D programs.  The
first is that while we have identified the need for a common D binary code
format (i.e. the name mangling is done identically on all platforms), we have
not looked at a common D library format.  Why would one even be necessary when
there are dynamically loadable libraries on Linux, Windows, etc.?

The key reason would be the purpose of the library.  To enable pluggable
functionality, or to share the same library across Linux and Windows (provided
the architecture was similar), we need a form of runtime reflection.  That
runtime reflection would be enabled by metadata in the library.  I don't want
to ruin the ability to use system libraries, but this is a tool that can work
for many things.

The types of metadata that would have to be included in a "dlib" (as opposed to
a DLL or SO library) would include the type information that is meant to be
public, or externally callable.  That type information includes the constructor
signatures, the methods with their signatures, and any other information that
might be necessary.

Besides the obvious metadata, we could use the format to enable user specified
attributes--which will help in other special purpose applications.  For example,
tying a D interface to an implementation of OpenGL for a particular platform
would require finding the instance of the implementation objects that fit
the platform.
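
To make that concrete, here is a purely hypothetical sketch of the lookup.
None of these types exist anywhere; they just show the kind of query that
user specified attributes in a dlib would make possible:

// Hypothetical: one record per user specified attribute, and one
// metadata record per public class in a dlib.
struct Attribute {
    char[] key;      // e.g. "platform"
    char[] value;    // e.g. "win32"
}

struct ClassMeta {
    char[] name;             // fully qualified class name
    char[] implements;       // interface the class exposes
    Attribute[] attributes;  // user specified attributes
}

// Pick the implementation that exposes the requested interface and
// carries a "platform" attribute matching the current platform.
ClassMeta* findImplementation(ClassMeta[] registry,
                              char[] iface, char[] platform) {
    foreach (ref meta; registry) {
        if (meta.implements != iface)
            continue;
        foreach (attr; meta.attributes) {
            if (attr.key == "platform" && attr.value == platform)
                return &meta;
        }
    }
    return null;
}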

Other features would include embedding the platform for which the "dlib" was
designed.  That information is important when you consider the possibility
of distributing precompiled binaries.

Yet another benefit of the concept is the ability to compile against a dlib
without requiring the source code.  As long as the only metadata exposed in a
library is for the public interface, all the private code is kept secret and
no one has to worry about the external source code paths.

Whadaya think?
Dec 01 2003
next sibling parent reply "Achilleas Margaritis" <axilmar b-online.gr> writes:
Runtime reflection is no big deal to implement. Just have the object's
vtable point to a list of method and field entries.
For example:

enum ACCESS {
    PUBLIC,
    PROTECTED,
    PRIVATE
}

struct TYPE {
    int id;
    char[] name;        // D's string type is just char[]
}

struct ARGUMENT {
    char[] name;
    TYPE* type;
}

struct METHOD {
    char[] name;
    ARGUMENT*[] arguments;   // one entry per parameter
    TYPE* result;
    void* proc;              // pointer to the compiled method body
    ACCESS access;
}

struct FIELD {
    TYPE* type;
    ACCESS access;
}

struct VTABLE {
    METHOD*[] methods;
    FIELD*[] fields;
}

Then, one could write:

Object.fields[0].type.name, to get the name of the type of the first field.

etc.
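
Or, to find a method by name and call it through the stored pointer.  This
assumes the compiler has filled the tables in, and the no-argument case is
only to keep the sketch short:

// Look a method up by name in the table above.
METHOD* findMethod(VTABLE* vt, char[] name) {
    foreach (m; vt.methods) {
        if (m.name == name)
            return m;
    }
    return null;
}

void callNoArg(Object obj, METHOD* m) {
    // Assumes a method taking nothing but the hidden 'this' and
    // returning void; a real implementation would check m.arguments
    // and m.result before casting.
    auto fp = cast(void function(Object)) m.proc;
    fp(obj);
}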

Having a VM is unrelated.

"Berin Loritsch" <bloritsch d-haven.org> wrote in message
news:bqg1jh$1mvv$1 digitaldaemon.com...
 For the uninitiated, RT means Random Thought.  It is merely intended to spark
 discussion, and to determine what level of support an idea or concept would
 have--if someone stepped up to the plate.

 The basic misconception is that in order for runtime reflection to work, you
 need a virtual machine.  Java and the CLR based languages both take this
 approach with slightly different semantics.  Java directly interprets the
 byte code, and once it has done some "hot spot" monitoring, it then compiles
 the hot paths to native code.  The CLR based languages load the byte code
 once and recompile it for the local machine--using the recompiled library
 from then on.  Another key difference is that Java embeds the metadata in
 each class file, while the CLR based languages embed all the metadata in the
 library.

 The key reasons for doing these things are to make cross-platform operation
 a bit easier--and in the case of CLR based languages, to have one runtime
 optimized for whatever Windows OS is running on the client.  However, neither
 of these is a goal for D, and playing the devil's advocate I would say that
 they should not be goals for D.

 However, there are some things we can learn and leverage for D programs.  The
 first is that while we have identified the need for a common D binary code
 format (i.e. the name mangling is done identically on all platforms), we have
 not looked at a common D library format.  Why would one even be necessary when
 there are dynamically loadable libraries on Linux, Windows, etc.?

 The key reason would be the purpose of the library.  To enable pluggable
 functionality, or to share the same library across Linux and Windows (provided
 the architecture was similar), we need a form of runtime reflection.  That
 runtime reflection would be enabled by metadata in the library.  I don't want
 to ruin the ability to use system libraries, but this is a tool that can work
 for many things.

 The types of metadata that would have to be included in a "dlib" (as opposed
 to a DLL or SO library) would include the type information that is meant to
 be public, or externally callable.  That type information includes the
 constructor signatures, the methods with their signatures, and any other
 information that might be necessary.

 Besides the obvious metadata, we could use the format to enable user specified
 attributes--which will help in other special purpose applications.  For
 example, tying a D interface to an implementation of OpenGL for a particular
 platform would require finding the instance of the implementation objects
 that fit the platform.

 Other features would include embedding the platform for which the "dlib" was
 designed.  That information is important when you consider the possibility
 of distributing precompiled binaries.

 Yet another benefit of the concept is the ability to compile against a dlib
 without requiring the source code.  As long as the only metadata exposed in a
 library is for the public interface, all the private code is kept secret and
 no one has to worry about the external source code paths.

 Whadaya think?
Dec 01 2003
parent Berin Loritsch <bloritsch d-haven.org> writes:
Although, I would like something fairly flexible that would allow user-defined
attributes to be embedded.

Achilleas Margaritis wrote:

 Runtime reflection is no big deal to implement. Just have the object's
 vtable point to a list of method and field entries.
 For example:
 
 enum ACCESS {
     PUBLIC,
     PROTECTED,
     PRIVATE
 }

 struct TYPE {
     int id;
     char[] name;        // D's string type is just char[]
 }

 struct ARGUMENT {
     char[] name;
     TYPE* type;
 }

 struct METHOD {
     char[] name;
     ARGUMENT*[] arguments;   // one entry per parameter
     TYPE* result;
     void* proc;              // pointer to the compiled method body
     ACCESS access;
 }

 struct FIELD {
     TYPE* type;
     ACCESS access;
 }

 struct VTABLE {
     METHOD*[] methods;
     FIELD*[] fields;
 }
 
 Then, one could write:
 
 Object.fields[0].type.name, to get the name of the type of the first field.
 
 etc.
 
 Having a VM is unrelated.
 
Dec 01 2003
prev sibling parent reply Ilya Minkov <Ilya_member pathlink.com> writes:
I'm pretty much sure reflection is a wrong name for the feature, since it
doesn't let you modify/create code. It's just a kind of introspection.

Walter promised to implement it, if there is enough use. So, the only thing we
are waiting for is examples of possible/plausible usage. So far I'm not really
convinced this "feature" is good for anything. But probably I'm missing
something.

We already have a counterpart for fields, and it seems to me that this is
enough; at least it gives us a serialisation capability.
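
Something along those lines is all serialisation really needs: walk the
fields and write them out. A sketch--the field-walking primitives used here
(tupleof, __traits) just stand in for whatever field metadata the compiler
exposes, and the Point class is only an example:

import std.stdio;

class Point {
    int x = 3;
    int y = 4;
}

// Dump every field of an object as "name = value" pairs.
void dumpFields(T)(T obj) {
    foreach (i, value; obj.tupleof) {
        writefln("%s = %s", __traits(identifier, T.tupleof[i]), value);
    }
}

void main() {
    dumpFields(new Point);   // prints "x = 3" and "y = 4"
}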

So, give me examples and I'll try to counter. ;)

BTW, Delphi supports run-time introspection of fields and properties - and a
property is actually a couple of methods (functions). Nonetheless, no one
speaks of "reflection" in Delphi.

-eye
Dec 02 2003
parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Ilya Minkov wrote:

 I'm pretty much sure reflection is a wrong name for the feature, since it
 doesn't let you modify/create code. It's just a kind of introspection.
?! Reflection is all about "introspection"; however, its name is there because
it allows other classes to examine the class.  At that point it ceases to be
"introspection" and becomes something else--introspection is about
self-examination.
 
 Walter promised to implement it, if there is enough use. So, the only thing we
 are waiting for is examples of possible/plausible usage. So far I'm not really
 convinced this "feature" is good for anything. But probably I'm missing
 something.
Ready?

Component Oriented Programming (COP) is built on the concepts of traditional
OOP, but it restricts the toolset a little bit.  Why would anyone want to do
that, I hear you ask.  By shifting some responsibility to an entity called a
container, we can separate the concern areas of developing software.  Certain
tasks such as component instantiation and sharing are managed by the
container, along with other tasks like instrumentation or managing a pool of
components.  The approach is very powerful, and it does require a certain
level of introspection.

The components follow some simple rules:

1) They have a standard mechanism for moving through their lifecycle
   (initialization, active use, and destruction).
2) They *can* have more complex lifecycles (though that is not necessary).
3) All components are accessed through an interface.
4) There can be many types of components that implement a particular
   interface, and it is up to the container to pick the one it will provide
   to a client component.

At the very least, the container needs to know if a component implements a
work interface (point 3), so that it can provide the correct type to the
implementation.  Next, the container needs the ability to create an instance
of the component without directly knowing its type.  A particular pattern
that I have used well is a generic Factory and Deployment Manager for any
particular component.  The following example is in Java:

class ComponentFactory {
    private final Class m_class;

    public ComponentFactory(Class clazz) {
        m_class = clazz;
    }

    public Object create() throws Exception {
        // simple case: no argument constructor
        Object component = m_class.newInstance();
        // do more initialization stuff....
        return component;
    }

    public void destroy(Object obj) {
        ContainerUtil.dispose(obj);
    }
}

The part I did not show here is the code for the container to determine which
component classes can satisfy any particular requirement.  The basic reason is
that it is very complex.  The decision process may include user defined meta
info to help make its decision.  In Java, we have to hack that support in by
adapting the JavaDoc tool to generate the adhoc metadata; in the CLR based
languages it is built in.

One thing that makes dynamic component mapping easier is the ability to
collect all the classes that are components (usually marked in some way), and
to sort them based on the interfaces (the services) that they support.

Lastly, we need to be able to manage component dependencies.  For example, a
component can depend on another component, and it is the container's job to
supply that component.  With dynamic resolution, we need more than just the
interface name on which we depend; with static resolution (i.e. from a
configuration file) the interface name is sufficient.  Different containers
can map information differently.  For example, the Avalon containers (which I
work on, http://avalon.apache.org) will use metadata and a lookup service to
perform the mapping.  Other containers will either examine the methods for
ones that ask for a component's interface, or examine the constructor for the
arguments to be passed in.  All of these things require different types of
metainfo.

The basic stuff required to make this a reality is the ability to see if a
class or object instance implements an interface, what the method signatures
are (at least the public ones), what the constructor signatures are, and
lastly the ability to create an instance of a class with a no argument
constructor or any supplied constructor.
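
For what it is worth, here is a rough sketch of what the instantiation half
could look like in D.  It assumes the runtime grows a small lookup
primitive--call it Object.factory(classname)--that finds a class by its fully
qualified name and invokes its no argument constructor.  Nothing like that
exists today, and all of the names below are made up for the example:

module plugin;

import std.stdio;

interface Service {
    void run();
}

class HelloService : Service {
    void run() { writefln("hello from HelloService"); }
}

void main() {
    // The container only knows the class name as a string, not the type.
    Object o = Object.factory("plugin.HelloService");

    // Hand the instance out through its work interface.
    if (auto s = cast(Service) o)
        s.run();
    else
        writefln("could not create plugin.HelloService");
}

The interesting part is that main never names HelloService as a type; only
the metadata does.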

Reflection also allows for certain other fairly cool features.

The JavaBean spec is designed around reflection, so that you can automatically
build an API based on the public setters and getters.

True separation from a particular library, to protect from certain types of
licensing issues.  You shouldn't have to link against a particular library to
use something inside of it.

Dynamic proxies (for D it would have to be language level).  In Java the
dynamic proxy uses reflection to invoke the methods on another object, while
the dynamic object that is exposed implements only the interfaces you tell it
to.  This is a very nice security feature, as well as a way to introduce
interceptors.  In D I would prefer that the generated class would be
equivalent to an interface that binds delegates to the implementation (most
likely much faster).

Of course, we need to walk before we can run, so things like proxies and
interceptors would be farther down the line.
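
To make that last idea concrete, here is a hand-written sketch of the kind of
proxy I have in mind.  All of the types are invented for the example; the
point is that the compiler (or a reflection facility) would generate this
boilerplate for any interface:

import std.stdio;

interface Greeter {
    void greet(string name);
}

class RealGreeter : Greeter {
    void greet(string name) { writefln("hello, %s", name); }
}

// The proxy satisfies the interface, but every call goes through a
// delegate, so an interceptor can be slipped in front of the real object.
class GreeterProxy : Greeter {
    private void delegate(string) greetImpl;

    this(Greeter target) {
        // Bind the delegate to the target's method, wrapped by a
        // trivial "interceptor" that logs the call first.
        greetImpl = (string name) {
            writefln("[intercepted] greet(%s)", name);
            target.greet(name);
        };
    }

    void greet(string name) { greetImpl(name); }
}

void main() {
    Greeter g = new GreeterProxy(new RealGreeter);
    g.greet("world");
}

An interceptor just wraps the delegate before it is stored, and the caller
still sees nothing but the Greeter interface.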
Dec 02 2003
parent "Jeroen van Bemmel" <someone somewhere.com> writes:
 I'm pretty much sure reflection is a wrong name for the feature, since it
 doesn't let you modify/create code. It's just a kind of introspection.

 ?! Reflection is all about "introspection"; however, its name is there
 because it allows other classes to examine the class.  At that point it
 ceases to be "introspection" and becomes something else--introspection is
 about self-examination.
"Reflection" in this context means representation at run-time. The design structure (classes etc) as available in source files is 'reflected', i.e. can be examined, at run-time. My question is if there is something fundamentally different about reflection, are there things impossible to do without it? In terms of implementation, I think a crude form of reflection can already be provided at low cost. It should be a compiler option (just like RTTI is for C++ compilers), or possibly a per-class property ( "implements Reflection" comes to mind as a flag to the compiler ) One of the big advantages of language built-in reflection, is that you can take someone elses code and/or libraries and do whatever you want to reflect. I am pretty sure most use cases for reflection (for example, interceptors) could also be implemented in other ("proprietary") ways, but the fact that reflection happens automatically makes the difference. Of course, the latter means that it should then not be a compiler option for maximum reuse...
Dec 02 2003