
digitalmars.D - [GSOC] Database API draft proposal

reply Christian Manning <cmanning999 gmail.com> writes:
Hello all,

This is my first draft proposal for a Database API for Google Summer Of 
Code. I have never written a document such as this so any and all 
feedback is welcome.

Thanks
---------------------------------

Synopsis
--------
An API for databases is a common component of many languages' standard 
libraries, but Phobos currently lacks one. This project will remedy 
that by providing such an API and will also begin to utilise it with 
interfaces for some Database Management Systems (DBMSs). I believe this 
will benefit the D community greatly and will help bring attention and 
developers to the language.

Details
-------
This project takes influence from the Java Database Connectivity API 
(JDBC), the Python Database API v2 and other similar interfaces. The 
idea is that any database interface created for use with D will follow 
the API so that the only thing to change is the database back-end being 
used. This will make working with databases in D a much easier experience.

I plan to have several interfaces in a database module which are then 
implemented for specific DBMSs.
For example:

module database;

interface Connection {
     //method definitions for connecting to databases go here.
}

Then in an implementation of MySQL for example:

module mysql;

import database;

class Connect : Connection {
     //implement defined methods tailoring to MySQL.
}
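
For illustration only (none of the method names below are decided yet; 
query(), close() and the constructor arguments are placeholders, not 
part of any agreed API), client code written against the generic 
interface should not need to care which back-end is plugged in:

import std.stdio;
import database;   // the generic interfaces
import mysql;      // one concrete back-end

// Works with any back-end that implements Connection.
void listNames(Connection conn)
{
     foreach (row; conn.query("SELECT name FROM person"))
          writeln(row[0]);
}

void main()
{
     // Only this line mentions the concrete back-end.
     Connection conn = new Connect("localhost", "user", "password");
     scope(exit) conn.close();
     listNames(conn);
}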

What goes into these interfaces will be decided in conjunction with the 
D community so that there is minimal conflict and it benefits as many 
use cases as possible. I believe this to be the best route to take, as 
I cannot speak for everyone who will be using this.

Using this API, I plan to create an example implementation, initially 
wrapping the MySQL C API. This will be a good starting point for the 
project, and more implementations can be created, time permitting.

About Me
--------
My name is Christian Manning and I am a second year undergraduate 
studying Computer Science at De Montfort University.
I've become interested in D over time after reading about it several 
years ago. I got myself "The D Programming Language" and went from 
there. Although I've not yet done anything substantial in D, since I've 
mainly learnt C and Java and am unable to use D for my university 
projects, I think I'm capable of achieving the goals of this project.
Apr 02 2011
next sibling parent reply spir <denis.spir gmail.com> writes:
On 04/02/2011 10:03 PM, Christian Manning wrote:

 I plan to have several interfaces in a database module which are then
 implemented for specific DBMSs.
 For example:

 module database;

 interface Connection {
 //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL for example:

 module mysql;

 import database;

 class Connect : Connection {
 //implement defined methods tailoring to MySQL.
 }
I would recommend using slightly longer names for generic interfaces, eg "IConnection" or "DBConnection". Then, authors of libraries / implementations for a specific DBMS like MySQL can use the shorter ones, eg "Connection", which will be all that library clients see and use. This also avoids the need for "lexical hacks" like "Connection" versus "Connect". What do you think?
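
Just to illustrate the naming, a quick sketch (nothing more than that):

module database;

interface DBConnection {
     // generic method definitions
}

module mysql;

import database;

// library clients only ever see "Connection"
class Connection : DBConnection {
     // MySQL-specific implementation
}
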
 What goes in to these interfaces will be decided in conjunction with the D
 community so that there is minimal conflict and it will benefit as many
 circumstances as possible. I believe this to be the best route to take as I
 cannot speak for everyone who will be using this.

 Using the API created I plan to create an example implementation, initially
 wrapping around the MySQL C API. This will be a good starting point for this
 project and more can be created, time permitting.
I have no idea of the actual size of such an interface design, but I doubt it can keep you busy for 3 months full time, especially since there are (probably good) precedents in other languages. Maybe the example implementation should be specified as part of the project? Denis -- _________________ vita es estrany spir.wikidot.com
Apr 03 2011
parent reply Christian Manning <cmanning999 gmail.com> writes:
On 03/04/2011 13:10, spir wrote:
 On 04/02/2011 10:03 PM, Christian Manning wrote:

 I plan to have several interfaces in a database module which are then
 implemented for specific DBMSs.
 For example:

 module database;

 interface Connection {
 //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL for example:

 module mysql;

 import database;

 class Connect : Connection {
 //implement defined methods tailoring to MySQL.
 }
I would recommend to use slightly longer names for generic interfaces, eg "IConnection" or "DBConnection". Then, authors of libraries / implementations for specific DBMS like MySQL can use the shorter ones, eg "Connection", which will be all what library clients will see and use. This also avoids the need for "lexical hacks" like "Connection" versus "Connect". What do you think?
When I was writing that, it really didn't sit well with me; "DBConnection" in particular is a much better way of doing it and reduces some of the confusion there.
 What goes in to these interfaces will be decided in conjunction with
 the D
 community so that there is minimal conflict and it will benefit as many
 circumstances as possible. I believe this to be the best route to take
 as I
 cannot speak for everyone who will be using this.

 Using the API created I plan to create an example implementation,
 initially
 wrapping around the MySQL C API. This will be a good starting point
 for this
 project and more can be created, time permitting.
I have no idea of the actual size of such an interface design, but I doubt it can make you busy for 3 months full time, especially since there are (probably good) precedents for other languages. Maybe the example implementation should be specified as part of the project?
I'm aware that it wouldn't take 3 months, but I don't know how long it will take to have the API agreed upon so that there's a general consensus. Another way I could do it is to decide on the API myself and begin implementing DBMSs with it and then adapt to the ideas brought forth by the community. Then, everyone's happy, just in a different time frame. Though, if there are a lot of changes wanted I'd have to change all of my implementations depending on how far I am at the time. What do you think about that path? Thanks for the feedback, it's much appreciated :) Chris
Apr 03 2011
next sibling parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
Well, the comments in there are what is important, and they will need to be 
specified better IMHO.

The most important part in my opinion is how one chooses to represent  
a record.
A big design choice is whether the various fields are defined at compile 
time or at runtime.
Also, how does one add special behavior to a record? Do you use a 
subclass of the generic record type (as Ruby does, for example)?

D2 adds some more methods to allow for generic accessors, so one can 
have a dynamic implementation while still using static accessors.
Maybe one should allow for both dynamic records and static ones.
The efficient storage of results of a db query is an important point.
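
A rough sketch of what I mean (only an illustration, and assuming the 
D2 feature in question is opDispatch):

import std.variant;

// A dynamic record whose fields are looked up at runtime, but accessed
// with the same syntax as static fields, thanks to opDispatch.
struct DynamicRecord
{
    Variant[string] fields;

    // row.name lowers to row.opDispatch!"name"()
    Variant opDispatch(string name)()
    {
        return fields[name];
    }
}

unittest
{
    DynamicRecord row;
    row.fields["name"] = Variant("Walter");
    assert(row.name.get!string == "Walter");
}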

Are you aware of http://dsource.org/projects/ddbi for D1?

If one wants to have a nice, efficient and well tested interface 
supporting more than one DB, then I think that there is enough work to 
do.

Fawzi
On 3-apr-11, at 14:33, Christian Manning wrote:

 On 03/04/2011 13:10, spir wrote:
 On 04/02/2011 10:03 PM, Christian Manning wrote:

 I plan to have several interfaces in a database module which are  
 then
 implemented for specific DBMSs.
 For example:

 module database;

 interface Connection {
 //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL for example:

 module mysql;

 import database;

 class Connect : Connection {
 //implement defined methods tailoring to MySQL.
 }
I would recommend to use slightly longer names for generic interfaces, eg "IConnection" or "DBConnection". Then, authors of libraries / implementations for specific DBMS like MySQL can use the shorter ones, eg "Connection", which will be all what library clients will see and use. This also avoids the need for "lexical hacks" like "Connection" versus "Connect". What do you think?
When I was writing that it really didn't sit well and "DBConnection" in particular is a much better way of doing it to reduce some confusion there.
 What goes in to these interfaces will be decided in conjunction with
 the D
 community so that there is minimal conflict and it will benefit as  
 many
 circumstances as possible. I believe this to be the best route to  
 take
 as I
 cannot speak for everyone who will be using this.

 Using the API created I plan to create an example implementation,
 initially
 wrapping around the MySQL C API. This will be a good starting point
 for this
 project and more can be created, time permitting.
I have no idea of the actual size of such an interface design, but I doubt it can make you busy for 3 months full time, especially since there are (probably good) precedents for other languages. Maybe the example implementation should be specified as part of the project?
I'm aware that it wouldn't take 3 months, but I don't know how long it will take to have the API agreed upon so that there's a general consensus. Another way I could do it is to decide on the API myself and begin implementing DBMSs with it and then adapt to the ideas brought forth by the community. Then, everyone's happy, just in a different time frame. Though, if there are a lot of changes wanted I'd have to change all of my implementations depending on how far I am at the time. What do you think about that path? Thanks for the feedback, it's much appreciated :) Chris
Apr 03 2011
parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Fawzi Mohamed wrote:
 Well the comments in there are what is important, and will need to be
 specified better IMHO.

 The most important part in my opinion is how one chooses to represent a
 record.
 A big design choice is if the various fields are defined at compile time
 or at runtime.
 Also how does one add special behavior to a record? Do you use a
 subclasses of the generic record type (as ruby does for example)?
I've been working on a DB API for a few months in my spare time; I'm delayed that much by my other projects. Please take a look at my ideas: http://github.com/pszturmaj/ddb

Documentation:
http://pszturmaj.github.com/ddb/db.html
http://pszturmaj.github.com/ddb/postgres.html

In my code, a row is represented using the struct DBRow!(Specs...). Fields may be known at compile time or not. Besides base types, DBRow may be instantiated using structs, tuples or arrays. An untyped row (no compile time information) is DBRow!(Variant[]). Typed rows are very useful; for example, you don't need to manually cast columns to your types, it's done automatically, e.g.:

auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type");
auto result = cmd.executeQuery!(string, "typName", int, "len");

foreach (row; result)
{
    // here, row (a DBRow) subtypes a Tuple!(string, "typName", int, "len")
    writeln(row.typName, ", ", row.len);
}

What do you think? :)
Apr 03 2011
parent reply Christian Manning <cmanning999 gmail.com> writes:
On 03/04/2011 14:42, Piotr Szturmaj wrote:
 Fawzi Mohamed wrote:
 Well the comments in there are what is important, and will need to be
 specified better IMHO.

 The most important part in my opinion is how one chooses to represent a
 record.
 A big design choice is if the various fields are defined at compile time
 or at runtime.
 Also how does one add special behavior to a record? Do you use a
 subclasses of the generic record type (as ruby does for example)?
I'm working on DB API for few months in my spare time. I'm delayed that much by my other projects. Please take a look at my ideas: http://github.com/pszturmaj/ddb Documentation: http://pszturmaj.github.com/ddb/db.html http://pszturmaj.github.com/ddb/postgres.html In my code, row is represented using struct DBRow!(Specs...). Fields may be known at compile time or not. DBRow besides base types, may be instantiated using structs, tuples or arrays. Untyped row (no compile time information) is DBRow!(Variant[]). Typed rows are very useful, for example you don't need to manually cast columns to your types, it's done automatically, e.g.: auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type"); auto result = cmd.executeQuery!(string, "typName", int, "len"); foreach (row; result) { // here, row DBRow subtypes // a Tuple!(string, "typName", int, "len") writeln(row.typName, ", ", row.len); } What do you think? :)
I was going to reply with a link to your work but you beat me to it. I think it's a great design and incorporating it or something similar into the API may be the way to go.
Apr 03 2011
next sibling parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 3-apr-11, at 15:59, Christian Manning wrote:

 On 03/04/2011 14:42, Piotr Szturmaj wrote:
 Fawzi Mohamed wrote:
 Well the comments in there are what is important, and will need to  
 be
 specified better IMHO.

 The most important part in my opinion is how one chooses to  
 represent a
 record.
 A big design choice is if the various fields are defined at  
 compile time
 or at runtime.
 Also how does one add special behavior to a record? Do you use a
 subclasses of the generic record type (as ruby does for example)?
I'm working on DB API for few months in my spare time. I'm delayed that much by my other projects. Please take a look at my ideas: http://github.com/pszturmaj/ddb Documentation: http://pszturmaj.github.com/ddb/db.html http://pszturmaj.github.com/ddb/postgres.html In my code, row is represented using struct DBRow!(Specs...). Fields may be known at compile time or not. DBRow besides base types, may be instantiated using structs, tuples or arrays. Untyped row (no compile time information) is DBRow!(Variant[]). Typed rows are very useful, for example you don't need to manually cast columns to your types, it's done automatically, e.g.: auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type"); auto result = cmd.executeQuery!(string, "typName", int, "len"); foreach (row; result) { // here, row DBRow subtypes // a Tuple!(string, "typName", int, "len") writeln(row.typName, ", ", row.len); } What do you think? :)
I was going to reply with a link to your work but you beat me to it. I think it's a great design and incorporating it or something similar into the API may be the way to go.
Indeed, ddb looks really nice (I hadn't looked at it yet); given that, though, I have to agree that just adding MySQL support is too little and not really innovative for 3 months of work... Fawzi
Apr 03 2011
prev sibling next sibling parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Christian Manning wrote:
 On 03/04/2011 14:42, Piotr Szturmaj wrote:
 Fawzi Mohamed wrote:
 Well the comments in there are what is important, and will need to be
 specified better IMHO.

 The most important part in my opinion is how one chooses to represent a
 record.
 A big design choice is if the various fields are defined at compile time
 or at runtime.
 Also how does one add special behavior to a record? Do you use a
 subclasses of the generic record type (as ruby does for example)?
I'm working on DB API for few months in my spare time. I'm delayed that much by my other projects. Please take a look at my ideas: http://github.com/pszturmaj/ddb Documentation: http://pszturmaj.github.com/ddb/db.html http://pszturmaj.github.com/ddb/postgres.html In my code, row is represented using struct DBRow!(Specs...). Fields may be known at compile time or not. DBRow besides base types, may be instantiated using structs, tuples or arrays. Untyped row (no compile time information) is DBRow!(Variant[]). Typed rows are very useful, for example you don't need to manually cast columns to your types, it's done automatically, e.g.: auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type"); auto result = cmd.executeQuery!(string, "typName", int, "len"); foreach (row; result) { // here, row DBRow subtypes // a Tuple!(string, "typName", int, "len") writeln(row.typName, ", ", row.len); } What do you think? :)
I was going to reply with a link to your work but you beat me to it. I think it's a great design and incorporating it or something similar into the API may be the way to go.
Thanks. At this time, you can write an interface for MySQL, SQLite or other relational databases using the same DBRow struct. The naming of course may be changed to DataRow, Row or something else, depending on the choice of the community.

In regard to base interfaces like IConnection or a (semi-)abstract class DBConnection, I think we should have a common API for all clients, but only to some extent. Many features available in some database servers are not available in others; for example, OIDs (object identifiers) are a fundamental thing in PostgreSQL, but they simply don't exist in MySQL. So, PGCommand would give you information on lastInsertedOID, while MySQLCommand would not. This is also proven in ADO.NET, where each client is rarely used through the common base interface, because it blocks many of its useful features.

I think the base interface should be defined only after some of the most popular RDBMS clients are finished. Also, the interface should be chosen to cover the most featured/advanced database client. This is why I started with PostgreSQL, as it's the most powerful open-source RDBMS. If the base interface covers it, it will also cover the less powerful RDBMSes.
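
For example (purely hypothetical code, just to show what I mean by 
backend-specific features; DBCommand is an imagined base interface and 
lastInsertedOID a PostgreSQL-only property):

// Code that needs a PostgreSQL-only feature drops down to the concrete
// command type instead of staying on the common base interface.
void insertRow(DBCommand cmd)
{
    cmd.executeNonQuery();
    if (auto pg = cast(PGCommand) cmd)
        writeln("last inserted OID: ", pg.lastInsertedOID);
}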
Apr 03 2011
parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
On 3-apr-11, at 16:52, Piotr Szturmaj wrote:

 [...]
 Thanks. At this time, you can write an interface for MySQL, SQLite  
 or other relational databases, using the same DBRow struct. Naming  
 of course may be changed to DataRow, Row or other, depending on the  
 choice of community.

 In regards of base interfaces like IConnection or (semi-)abstract  
 class DBConnection, I think we should have common API for all  
 clients, but only to some extent. There are many features available  
 in some database servers, while not available in others, for example  
 OIDs (object identifiers) are fundamental thing in PostgreSQL, but  
 they simply don't exist in MySQL. So, PGCommand would give you  
 information on lastInsertedOID, while MySQLCommand would not.
 This is also proven in ADO.NET, where each client is rarely used  
 with common base interface, because it blocks many of its useful  
 features.

 I think base interface should be defined only after some of the most  
 popular RDBMS clients are finished. Also interface should be choosen  
 to cover the most featured/advanced database client. This is why I  
 started with PostgreSQL, as its the most powerful open-source RDBMS.  
 If base interface will cover it, it will also cover some less  
 powerful RDBMSes.
I think that your project looks nice, but see some of the comments in my other message. I would, for example, consider separating the table definition from the row object; and while your row object is really nice, often one either has a single DB model, described in a few model files, or goes with a fully dynamic model. In large projects one does not/should not define RowTypes on the fly everywhere in the code. So I would try to improve the way one describes a table, or a full database. Your DBRow type is definitely nice, and is a good starting point, but there is definitely more work to do (not that you had said otherwise :). Fawzi
Apr 03 2011
parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Fawzi Mohamed wrote:
 I think that you project looks nice, but see some of the comments in my
 other message.
 I would for example consider separating table definition from row
 object, and while your row object is really nice, often one has either a
 single DB model, described in a few model files or goes with a fully
 dynamic model.
 In large project one does not/should not, define RowTypes on the fly
 everywhere in the code.
There's no need to declare all row types. DBRow supports both static and dynamic models. For dynamic rows, DBRow uses Variant[] as its underlying type. This is the previous sample code, but changed to use a dynamic row:

auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type");
auto result = cmd.executeQuery;

foreach (row; result)
{
    // here, row subtypes a Variant[]
    writeln(row[0], ", ", row[1]);
}

Btw. I've just updated the documentation, so you can take another look :)
Apr 03 2011
parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
On 3-apr-11, at 18:37, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 I think that you project looks nice, but see some of the comments  
 in my
 other message.
 I would for example consider separating table definition from row
 object, and while your row object is really nice, often one has  
 either a
 single DB model, described in a few model files or goes with a fully
 dynamic model.
 In large project one does not/should not, define RowTypes on the fly
 everywhere in the code.
There's no need to declare all row types. DBRow support both static and dynamic models. For dynamic rows, DBRow uses Variant[] as its underlying type. This is previous sample code, but changed to use dynamic row: auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type"); auto result = cmd.executeQuery; foreach (row; result) { // here, row subtypes a Variant[] writeln(row[0], ", ", row[1]); } Btw. I've just updated documentation, so you can take another look :)
Yes, I saw that; that is exactly the reason I was talking about splitting the table definition into another object, so that in the dynamic case one can also use the column names (which are normally known, or can be retrieved from the db schema). That would only add a pointer to each row (to its description), and would make it much nicer to use. Your DBRow is very nice to use, and I like how it can accommodate both types, but it degrades too much for dynamic types imho. Fawzi
Apr 03 2011
parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Fawzi Mohamed wrote:
 On 3-apr-11, at 18:37, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 I think that you project looks nice, but see some of the comments in my
 other message.
 I would for example consider separating table definition from row
 object, and while your row object is really nice, often one has either a
 single DB model, described in a few model files or goes with a fully
 dynamic model.
 In large project one does not/should not, define RowTypes on the fly
 everywhere in the code.
There's no need to declare all row types. DBRow support both static and dynamic models. For dynamic rows, DBRow uses Variant[] as its underlying type. This is previous sample code, but changed to use dynamic row: auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type"); auto result = cmd.executeQuery; foreach (row; result) { // here, row subtypes a Variant[] writeln(row[0], ", ", row[1]); } Btw. I've just updated documentation, so you can take another look :)
Yes I saw that, that is exactly the reason I was telling about splitting the table definition in another object, so that also in the dynamic case one can use the column names (that normally are known, or can be retrieved from the db schema). That would only add a pointer to each row (to its description), and would make it much nicer to use. Your DBRow is very nice to use, and I like how it can accommodate both types, but it degrades too much for dynamic types imho.
Ah, I see what you mean :) This is a yet-to-be-done feature :) I assume you mean something like row["typname"]. I will add support for this soon.
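
Probably something along these lines (a rough sketch only, not the 
actual ddb code):

import std.variant;

// Sketch: a dynamic row keeping a column-name -> index map next to its
// values (the map would be shared per result set in practice).
struct DynRow
{
    Variant[] values;
    size_t[string] columnIndex;

    Variant opIndex(size_t i) { return values[i]; }
    Variant opIndex(string column) { return values[columnIndex[column]]; }
}

unittest
{
    DynRow row;
    row.columnIndex["typname"] = 0;
    row.columnIndex["typlen"] = 1;
    row.values = [Variant("bool"), Variant(1)];
    assert(row["typname"].get!string == "bool");
    assert(row["typlen"].get!int == 1);
}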
Apr 03 2011
parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 3-apr-11, at 19:54, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 On 3-apr-11, at 18:37, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 I think that you project looks nice, but see some of the comments  
 in my
 other message.
 I would for example consider separating table definition from row
 object, and while your row object is really nice, often one has  
 either a
 single DB model, described in a few model files or goes with a  
 fully
 dynamic model.
 In large project one does not/should not, define RowTypes on the  
 fly
 everywhere in the code.
There's no need to declare all row types. DBRow support both static and dynamic models. For dynamic rows, DBRow uses Variant[] as its underlying type. This is previous sample code, but changed to use dynamic row: auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type"); auto result = cmd.executeQuery; foreach (row; result) { // here, row subtypes a Variant[] writeln(row[0], ", ", row[1]); } Btw. I've just updated documentation, so you can take another look :)
Yes I saw that, that is exactly the reason I was telling about splitting the table definition in another object, so that also in the dynamic case one can use the column names (that normally are known, or can be retrieved from the db schema). That would only add a pointer to each row (to its description), and would make it much nicer to use. Your DBRow is very nice to use, and I like how it can accommodate both types, but it degrades too much for dynamic types imho.
Ah, I see what you mean :) This is yet to be done feature :) I assume you mean something like row["typname"]. Soon, I will add support for this.
yes exactly, great
Apr 03 2011
prev sibling parent reply Fawzi Mohamed <fawzi gmx.ch> writes:


On 3-apr-11, at 16:44, Fawzi Mohamed wrote:

 On 3-apr-11, at 15:59, Christian Manning wrote:

 [...]
 I was going to reply with a link to your work but you beat me to it.
 I think it's a great design and incorporating it or something  
 similar into the API may be the way to go.
Indeed ddb looks really nice (I hadn't looked at it yet), given it though, I have to agree that just adding mySQL support is too little and not really innovative for 3 months work...
Looking more, maybe I was a bit too harsh; if you clearly define the goals of your API then yes, it might be a good project. The API doesn't have to be defined yet, but a more detailed definition of its goals should be there, maybe with code examples of some usages. Questions that should be answered:

* Support for static and dynamic types: the difference between access to dynamic and static types should be as small as possible, and definitely the access one uses for dynamic types should work without changes on static types.
* Class or struct for the row object?
* Support for table-specific classes?
* A reference to the description of the table (to be able to also get dynamic types by column name, while avoiding using too much memory for the structure).
* It would be nice to define the table structure, and what happens if the db has another structure.
* Do you want to support only access, or also db creation and modification?

I feel that these things should be addressed in a complete proposal, with possible answers that might be changed later on depending on how things actually go.

Fawzi
Apr 03 2011
parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Fawzi Mohamed wrote:
 Looking more maybe I was a bit too harsh, if you define clearly the
 goals of your API then yes it might be a good project.
 The api doesn't have to be defined yet, but a more detailed definition
 of its goals should be there, maybe with code example of some usages.
 Questions that should be answered:
I know your response isn't to me, but please let me answer these questions from my point of view, based on my recent work on ddb.
 * support for static and dynamic types.
 how access of dynamic and static types differs, should be as little as
 possible, and definitely the access one uses for dynamic types should
 work without changes on static types
If you mean a statically or dynamically typed data row, then I can say my DBRow supports both.
 * class or struct for row object
I'm using a struct, because I think a row received from the database is a value type rather than a reference. If one selects rows from one table then yes, it is possible to do some referencing based on the primary key, but I still think updates should be done explicitly, because the row could be deleted in the meantime. In more complex queries, not all of the selected rows are materialized, i.e. they may come from computed columns, view columns, aggregate functions and so on. Allocation overhead is also lower for structs.
 * support for table specific classes?
Table-specific classes may be written by the user and somehow wrap the underlying row type.
 * reference to description of the table (to be able to get also dynamic
 types by column name, but avoid using too much memory for the structure)
My PostgreSQL client already supports that. The PGCommand class has a member "fields", which contains information about the returned columns. You can even check which columns will be returned from a query before actually executing it.
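
Roughly like this (the exact properties of each field are in the 
documentation; "name" below is just an assumption for the sketch):

auto cmd = new PGCommand(conn, "SELECT typname, typlen FROM pg_type");

// column metadata is available before the query is executed
foreach (field; cmd.fields)
    writeln(field.name);
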
 * Nice to define table structure, and what happens if the db has another
 structure.
This is a problem for an ORM, but first we need a standard query API.
 * you want to support only access or also db creation and modification?
First, I'm preparing the base "traditional" API. Then I want to write a simple object-relational mapping. I've already written some code that generates CREATE TABLE statements for structs at compile time. Static typing of row fields is very helpful here.
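
In the spirit of this simplified sketch (not the actual code):

// Map each struct field to an SQL column at compile time.
string sqlType(T)()
{
    static if (is(T == int))         return "integer";
    else static if (is(T == long))   return "bigint";
    else static if (is(T == string)) return "text";
    else static assert(0, "unsupported field type " ~ T.stringof);
}

string createTable(T)(string tableName)
{
    string sql = "CREATE TABLE " ~ tableName ~ " (";
    foreach (i, memberName; __traits(allMembers, T))
    {
        if (i > 0) sql ~= ", ";
        sql ~= memberName ~ " "
             ~ sqlType!(typeof(__traits(getMember, T, memberName)))();
    }
    return sql ~ ")";
}

struct Person
{
    int id;
    string name;
}

// evaluated entirely at compile time
enum personTable = createTable!Person("person");
static assert(personTable == "CREATE TABLE person (id integer, name text)");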
Apr 03 2011
parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
On 3-apr-11, at 19:28, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 Looking more maybe I was a bit too harsh, if you define clearly the
 goals of your API then yes it might be a good project.
 The api doesn't have to be defined yet, but a more detailed  
 definition
 of its goals should be there, maybe with code example of some usages.
 Questions that should be answered:
I know your response is'nt to me, but please let me answer these questions from my point of view, based on my recent work on ddb.
I think that your responses are very relevant, as it seems to me that your work is nice, and I find that if a GSoC project is done in that direction it should definitely work together with the good work that is already done; let's not create multiple competing projects if people are willing to work together.
 * support for static and dynamic types.
 how access of dynamic and static types differs, should be as little  
 as
 possible, and definitely the access one uses for dynamic types should
 work without changes on static types
If you mean statically or dynamically typed data row then I can say my DBRow support both.
yes but as I said I find the support for dynamic data rows weak.
 * class or struct for row object
I'm using struct, because I think row received from database is a value type rather than reference. If one selects rows from one table then yes, it is possible to do some referencing based on primary key, but anyway I think updates should be done explicitly, because row could be deleted in the meantime. In more complex queries, not all of selected rows are materialized, i.e. they may be from computed columns, view columns, aggregate functions and so on. Allocation overhead is also lower for structs.
 * support for table specific classes?
Table specific classes may be written by user and somehow wrap underlying row type.
Well, with the current approach it is ugly, because your calls would be another type; thus either you remove all typing, or you can't have generic functions accepting rows, everything has to be a template, and when looping on a table or a row you always need a template.
 * reference to description of the table (to be able to get also  
 dynamic
 types by column name, but avoid using too much memory for the  
 structure)
My PostgreSQL client already supports that. Class PGCommand has member "fields", which contain information about returned columns. You can even check what columns will be returned from a query, before actually executing it.
ok that is nice, and my point is that the type that the user sees by default should automatically take advantage of that
 * Nice to define table structure, and what happens if the db has  
 another
 structure.
This is a problem for ORM, but at first, we need standard query API.
I am not so sure about this; yes, these (also classes for tables) are part of the ORM, but normal users will more often be at the ORM level, I think, and how exactly we want things to look at the object level can influence the choice of the best low-level interface.
 * you want to support only access or also db creation and  
 modification?
First, I'm preparing base "traditional" API. Then I want to write simple object-relational mapping. I've already written some code that generated CREATE TABLE for structs at compile time. Static typing of row fields is very helpful here.
Very good. I think that working on getting the API right there and having it nice to use is important. Maybe you are right and the current DBRow is indeed the best abstraction, but I am not yet 100% sure; to me it looks like it isn't the best end-user abstraction (but it might be an excellent low-level object).
Apr 03 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 03.04.2011 20:15, schrieb Fawzi Mohamed:
 On 3-apr-11, at 19:28, Piotr Szturmaj wrote:
 
 Fawzi Mohamed wrote:
 Looking more maybe I was a bit too harsh, if you define clearly the
 goals of your API then yes it might be a good project.
 The api doesn't have to be defined yet, but a more detailed definition
 of its goals should be there, maybe with code example of some usages.
 Questions that should be answered:
I know your response is'nt to me, but please let me answer these questions from my point of view, based on my recent work on ddb.
I think that your responses are very relevant, as it seems to me that your work is nice, and I find that if a GSoC is done in that direction it should definitely work together with the good work that is already done, let's don't create multiple competing projects if people are willing to work together.
 * support for static and dynamic types.
 how access of dynamic and static types differs, should be as little as
 possible, and definitely the access one uses for dynamic types should
 work without changes on static types
If you mean statically or dynamically typed data row then I can say my DBRow support both.
yes but as I said I find the support for dynamic data rows weak.
 * class or struct for row object
I'm using struct, because I think row received from database is a value type rather than reference. If one selects rows from one table then yes, it is possible to do some referencing based on primary key, but anyway I think updates should be done explicitly, because row could be deleted in the meantime. In more complex queries, not all of selected rows are materialized, i.e. they may be from computed columns, view columns, aggregate functions and so on. Allocation overhead is also lower for structs.
 * support for table specific classes?
Table specific classes may be written by user and somehow wrap underlying row type.
well with the current approach it is ugly because your calls would be another type, thus either you remove all typing or you can't have generic functions, accepting rows, everything has to be a template, looping on a table or a row you always need a template.
 * reference to description of the table (to be able to get also dynamic
 types by column name, but avoid using too much memory for the structure)
My PostgreSQL client already supports that. Class PGCommand has member "fields", which contain information about returned columns. You can even check what columns will be returned from a query, before actually executing it.
ok that is nice, and my point is that the type that the user sees by default should automatically take advantage of that
 * Nice to define table structure, and what happens if the db has another
 structure.
This is a problem for ORM, but at first, we need standard query API.
I am not so sure about this, yes these (also classes for tables) are part of the ORM, but the normal users will more often be at the ORM level I think, and how exactly we want the things look like that the object level can influence the choice of the best low level interface.
 * you want to support only access or also db creation and modification?
First, I'm preparing base "traditional" API. Then I want to write simple object-relational mapping. I've already written some code that generated CREATE TABLE for structs at compile time. Static typing of row fields is very helpful here.
Very good I think that working on getting the API right there and having it nice to use is important. Maybe you are right and the current DBRow is indeed the best abstraction, but I am not yet 100% sure, to me it looks like it isn't the best end user abstraction (but it might be an excellent low level object)
I'd hate not having a rows-and-tables view onto the database. An Object-Relational-Mapper is nice to have of course, but I agree with Piotr that a traditional view onto the DB is a good start to build an ORM on, and I think that the traditional view should also be available to the user (it'll be there internally anyway, at least for traditional relational databases).

Also: How are you gonna write queries with only the ORM view? Parse your own SQL-like syntax that uses the Object type? Or have the SQL operators as methods? And then generate the appropriate SQL string? What about differences in SQL syntax between different databases? What about tweaks that may be possible when you write the SQL yourself and don't have it generated by your ORM?

No, being able to write the SQL queries yourself and having a "low level" view (tables and rows, like it's saved in the DB) is quite important.

However: Since Piotr already seems to have much work done, maybe Christian Manning could polish Piotr's work (if necessary) and create an ORM on top of it?

Oh, and just an idea: Maybe something like LINQ is feasible for the ORM? So you could write a query that includes local containers/ranges, remote databases (=> part of it will internally be translated to SQL) and maybe even XML (but that could be added later once the std.xml replacement is ready)?

Cheers,
- Daniel
Apr 03 2011
parent Fawzi Mohamed <fawzi gmx.ch> writes:
On 3-apr-11, at 22:54, Daniel Gibson wrote:

 Am 03.04.2011 20:15, schrieb Fawzi Mohamed:
 On 3-apr-11, at 19:28, Piotr Szturmaj wrote:

 * Nice to define table structure, and what happens if the db has  
 another
 structure.
This is a problem for ORM, but at first, we need standard query API.
I am not so sure about this, yes these (also classes for tables) are part of the ORM, but the normal users will more often be at the ORM level I think, and how exactly we want the things look like that the object level can influence the choice of the best low level interface.
 * you want to support only access or also db creation and  
 modification?
First, I'm preparing base "traditional" API. Then I want to write simple object-relational mapping. I've already written some code that generated CREATE TABLE for structs at compile time. Static typing of row fields is very helpful here.
Very good I think that working on getting the API right there and having it nice to use is important. Maybe you are right and the current DBRow is indeed the best abstraction, but I am not yet 100% sure, to me it looks like it isn't the best end user abstraction (but it might be an excellent low level object)
I'd hate not having a rows-and-tables view onto the database. An Object-Relational-Mapper is nice to have of course, but I agree with Piotr that a traditional view onto the DB is a good start to built an ORM on and I think that the traditional view should also be available to the user (it'll be there internally anyway, at least for traditional relational databases).
I fully agree; I probably did not express myself clearly enough. A basic table view is a must, but the ORM that one wants to realize might influence how exactly the basic view looks. For example, it would be nice if a basic row would also somehow be the basic object of the ORM, with a dynamic description, automatically specialized if the db description is available at compile time. As I had said before, "the object level can influence the choice of the best low level interface"; this does not imply that a lower level interface is not needed :).
 Also: How are you gonna write queries with only the ORM view? Parse  
 your own
 SQL-like-syntax that uses the Object type? Or have the SQL operators  
 as methods?
 And then generate the apropriate SQL string?
 What about differences in SQL-syntax between different databases?
 What about tweaks that may be possible when you write the SQL  
 yourself and not
 have it generated from your ORM?

 No, being able to write the SQL-queries yourself and having a "low  
 level" view
 (tables and rows, like it's saved in the DB) is quite important.
Again I fully agree, but if we want to be able to store business logic in objects that come from the database, being able to express them easily (for example like Ruby does) can be very useful. At the ORM level one should express at most simple queries; for more complex stuff SQL is a must (there is no point in defining another DSL when SQL is already one), but having special methods for common queries can be useful to more easily support non-SQL dbs.
 However: Since Piotr already seems to have much work done, maybe  
 Christian
 Manning could polish Piotrs work (if necessary) and create a ORM on  
 top of it?
If accepted, I definitely think that Piotr and Christian will have to coordinate their work.
 Oh, and just an Idea: Maybe something like LINQ is feasible for ORM?  
 So you can
 write a query that includes local containers/ranges, remote  
 Databases (=> part
 of it will internally be translated to SQL) and maybe even XML (but  
 that could
 be added later once the std.xml replacement is ready)?
Well, simple queries; I'm not sure whether a full LINQ implementation is too much to ask, but simple queries should be feasible. Fawzi
Apr 03 2011
prev sibling parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Fawzi Mohamed wrote:
 On 3-apr-11, at 19:28, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 Looking more maybe I was a bit too harsh, if you define clearly the
 goals of your API then yes it might be a good project.
 The api doesn't have to be defined yet, but a more detailed definition
 of its goals should be there, maybe with code example of some usages.
 Questions that should be answered:
I know your response is'nt to me, but please let me answer these questions from my point of view, based on my recent work on ddb.
I think that your responses are very relevant, as it seems to me that your work is nice, and I find that if a GSoC is done in that direction it should definitely work together with the good work that is already done, let's don't create multiple competing projects if people are willing to work together.
I'm ready to cooperate :)
 * support for static and dynamic types.
 how access of dynamic and static types differs, should be as little as
 possible, and definitely the access one uses for dynamic types should
 work without changes on static types
If you mean statically or dynamically typed data row then I can say my DBRow support both.
yes but as I said I find the support for dynamic data rows weak.
I've just added row["column"] bracket syntax for dynamic rows.
 * class or struct for row object
I'm using struct, because I think row received from database is a value type rather than reference. If one selects rows from one table then yes, it is possible to do some referencing based on primary key, but anyway I think updates should be done explicitly, because row could be deleted in the meantime. In more complex queries, not all of selected rows are materialized, i.e. they may be from computed columns, view columns, aggregate functions and so on. Allocation overhead is also lower for structs.
 * support for table specific classes?
Table specific classes may be written by user and somehow wrap underlying row type.
well with the current approach it is ugly because your calls would be another type, thus either you remove all typing or you can't have generic functions, accepting rows, everything has to be a template, looping on a table or a row you always need a template.
Could you elaborate? I don't know what you mean.
 * reference to description of the table (to be able to get also dynamic
 types by column name, but avoid using too much memory for the structure)
My PostgreSQL client already supports that. Class PGCommand has member "fields", which contain information about returned columns. You can even check what columns will be returned from a query, before actually executing it.
ok that is nice, and my point is that the type that the user sees by default should automatically take advantage of that
 * Nice to define table structure, and what happens if the db has another
 structure.
This is a problem for ORM, but at first, we need standard query API.
I am not so sure about this, yes these (also classes for tables) are part of the ORM, but the normal users will more often be at the ORM level I think, and how exactly we want the things look like that the object level can influence the choice of the best low level interface.
With a "defined" or static DBRow, if it is used on a result which has an unequal number of columns, or the column types aren't convertible to the row fields, then it's an error. But if someone uses static fields, he should also take care that the query result is consistent with those fields.
 * you want to support only access or also db creation and modification?
First, I'm preparing base "traditional" API. Then I want to write simple object-relational mapping. I've already written some code that generated CREATE TABLE for structs at compile time. Static typing of row fields is very helpful here.
Very good I think that working on getting the API right there and having it nice to use is important. Maybe you are right and the current DBRow is indeed the best abstraction, but I am not yet 100% sure, to me it looks like it isn't the best end user abstraction (but it might be an excellent low level object)
I should state here that end-user usability is very important to me. I should also clarify that my code isn't completely finished and of course it is subject to change. Any suggestions and critics are welcome :)
Apr 03 2011
next sibling parent Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Piotr Szturmaj wrote:
 Any suggestions and critics are welcome :)
Of course I meant critique.
Apr 03 2011
prev sibling parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
On 4-apr-11, at 02:01, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 [...]
 I think that your responses are very relevant, as it seems to me that
 your work is nice, and I find that if a GSoC is done in that  
 direction
 it should definitely work together with the good work that is already
 done, let's don't create multiple competing projects if people are
 willing to work together.
I'm ready to cooperate :)
great :)
 * support for static and dynamic types.
 how access of dynamic and static types differs, should be as  
 little as
 possible, and definitely the access one uses for dynamic types  
 should
 work without changes on static types
If you mean statically or dynamically typed data row then I can say my DBRow support both.
yes but as I said I find the support for dynamic data rows weak.
I've just added row["column"] bracket syntax for dynamic rows.
Excellent. Ideally that should also work for typed rows, because one wants to be able to switch to a typed Row without needing to change one's code (and it should work exactly the same, so the typed rows will need to wrap things in Variants when using that interface).
 * class or struct for row object
I'm using struct, because I think row received from database is a value type rather than reference. If one selects rows from one table then yes, it is possible to do some referencing based on primary key, but anyway I think updates should be done explicitly, because row could be deleted in the meantime. In more complex queries, not all of selected rows are materialized, i.e. they may be from computed columns, view columns, aggregate functions and so on. Allocation overhead is also lower for structs.
 * support for table specific classes?
Table specific classes may be written by user and somehow wrap underlying row type.
well with the current approach it is ugly because your calls would be another type, thus either you remove all typing or you can't have generic functions, accepting rows, everything has to be a template, looping on a table or a row you always need a template.
Could you elaborate? I don't know what do you mean.
Well, I am not totally sure either. Having the row handle the dynamic case better is already a nice step forward; I still fear that we will have problems at the ORM level. I am not 100% sure, and that is the reason I would like to try to flesh out the ORM level a bit more. I would like one to be able to loop on all the tables and for each one get either the generic or the specialized object, depending on what is needed. If one wants to have business logic in the specialized objects, it should be difficult to bypass them. Maybe I am asking too much and the ORM level should never expose the rows directly, because if we use structs we cannot have a common type representing a generic row of a DB which might be specialized or not (without major hacking).
 * reference to description of the table (to be able to get also  
 dynamic
 types by column name, but avoid using too much memory for the  
 structure)
My PostgreSQL client already supports that. Class PGCommand has member "fields", which contain information about returned columns. You can even check what columns will be returned from a query, before actually executing it.
ok that is nice, and my point is that the type that the user sees by default should automatically take advantage of that
 * Nice to define table structure, and what happens if the db has  
 another
 structure.
This is a problem for ORM, but at first, we need standard query API.
I am not so sure about this, yes these (also classes for tables) are part of the ORM, but the normal users will more often be at the ORM level I think, and how exactly we want the things look like that the object level can influence the choice of the best low level interface.
A "defined" DBRow or static one, if used on result which has inequal number of columns or their types aren't convertible to row fields then it's an error. But, if someone uses a static fields, he should also take care that the query result is consistent with those fields.
For example, do we want lazy loading of an object from the db? If yes, how do we represent it with the current Row objects?
 * you want to support only access or also db creation and  
 modification?
First, I'm preparing base "traditional" API. Then I want to write simple object-relational mapping. I've already written some code that generated CREATE TABLE for structs at compile time. Static typing of row fields is very helpful here.
Very good I think that working on getting the API right there and having it nice to use is important. Maybe you are right and the current DBRow is indeed the best abstraction, but I am not yet 100% sure, to me it looks like it isn't the best end user abstraction (but it might be an excellent low level object)
I should state here, that end-user usability is very important to me. I should also clarify that my code isn't completely finished and of course it is a subject to change. Any suggestions and critics are welcome :)
very good :)
Apr 04 2011
parent Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Fawzi Mohamed wrote:
 On 4-apr-11, at 02:01, Piotr Szturmaj wrote:

 Fawzi Mohamed wrote:
 [...]
 I think that your responses are very relevant, as it seems to me that
 your work is nice, and I find that if a GSoC is done in that direction
 it should definitely work together with the good work that is already
 done, let's don't create multiple competing projects if people are
 willing to work together.
I'm ready to cooperate :)
great :)
 * support for static and dynamic types.
 how access of dynamic and static types differs, should be as little as
 possible, and definitely the access one uses for dynamic types should
 work without changes on static types
If you mean statically or dynamically typed data row then I can say my DBRow support both.
yes but as I said I find the support for dynamic data rows weak.
I've just added row["column"] bracket syntax for dynamic rows.
excellent, ideally that should work also for untyped, because one wants to be able to switch to a typed Row without needing to change its code
I used to think the same, but currently this is technically impossible. When I started working on this I wanted one common interface, but tuples use static indexing of their fields. You can't write code such as this:

Tuple!(int, string) t;
int index = 1;

// try to access the string field:
t[index] = "abc"; // error

// but this works:
t[1] = "abc"; // ok

This problem also applies to structs (FieldTypeTuple). To overcome that we would need to split opIndex into a compile-time one and a run-time one (i.e. add a static opIndex).
 (and it should work exactly the same, so the typed rows will need to
 wrap things in Variants when using that interface).
Yes, I tried hard to do it. It worked, but it broke Tuple index access - it was hidden by opIndex.
 * class or struct for row object
I'm using struct, because I think row received from database is a value type rather than reference. If one selects rows from one table then yes, it is possible to do some referencing based on primary key, but anyway I think updates should be done explicitly, because row could be deleted in the meantime. In more complex queries, not all of selected rows are materialized, i.e. they may be from computed columns, view columns, aggregate functions and so on. Allocation overhead is also lower for structs.
 * support for table specific classes?
Table specific classes may be written by user and somehow wrap underlying row type.
well with the current approach it is ugly because your calls would be another type, thus either you remove all typing or you can't have generic functions, accepting rows, everything has to be a template, looping on a table or a row you always need a template.
Could you elaborate? I don't know what do you mean.
Well I am not totally sure either, having the row handle better the dynamic case i already a nice step forward, I still fear that we will have problems with the ORM level, I am not 100% sure, that is the reason I would like to try to flesh out the ORM level a bit more. I would likethat one can loop on all the tables and for each one get the either the generic or the specialized object depending on what is needed. If one wants to have business logic in the specialized object it should be difficult to bypass them.
Well, it should be possible right now:

struct MyData
{
    int a;
    int b;

    int multiply() { return a * b; }
}

auto cmd = new PGCommand(conn, "SELECT a, b FROM numbers");
auto result = cmd.executeQuery!MyData;

foreach (row; result)
    writeln(row.multiply);
 Maybe I am asking too much and the ORM level should never expose the
 rows directly, because if we use structs we cannot have a common type
 representing a generic row of a DB which might be specialized or not
 (without major hacking).
The ORM level may of course expose rows. It should be an additional level of abstraction built on top of the SQL API, so one can mix the SQL and ORM interfaces. In regard to a common type, it's currently impossible to wrap a Tuple or struct and use run-time [index] access to its fields, no matter whether we use a struct or not.
 * reference to description of the table (to be able to get also
 dynamic
 types by column name, but avoid using too much memory for the
 structure)
My PostgreSQL client already supports that. Class PGCommand has member "fields", which contain information about returned columns. You can even check what columns will be returned from a query, before actually executing it.
ok that is nice, and my point is that the type that the user sees by default should automatically take advantage of that
 * Nice to define table structure, and what happens if the db has
 another
 structure.
This is a problem for ORM, but at first, we need standard query API.
I am not so sure about this, yes these (also classes for tables) are part of the ORM, but the normal users will more often be at the ORM level I think, and how exactly we want the things look like that the object level can influence the choice of the best low level interface.
A "defined" DBRow or static one, if used on result which has inequal number of columns or their types aren't convertible to row fields then it's an error. But, if someone uses a static fields, he should also take care that the query result is consistent with those fields.
For example, do we want lazy loading of an object from the db? If yes, how 
do we represent it with the current row objects?
Could you post an example of lazy loading of an object?
 * you want to support only access or also db creation and
 modification?
First, I'm preparing the base "traditional" API. Then I want to write a 
simple object-relational mapping. I've already written some code that 
generates CREATE TABLE statements for structs at compile time. Static typing 
of the row fields is very helpful here.
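
To give an idea of how that looks, here is a stripped-down sketch (not the 
real code - it maps only a few D types and ignores keys, nullability and 
identifier quoting):

import std.traits : FieldTypeTuple;

string sqlType(T)()
{
    static if (is(T == int))         return "INTEGER";
    else static if (is(T == long))   return "BIGINT";
    else static if (is(T == string)) return "TEXT";
    else static assert(0, "no SQL mapping for " ~ T.stringof);
}

string createTable(S)()
{
    string sql = "CREATE TABLE " ~ S.stringof ~ " (";
    foreach (i, T; FieldTypeTuple!S) // unrolled at compile time
    {
        if (i > 0)
            sql ~= ", ";
        sql ~= __traits(identifier, S.tupleof[i]) ~ " " ~ sqlType!T();
    }
    return sql ~ ")";
}

struct Person
{
    int id;
    string name;
}

// the whole statement is built during compilation
enum ddl = createTable!Person();
static assert(ddl == "CREATE TABLE Person (id INTEGER, name TEXT)");
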
Very good. I think that working on getting the API right and having it nice 
to use is important. Maybe you are right and the current DBRow is indeed the 
best abstraction, but I am not yet 100% sure; to me it looks like it isn't 
the best end-user abstraction (though it might be an excellent low-level 
object).
I should state here that end-user usability is very important to me. I 
should also clarify that my code isn't completely finished and of course it 
is subject to change. Any suggestions and criticism are welcome :)
very good :)
Apr 04 2011
prev sibling parent reply spir <denis.spir gmail.com> writes:
On 04/03/2011 02:33 PM, Christian Manning wrote:
 On 03/04/2011 13:10, spir wrote:
 On 04/02/2011 10:03 PM, Christian Manning wrote:

 I plan to have several interfaces in a database module which are then
 implemented for specific DBMSs.
 For example:

 module database;

 interface Connection {
 //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL for example:

 module mysql;

 import database;

 class Connect : Connection {
 //implement defined methods tailoring to MySQL.
 }
I would recommend to use slightly longer names for generic interfaces, eg "IConnection" or "DBConnection". Then, authors of libraries / implementations for specific DBMS like MySQL can use the shorter ones, eg "Connection", which will be all what library clients will see and use. This also avoids the need for "lexical hacks" like "Connection" versus "Connect". What do you think?
When I was writing that it really didn't sit well and "DBConnection" in particular is a much better way of doing it to reduce some confusion there.
 What goes in to these interfaces will be decided in conjunction with
 the D
 community so that there is minimal conflict and it will benefit as many
 circumstances as possible. I believe this to be the best route to take
 as I
 cannot speak for everyone who will be using this.

 Using the API created I plan to create an example implementation,
 initially
 wrapping around the MySQL C API. This will be a good starting point
 for this
 project and more can be created, time permitting.
I have no idea of the actual size of such an interface design, but I doubt it can make you busy for 3 months full time, especially since there are (probably good) precedents for other languages. Maybe the example implementation should be specified as part of the project?
I'm aware that it wouldn't take 3 months, but I don't know how long it will take to have the API agreed upon so that there's a general consensus. Another way I could do it is to decide on the API myself and begin implementing DBMSs with it and then adapt to the ideas brought forth by the community. Then, everyone's happy, just in a different time frame. Though, if there are a lot of changes wanted I'd have to change all of my implementations depending on how far I am at the time. What do you think about that path?
I would go for the second, especially because there is a Python example 
(probably one of the best languages out there for such design questions). 
Just think of the usual qualities: clarity / simplicity / consistency (and 
the currently discussed Phobos style guidelines). Also:

* Implementation example(s) are a source of feedback on the interface 
quality.

* Once you've done it, rewriting the exact same feature with a different 
design can be very fast (especially if the change is only about the 
interface), because you master the application.

I personally would appreciate an example for a simpler and/or non-relational 
DBMS (maybe it's only me) (I'm thinking of key:value stores like Berkeley 
DB, object DBMSs, SQLite...).

Denis
-- 
_________________
vita es estrany
spir.wikidot.com
Apr 03 2011
parent Christian Manning <cmanning999 gmail.com> writes:
On 03/04/2011 14:16, spir wrote:
 On 04/03/2011 02:33 PM, Christian Manning wrote:
 On 03/04/2011 13:10, spir wrote:
 On 04/02/2011 10:03 PM, Christian Manning wrote:

 I plan to have several interfaces in a database module which are then
 implemented for specific DBMSs.
 For example:

 module database;

 interface Connection {
 //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL for example:

 module mysql;

 import database;

 class Connect : Connection {
 //implement defined methods tailoring to MySQL.
 }
I would recommend to use slightly longer names for generic interfaces, eg "IConnection" or "DBConnection". Then, authors of libraries / implementations for specific DBMS like MySQL can use the shorter ones, eg "Connection", which will be all what library clients will see and use. This also avoids the need for "lexical hacks" like "Connection" versus "Connect". What do you think?
When I was writing that it really didn't sit well and "DBConnection" in particular is a much better way of doing it to reduce some confusion there.
 What goes in to these interfaces will be decided in conjunction with
 the D
 community so that there is minimal conflict and it will benefit as many
 circumstances as possible. I believe this to be the best route to take
 as I
 cannot speak for everyone who will be using this.

 Using the API created I plan to create an example implementation,
 initially
 wrapping around the MySQL C API. This will be a good starting point
 for this
 project and more can be created, time permitting.
I have no idea of the actual size of such an interface design, but I doubt it can make you busy for 3 months full time, especially since there are (probably good) precedents for other languages. Maybe the example implementation should be specified as part of the project?
I'm aware that it wouldn't take 3 months, but I don't know how long it will take to have the API agreed upon so that there's a general consensus. Another way I could do it is to decide on the API myself and begin implementing DBMSs with it and then adapt to the ideas brought forth by the community. Then, everyone's happy, just in a different time frame. Though, if there are a lot of changes wanted I'd have to change all of my implementations depending on how far I am at the time. What do you think about that path?
I would go for the second, especially because there is a Python example 
(probably one of the best languages out there for such design questions). 
Just think of the usual qualities: clarity / simplicity / consistency (and 
the currently discussed Phobos style guidelines). Also:

* Implementation example(s) are a source of feedback on the interface 
quality.

* Once you've done it, rewriting the exact same feature with a different 
design can be very fast (especially if the change is only about the 
interface), because you master the application.

I personally would appreciate an example for a simpler and/or non-relational 
DBMS (maybe it's only me) (I'm thinking of key:value stores like Berkeley 
DB, object DBMSs, SQLite...).

Denis
SQLite could definitely be on the table. However, I don't want to be 
over-ambitious at this stage and then not complete the project, and all the 
advice I've read on applying for GSOC suggests the same. If I could be more 
certain of the time the API alone would take, then I would propose more. 
Would it be suitable to have something like: "If the API is not in a good 
state by xx/xx/2011 then implementation y will not be undertaken"?
Apr 03 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/2/11 3:03 PM, Christian Manning wrote:
 Hello all,

 This is my first draft proposal for a Database API for Google Summer Of
 Code. I have never written a document such as this so any and all
 feedback is welcome.

 Thanks
[snip]

Thanks for your interest and for sharing your draft proposal. Fawzi is doing 
an excellent job of making suggestions for improving the proposal. Let me 
add some.

Generally you need to create a compelling case that you know what your 
project entails, you have thoroughly studied the state of the art, and you 
are able to take the project to completion. Digital Mars' reputation is at 
stake here - we need to make sure that we're using Google's money and 
everybody's time to good end.

Here are some more additions to the proposal that would improve it:

* What is your level of understanding of D? How do you believe you could use 
D's templates to improve the API compared to JDBC? If you choose to copy 
JDBC's interface, how do you justify relying on dynamic typing alone?

* What coursework did you complete? As a second-year student this makes it 
easier for us to assess where you are in terms of expertise. Scores would 
help as well.

* Since you now know of existing work, have you contacted Piotr for 
collaboration? Would he give you his API to work on? Would he be available 
to help as a formal mentor or informally? What is the integration plan?

* If the project were totally successful, what features do you expect it to 
have and what would be the impact?

* What is the absolute minimum level of functionality that would still 
qualify the project as successful?

* Also include Fawzi's suggestions focused on details of the API definition.

Thanks,

Andrei
Apr 03 2011
parent reply Christian Manning <cmanning999 gmail.com> writes:
On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just 
descriptions and scores (where available)?

I'll be working on my proposal to include yours and Fawzi's suggestions and 
post it as a reply to my first draft.

Thanks for the help so far in this stage, Andrei, Fawzi and Piotr.

Chris
Apr 03 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/3/11 3:33 PM, Christian Manning wrote:
 On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just descriptions and scores (where available)?
Description and score. Andrei
Apr 03 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 03.04.2011 22:53, schrieb Andrei Alexandrescu:
 On 4/3/11 3:33 PM, Christian Manning wrote:
 On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just descriptions and scores (where available)?
Description and score. Andrei
Probably put in private (a mail to you, not to this list), right? I 
personally wouldn't want to expose this information to the whole internet.

Cheers,
- Daniel
Apr 03 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/3/11 3:56 PM, Daniel Gibson wrote:
 Am 03.04.2011 22:53, schrieb Andrei Alexandrescu:
 On 4/3/11 3:33 PM, Christian Manning wrote:
 On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just descriptions and scores (where available)?
Description and score. Andrei
Put probably in private (a Mail to you, not in this list), right? I personally wouldn't want to expose these informations to the whole internet.. Cheers, - Daniel
Either way is fine. FWIW many students put such information in resumes available online. Andrei
Apr 03 2011
next sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 03.04.2011 22:57, schrieb Andrei Alexandrescu:
 On 4/3/11 3:56 PM, Daniel Gibson wrote:
 Am 03.04.2011 22:53, schrieb Andrei Alexandrescu:
 On 4/3/11 3:33 PM, Christian Manning wrote:
 On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just descriptions and scores (where available)?
Description and score. Andrei
Put probably in private (a Mail to you, not in this list), right? I personally wouldn't want to expose these informations to the whole internet.. Cheers, - Daniel
Either way is fine. FWIW many students put such information in resumes available online. Andrei
Ok. At my university they're very reluctant to publish test results online, 
even under the kind-of-anonymous matriculation number. Workarounds are 
either to publish them on a private website that only the members of the 
corresponding course can access, or to let students choose a secret alias 
when writing the test, so they can just publish "secret_alias: 5.0" - or 
even both (with the alias on the private website).

Cheers,
- Daniel
Apr 03 2011
prev sibling parent Christian Manning <cmanning999 gmail.com> writes:
On 03/04/2011 21:57, Andrei Alexandrescu wrote:
 On 4/3/11 3:56 PM, Daniel Gibson wrote:
 Am 03.04.2011 22:53, schrieb Andrei Alexandrescu:
 On 4/3/11 3:33 PM, Christian Manning wrote:
 On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this
 makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just descriptions and scores (where available)?
Description and score. Andrei
Put probably in private (a Mail to you, not in this list), right? I personally wouldn't want to expose these informations to the whole internet.. Cheers, - Daniel
Either way is fine. FWIW many students put such information in resumes available online. Andrei
I'm not particularly bothered about it, so I'll probably put them in this thread. I'll gather all the scores that I can tomorrow, but the last 4 of my assignments have yet to be marked, 2 of them do have preliminary marks though. This is unfortunate as they are the biggest pieces of work I've done thus far.
Apr 03 2011
prev sibling parent reply Christian Manning <cmanning999 gmail.com> writes:
On 03/04/2011 21:53, Andrei Alexandrescu wrote:
 On 4/3/11 3:33 PM, Christian Manning wrote:
 On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just descriptions and scores (where available)?
Description and score. Andrei
Ok, here are the ones I have available.

Internet Software Development:
- XSLT/JSP: 91%
- JSP/MySQL: 70%+ (preliminary grade given in demo)

OO Software Design & Development:
- Data model: 83.33%
- Jetman (create a score + high score system and a configuration panel, MVC 
style): 80% (preliminary given in demo)

Database Design & Implementation:
- Data Modelling assignment (ERD, normalisation and the like): 69.17%
- Database implementation (of the solution to the previous, in Oracle): not 
yet marked.

Data Structures & Algorithms:
- Circular doubly linked list with cursor in C: not yet marked.

The only one I can find from last year is a Caesar cipher in Haskell: 98%

Sorry about the unmarked ones, these were very recent, but I hope the rest 
helps.

Chris
Apr 05 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/5/11 12:43 PM, Christian Manning wrote:
 On 03/04/2011 21:53, Andrei Alexandrescu wrote:
 On 4/3/11 3:33 PM, Christian Manning wrote:
 On 03/04/2011 19:30, Andrei Alexandrescu wrote:
 * What coursework did you complete? As a second-year student this makes
 it easier for us to assess where you are in terms of expertise. Scores
 would help as well.
By this do you mean you'd like to see my completed courseworks? Or just descriptions and scores (where available)?
Description and score. Andrei
Ok, here are the ones I have available.

Internet Software Development:
- XSLT/JSP: 91%
- JSP/MySQL: 70%+ (preliminary grade given in demo)

OO Software Design & Development:
- Data model: 83.33%
- Jetman (create a score + high score system and a configuration panel, MVC 
style): 80% (preliminary given in demo)

Database Design & Implementation:
- Data Modelling assignment (ERD, normalisation and the like): 69.17%
- Database implementation (of the solution to the previous, in Oracle): not 
yet marked.

Data Structures & Algorithms:
- Circular doubly linked list with cursor in C: not yet marked.

The only one I can find from last year is a caesar cipher in Haskell: 98%

Sorry about the unmarked ones, these were very recent, but I hope the rest 
helps.

Chris
Thanks. You may want to paste this in your application. Andrei
Apr 05 2011
prev sibling next sibling parent reply Christian Manning <cmanning999 gmail.com> writes:
Hello all,

This is the second draft and a lot of changes have been made. Hopefully 
it's a better overall proposal and I look forward to anybody's feedback :)
---------------------------------

Synopsis
--------
An API for databases is a common component of many languages' standard 
library, though Phobos currently lacks this. This project will remedy 
this by providing such an API and also begin to utilise it with 
interfaces for some Database Management Systems (DBMS). I believe this 
will benefit the D community greatly and will help bring attention and 
developers to the language.

Details
-------
Piotr Szturmaj has begun working on DDB [1], which has a PostgreSQL 
client written in D as well as some database-neutral features such as 
the DBRow type for storing rows from a database. Piotr and I have agreed 
to collaborate such that DDB will continue with Piotr at the helm, and I 
will begin implementing other DBMS clients based around his work. Once 
there is another implementation, work will then begin on extracting a 
common interface which will form the API.
For example:

module database;

interface DBConnection {
     //method definitions for connecting to databases go here.
}

Then in an implementation of MySQL:

module mysql;

import database;

class Connection : DBConnection {
     //implement defined methods tailoring to MySQL.
}
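
For illustration only - the constructor arguments and method names here are 
placeholders, not part of the proposed API:

import database;
import mysql;

void example()
{
    // client code is written against the generic interface...
    DBConnection conn = new Connection(/* MySQL connection parameters */);

    // ...so switching to another back-end only means constructing a
    // different implementation of DBConnection; the rest is unchanged.
}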

Exactly what will go into these interfaces will depend on the 
differences between the DBMSs, but they all share many things. The API 
should also be developed in conjunction with the D community to minimise 
any fallout from the decisions made.

The DBMSs I plan to implement are MySQL and SQLite. Unlike PostgreSQL, 
MySQL doesn't seem to have a long-term and stable client-server 
protocol. As a result of this I will be wrapping around the MySQL C API 
(v5.1) to bring it to D. SQLite will also undergo the same process. 
Because of this, these clients are not likely to get into Phobos, so 
if the API does, they will be distributed as an external package.

If this project is completely successful, there will be a database API 
and at least three DBMS clients ready for use in D applications. The 
minimum amount of functionality for this to be considered successful 
would be an API that is mostly utilised by the PostgreSQL and MySQL 
clients. In this scenario there would still be two usable clients, 
even if the API were not totally complete and the SQLite client 
unfinished.

About Me
--------
My name is Christian Manning and I am a second year undergraduate 
studying Computer Science at De Montfort University.
I've become interested in D over time after reading about it several 
years ago. I got myself "The D Programming Language" and went from 
there. Although I've not done anything useful in D as I've learnt mainly 
C and Java and am unable to use D for my university projects, I think 
I'm capable of achieving the goals of this project.

Grades From The Past Year
-------------------------
Internet Software Development:
- XSLT/JSP: 91%
- JSP/MySQL: 70%+ (preliminary grade given in demo)

OO Software Design & Development:
- Data model: 83.33%
- Jetman (create a score + high score system and a configuration panel,
MVC style): 80% (preliminary given in demo)

Database Design & Implementation:
- Data Modelling assignment (ERD, normalisation and the like): 69.17%
- Database implementation (of the solution to the previous, in Oracle):
not yet marked.

Data Structures & Algorithms:
- Circular doubly linked list with cursor in C: not yet marked.

Computational Modelling (1st year):
- Caesar cipher in Haskell: 98%

References
----------
[1] https://github.com/pszturmaj/ddb http://pszturmaj.github.com/ddb/db.html
Apr 05 2011
parent reply "Masahiro Nakagawa" <repeatedly gmail.com> writes:
On Wed, 06 Apr 2011 05:38:02 +0900, Christian Manning  
<cmanning999 gmail.com> wrote:

 Hello all,

 This is the second draft and a lot of changes have been made. Hopefully  
 it's a better overall proposal and I look forward to anybody's feedback  
 :)
 ---------------------------------

 Synopsis
 --------
 An API for databases is a common component of many languages' standard  
 library, though Phobos currently lacks this. This project will remedy  
 this by providing such an API and also begin to utilise it with  
 interfaces for some Database Management Systems (DBMS). I believe this  
 will benefit the D community greatly and will help bring attention and  
 developers to the language.

 Details
 -------
 Piotr Szturmaj has began working on DDB [1] which has a PostgreSQL  
 clietn written in D as well as some database neutral features such as  
 the DBRow type for storing rows from a database. Piotr and I have agreed  
 to collaborate such that DDB will continue with Piotr at the helm, and I  
 will begin implementing other DBMS clients based around his work. Once  
 there is another implementation, work will then begin on extracting a  
 common interface which will form the API.
 For example:

 module database;

 interface DBConnection {
      //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL:

 module mysql;

 import database;

 class Connection : DBConnection {
      //implement defined methods tailoring to MySQL.
 }

 Exactly what will go in to these interfaces will depend on the  
 differences between the DBMSs, but they all share many things. The API  
 should also be developed in conjunction with the D community to minimise  
 any fallout of decisions made.

 The DBMSs I plan to implement are MySQL and SQLite. Unlike PostgreSQL,  
 MySQL doesn't seem to have a long-term and stable client-server  
 protocol. As a result of this I will be wrapping around the MySQL C API  
 (v5.1) to bring it to D. SQLite will also undergo the same process.  
 Because of this, these clients are not likely to get into Phobos and so,  
 if the API does then these will be an external package.

 If this project is completely successful, there will be a database API  
 and at least three DBMS clients ready for use in D applications. The  
 minimum amount of functionality for this to be considered successful  
 would be an API that is mostly utilised by the PostgreSql and MySQL  
 clients. In this scenario there will still be two usable clients,  
 however, perhaps the API is not totally complete and neither is the  
 SQLite client.
[snip]
 References
 ----------
 [1] https://github.com/pszturmaj/ddb  
 http://pszturmaj.github.com/ddb/db.html
Hmm.. In what way is your new module different from DDBI? What are the new 
features?

Masahiro
Apr 05 2011
parent Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Masahiro Nakagawa wrote:
 [1] https://github.com/pszturmaj/ddb 
 http://pszturmaj.github.com/ddb/db.html
Hmm.. In what way is your new module different from DDBI? What's the new features?
I should state here that work on DDB is in progress, so it's subject to 
change. However, a notable difference from DDBI is typed rows, where one can 
map structs/tuples/arrays or base types directly to the result. For example:

enum Axis { x, y, z }

struct SubRow1
{
    string s;
    int[] nums;
    int num;
}

alias Tuple!(int, "num", string, "s") SubRow2;

struct Row
{
    SubRow1 left;
    SubRow2[] right;
    Axis axis;
    string text;
}

auto cmd = new PGCommand(conn, "SELECT ROW('text', ARRAY[1, 2, 3], 100), ARRAY[ROW(1, 'str'), ROW(2, 'aab')], 'x', 'anotherText'");

auto row = cmd.executeRow!Row; // map result to Row struct

assert(row.left.s == "text");
assert(row.left.nums == [1, 2, 3]);
assert(row.left.num == 100);
assert(row.right[0].num == 1 && row.right[0].s == "str");
assert(row.right[1].num == 2 && row.right[1].s == "aab");
assert(row.axis == Axis.x);
assert(row.text == "anotherText");

This is done without intermediate state such as Variant. In the case of 
PostgreSQL binary encoding, values are read directly into the struct fields. 
Also, typed rows form the basis of the ORM.

Dynamic rows are also first class citizens:

cmd = new PGCommand(conn, "SELECT * FROM table");
auto result = cmd.executeQuery; // range of DBRow!(Variant[])

foreach (row; result)
{
    writeln(row["column"]);
}

result.close;

Some parts of the Connection/Command classes are currently modeled after 
ADO.NET.
Apr 06 2011
prev sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 02.04.2011 22:03, schrieb Christian Manning:
 Hello all,
 
 This is my first draft proposal for a Database API for Google Summer Of
 Code. I have never written a document such as this so any and all
 feedback is welcome.
 
 Thanks
 ---------------------------------
 
 Synopsis
 --------
 An API for databases is a common component of many languages' standard
 library, though Phobos currently lacks this. This project will remedy
 this by providing such an API and also begin to utilise it with
 interfaces for some Database Management Systems (DBMS). I believe this
 will benefit the D community greatly and will help bring attention and
 developers to the language.
 
 Details
 -------
 This project takes influence from the Java Database Connectivity API
 (JDBC), the Python Database API v2 and other similar interfaces. The
 idea is that any database interface created for use with D will follow
 the API so that the only thing to change is the database back-end being
 used. This will make working with databases in D a much easier experience.
 
 I plan to have several interfaces in a database module which are then
 implemented for specific DBMSs.
 For example:
 
 module database;
 
 interface Connection {
     //method definitions for connecting to databases go here.
 }
 
 Then in an implementation of MySQL for example:
 
 module mysql;
 
 import database;
 
 class Connect : Connection {
     //implement defined methods tailoring to MySQL.
 }
 
 What goes in to these interfaces will be decided in conjunction with the
 D community so that there is minimal conflict and it will benefit as
 many circumstances as possible. I believe this to be the best route to
 take as I cannot speak for everyone who will be using this.
 
 Using the API created I plan to create an example implementation,
 initially wrapping around the MySQL C API. This will be a good starting
 point for this project and more can be created, time permitting.
 
 About Me
 --------
 My name is Christian Manning and I am a second year undergraduate
 studying Computer Science at De Montfort University.
 I've become interested in D over time after reading about it several
 years ago. I got myself "The D Programming Language" and went from
 there. Although I've not done anything useful in D as I've learnt mainly
 C and Java and am unable to use D for my university projects, I think
 I'm capable of achieving the goals of this project.
Something I just posted in another thread, and which I think is quite 
important for D's database support:

I think most databases (and their client libs) are under a license that is 
not free enough for Phobos (SQLite is an exception - it's public domain - 
and thus can and should be shipped with Phobos). So I guess Phobos' DB 
support should be written in a way that allows plugging in a DB driver that 
is distributed independently and under a different license (this makes sense 
anyway, because maintaining drivers for dozens of databases in Phobos is too 
much work). Maybe we'd need proper DLL support for that? This model is used 
by ODBC and JDBC as well.

So you should probably think about how external drivers (not shipped with 
Phobos and not known when Phobos is compiled) can be implemented and loaded, 
as sketched below - but maybe this needs proper DLL/shared library support 
that is not yet available afaik.

Cheers,
- Daniel
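
A very rough sketch of what such a pluggable setup could look like (all of 
the names below are invented just for the example):

import std.exception : enforce;

// hypothetical common interface, along the lines of the proposed DBConnection
interface DBConnection
{
    void open(string connectionString);
    void close();
}

// a driver knows how to create connections for one particular DBMS
interface DBDriver
{
    string name();                  // e.g. "sqlite", "mysql"
    DBConnection createConnection();
}

// small global registry; an externally distributed driver library would
// call registerDriver() from a module constructor (or right after being
// loaded as a shared library, once that is properly supported)
private DBDriver[string] registry;

void registerDriver(DBDriver d)
{
    registry[d.name] = d;
}

DBConnection connect(string driver, string connectionString)
{
    auto p = driver in registry;
    enforce(p !is null, "no driver registered for: " ~ driver);
    auto conn = (*p).createConnection();
    conn.open(connectionString);
    return conn;
}

// a driver package would then contain something like:
//   shared static this() { registerDriver(new SQLiteDriver); }
// and application code never mentions the concrete back-end:
//   auto conn = connect("sqlite", "mydata.db");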
Apr 11 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
On 4/11/2011 10:01 PM, Daniel Gibson wrote:
 Am 02.04.2011 22:03, schrieb Christian Manning:
 Hello all,

 This is my first draft proposal for a Database API for Google Summer Of
 Code. I have never written a document such as this so any and all
 feedback is welcome.

 Thanks
 ---------------------------------

 Synopsis
 --------
 An API for databases is a common component of many languages' standard
 library, though Phobos currently lacks this. This project will remedy
 this by providing such an API and also begin to utilise it with
 interfaces for some Database Management Systems (DBMS). I believe this
 will benefit the D community greatly and will help bring attention and
 developers to the language.

 Details
 -------
 This project takes influence from the Java Database Connectivity API
 (JDBC), the Python Database API v2 and other similar interfaces. The
 idea is that any database interface created for use with D will follow
 the API so that the only thing to change is the database back-end being
 used. This will make working with databases in D a much easier experience.

 I plan to have several interfaces in a database module which are then
 implemented for specific DBMSs.
 For example:

 module database;

 interface Connection {
      //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL for example:

 module mysql;

 import database;

 class Connect : Connection {
      //implement defined methods tailoring to MySQL.
 }

 What goes in to these interfaces will be decided in conjunction with the
 D community so that there is minimal conflict and it will benefit as
 many circumstances as possible. I believe this to be the best route to
 take as I cannot speak for everyone who will be using this.

 Using the API created I plan to create an example implementation,
 initially wrapping around the MySQL C API. This will be a good starting
 point for this project and more can be created, time permitting.

 About Me
 --------
 My name is Christian Manning and I am a second year undergraduate
 studying Computer Science at De Montfort University.
 I've become interested in D over time after reading about it several
 years ago. I got myself "The D Programming Language" and went from
 there. Although I've not done anything useful in D as I've learnt mainly
 C and Java and am unable to use D for my university projects, I think
 I'm capable of achieving the goals of this project.
Something I just posted in another thread and I think is quite important for D's Database support: I think most databases (and their libs) are under a license that is not free enough for Phobos (SQLite is an exception - it's Public domain - and thus can and should be shipped with Phobos). So I guess Phobos' DB support should be written in a way that allows plugging in a DB driver that is distributed independently and under a different license (this makes sense anyway, because maintaining drivers for dozens of databases in Phobos is too much work). Maybe we'd need proper DLL support for that? This model is used by ODBC and JDBC as well. So you should probably think about how external drivers (not shipped with Phobos and not known when Phobos is compiled) can be implemented and loaded - but maybe this needs proper DLL/shared library support that is not yet available afaik. Cheers, - Daniel
Makes sense. I think 110% that SQLite should be the top priority w.r.t. 
database stuff. SQLite bindings and a good D API with some dependency 
inversion, so the high-level API can be reused with other database backends, 
would be a great GSoC project, even if nothing involving other backends is 
actually implemented.

According to this page (http://sqlite.org/mostdeployed.html) SQLite is 
probably the most popular database out there and it's definitely the most 
amenable to being fully supported by a standard library (i.e. no other 
dependencies).

I don't know how many times I've wanted to create a quick in-memory database 
and gone with some stupid ad-hoc class with a bunch of hashtables and stuff 
just because I didn't have an SQLite API conveniently available. Yeah, 
SQLite's not the most scalable thing in the world but **you don't always 
need scalability** and when you do, you usually have the resources to deal 
with a little extra hassle like writing some bindings.
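
Just to show how little is needed for the quick in-memory case, here's a 
minimal sketch that hand-declares the few SQLite C functions it uses (a real 
binding would of course cover the whole API; link against sqlite3, e.g. 
dmd app.d -L-lsqlite3 on Linux):

import std.stdio : writeln;

// hand-declared prototypes for just the functions used below
extern (C)
{
    struct sqlite3; // opaque handle

    int sqlite3_open(const(char)* filename, sqlite3** db);
    int sqlite3_exec(sqlite3* db, const(char)* sql,
                     int function(void*, int, char**, char**) callback,
                     void* arg, char** errmsg);
    int sqlite3_close(sqlite3* db);
}

void main()
{
    sqlite3* db;
    // ":memory:" opens a private, transient in-memory database
    auto rc = sqlite3_open(":memory:", &db);
    assert(rc == 0); // 0 is SQLITE_OK

    sqlite3_exec(db, "CREATE TABLE t (id INTEGER, name TEXT)",
                 null, null, null);
    sqlite3_exec(db, "INSERT INTO t VALUES (1, 'hello')",
                 null, null, null);

    writeln("in-memory table created and filled");
    sqlite3_close(db);
}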
Apr 11 2011
next sibling parent spir <denis.spir gmail.com> writes:
On 04/12/2011 04:15 AM, dsimcha wrote:
 I think 110% that SQLite should be the top priority w.r.t. database stuff.
 SQLite bindings and a good D API with some dependency inversion so the
 high-level API can be reused with other database backends would be a great GSoC
 project, even if nothing involving other backends is actually implemented.
Agreed...
 According to this page (http://sqlite.org/mostdeployed.html) SQLite is probably
 the most popular database out there and it's definitely the most amenable to
 being fully supported by a standard library (i.e. no other dependencies).  I
 don't know how many times I've wanted to create a quick in-memory database and
 gone with some stupid ad-hoc class with a bunch of hashtables and stuff just
 because I didn't have an SQLite API conveniently available.  Yeah, SQLite's not
 the most scalable thing in the world but **you don't always need scalability**
 and when you do, you usually have the resources to deal with a little extra
 hassle like writing some bindings.
...as well.

Denis
-- 
_________________
vita es estrany
spir.wikidot.com
Apr 12 2011
prev sibling parent Piotr Szturmaj <bncrbme jadamspam.pl> writes:
dsimcha wrote:
 On 4/11/2011 10:01 PM, Daniel Gibson wrote:
 Am 02.04.2011 22:03, schrieb Christian Manning:
 Hello all,

 This is my first draft proposal for a Database API for Google Summer Of
 Code. I have never written a document such as this so any and all
 feedback is welcome.

 Thanks
 ---------------------------------

 Synopsis
 --------
 An API for databases is a common component of many languages' standard
 library, though Phobos currently lacks this. This project will remedy
 this by providing such an API and also begin to utilise it with
 interfaces for some Database Management Systems (DBMS). I believe this
 will benefit the D community greatly and will help bring attention and
 developers to the language.

 Details
 -------
 This project takes influence from the Java Database Connectivity API
 (JDBC), the Python Database API v2 and other similar interfaces. The
 idea is that any database interface created for use with D will follow
 the API so that the only thing to change is the database back-end being
 used. This will make working with databases in D a much easier
 experience.

 I plan to have several interfaces in a database module which are then
 implemented for specific DBMSs.
 For example:

 module database;

 interface Connection {
 //method definitions for connecting to databases go here.
 }

 Then in an implementation of MySQL for example:

 module mysql;

 import database;

 class Connect : Connection {
 //implement defined methods tailoring to MySQL.
 }

 What goes in to these interfaces will be decided in conjunction with the
 D community so that there is minimal conflict and it will benefit as
 many circumstances as possible. I believe this to be the best route to
 take as I cannot speak for everyone who will be using this.

 Using the API created I plan to create an example implementation,
 initially wrapping around the MySQL C API. This will be a good starting
 point for this project and more can be created, time permitting.

 About Me
 --------
 My name is Christian Manning and I am a second year undergraduate
 studying Computer Science at De Montfort University.
 I've become interested in D over time after reading about it several
 years ago. I got myself "The D Programming Language" and went from
 there. Although I've not done anything useful in D as I've learnt mainly
 C and Java and am unable to use D for my university projects, I think
 I'm capable of achieving the goals of this project.
Something I just posted in another thread and I think is quite important for D's Database support: I think most databases (and their libs) are under a license that is not free enough for Phobos (SQLite is an exception - it's Public domain - and thus can and should be shipped with Phobos). So I guess Phobos' DB support should be written in a way that allows plugging in a DB driver that is distributed independently and under a different license (this makes sense anyway, because maintaining drivers for dozens of databases in Phobos is too much work). Maybe we'd need proper DLL support for that? This model is used by ODBC and JDBC as well. So you should probably think about how external drivers (not shipped with Phobos and not known when Phobos is compiled) can be implemented and loaded - but maybe this needs proper DLL/shared library support that is not yet available afaik. Cheers, - Daniel
Makes sense. I think 110% that SQLite should be the top priority w.r.t. database stuff. SQLite bindings and a good D API with some dependency inversion so the high-level API can be reused with other database backends would be a great GSoC project, even if nothing involving other backends is actually implemented.
I agree that SQLite should be here but I think DB API should be prototyped using the most featureful/advanced database system, i.e. Oracle / PostgreSQL. An API covering those databases would certainly support less advanced ones, such as SQLite.
Apr 12 2011