
digitalmars.D - H1 2015 - db access support in Phobos

reply "Vadim Lopatin" <coolreader.org gmail.com> writes:
I would like to propose Java way for implementation of DB access 
(JDBC - Java DataBase Connectors).

Standard library must contain only
* set of interfaces for uniform DB access
     * Connection - can create Statement or PreparedStatement, 
control transactions
     * Statement - can run update or query
     * PreparedStatement - same as Statement, but with parameters
     * ResultSet - DB query result row access
     * DataSource - can create connections, w/o parameters
     * Driver - interface which implements connection factory for 
particular DB types
     * Some useful classes like ConnectionPool
* factory method to create connection by URL and parameter set
* some method to register DB driver for factory - e.g. when 
called from driver's module __gshared static this()

Drivers may be placed in separate packages in the standard 
library or in some DUB packages.
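The interface set and driver registration proposed above might be sketched in D roughly like this (hypothetical declarations; the actual ddbc interfaces differ in names and signatures):

```d
import std.variant : Variant;

// Hypothetical sketch of the proposed interfaces; names follow JDBC,
// real declarations (e.g. in ddbc) may differ.
interface ResultSet {
    bool next();                   // advance to the next row
    Variant getValue(int column);  // read a column of the current row
    void close();
}

interface Statement {
    ResultSet executeQuery(string sql);
    int executeUpdate(string sql); // returns the affected row count
    void close();
}

interface PreparedStatement : Statement {
    void setParameter(int index, Variant value);
}

interface Connection {
    Statement createStatement();
    PreparedStatement prepareStatement(string sql);
    void commit();
    void rollback();
    void close();
}

interface DataSource {
    Connection getConnection();    // preconfigured, no parameters needed
}

interface Driver {
    Connection connect(string url, string[string] params);
}

// Factory registration, e.g. called from a driver module's
// __gshared static this():
__gshared Driver[string] drivers;

void registerDriver(string scheme, Driver driver) {
    drivers[scheme] = driver;
}

Connection createConnection(string url, string[string] params = null) {
    import std.string : indexOf;
    auto scheme = url[0 .. url.indexOf(':')]; // e.g. "sqlite" from "sqlite:file.db"
    return drivers[scheme].connect(url, params);
}
```

A driver module would then only need to call registerDriver from its module constructor to make its URL scheme available to createConnection.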

Look at https://github.com/buggins/ddbc
Currently it contains drivers for SQLite, MySQL and PostgreSQL.


If Andrei/Walter wish, I can prepare a pull request based on code 
from ddbc.

Best regards,
      Vadim
Feb 01 2015
next sibling parent reply "AndyC" <andy squeakycode.net> writes:
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
 [...]
-1 on copying Java. It seems overcomplicated.

We need a base Database class, then derive PostgresDatabase from that. 
No need for a Driver class: the PostgresDatabase class contains all the 
knowledge to connect to the actual database, so what's left for Driver? 
If we have a Database class, why do we need Connection or DataSource? 
They don't offer anything Database shouldn't already have.

Database (and children), ResultSet and Statement are all that's needed. 
Statement can have a prepare() function; no need to make an entire 
PreparedStatement class, that seems overkill.

I pray we never ever have factories. I like verbs, they make sense.

-Andy
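Andy's Database-centric alternative might be sketched like this (hypothetical, illustrative names only - not an existing library API):

```d
// Hypothetical sketch of the design Andy describes: one Database base
// class per backend, no separate Driver/Connection/DataSource.
abstract class ResultSet {
    abstract bool next();
}

abstract class Statement {
    abstract ResultSet query(string sql);
    abstract int update(string sql);
    abstract void prepare(string sql); // instead of a PreparedStatement class
}

abstract class Database {
    abstract void connect(string connectionString);
    abstract Statement createStatement();
    abstract void close();
}

// A concrete backend carries all connection knowledge itself:
class PostgresDatabase : Database {
    override void connect(string connectionString) { /* libpq calls here */ }
    override Statement createStatement() { return null; /* elided */ }
    override void close() {}
}
```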
Feb 02 2015
parent "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Monday, 2 February 2015 at 18:00:28 UTC, AndyC wrote:
 [...]
Did you ever write complex DB applications? Connection is an important 
part of DB access. Statements are always executed on a particular 
connection. E.g. sequences of temporary table operations, transaction 
management, and other operations require working on the same physical 
connection.

A connection is a resource, and it should be managed. It's mandatory to 
close a physical connection when it is no longer used, but reopening a 
physical connection to the DB is a high-cost operation. Therefore 
connections are usually reused via connection pools. Often an 
application needs several connections to do operations in parallel.

Some kind of factory is needed anyway, to keep the application 
independent of the particular DB kind (e.g. to allow configuring the DB 
type and parameters in some configuration file). Driver is just an 
abstraction which can be used for creating DB connections (you just 
call it Database).

PreparedStatement is an important thing - it's not only the ability to 
specify query parameters: the statement is compiled (prepared) on the 
server side and can be reused and executed several times with higher 
performance than a plain query. Prepared statements are resources too - 
they should be closable when no longer needed.

ResultSets are probably redundant and could be embedded into 
statements, but in that case it would not be possible to deal with 
multiple result sets from the same query.
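A minimal connection pool along the lines described here might look like this (a sketch with an assumed Connection interface, not ddbc's actual ConnectionPool):

```d
// Assumed minimal connection interface, just for this sketch:
interface Connection {
    void close();
}

// Hypothetical connection pool: reuse idle physical connections
// instead of reopening one (a high-cost operation) per request.
class ConnectionPool {
    private Connection delegate() factory; // opens a new physical connection
    private Connection[] idle;

    this(Connection delegate() factory) {
        this.factory = factory;
    }

    Connection acquire() {
        if (idle.length == 0)
            return factory();    // expensive path: open a new connection
        auto c = idle[$ - 1];
        idle = idle[0 .. $ - 1];
        return c;                // cheap path: reuse an idle connection
    }

    void release(Connection c) {
        idle ~= c;               // keep it for reuse instead of closing
    }
}
```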
Feb 02 2015
prev sibling next sibling parent reply "Daniel Kozak" <kozzi11 gmail.com> writes:
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
 [...]
Yep, this is the way to go. I have started implementing JPA for D 
(ddpa - dlang data persistence api):

@Entity @(Table.name("nazev_tabulky").schema("nazev_schema"))
class C {
    @Id @(GeneratedValue.strategy(GenerationType.IDENTITY))
    long id;

    auto getId() { return this.id; }
    void setId(long id) { this.id = id; }
}

void main() {
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("test");
    EntityManager em = emf.createEntityManager();
    C obj = em.find!C(1);
}
Feb 03 2015
parent "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 08:23:56 UTC, Daniel Kozak wrote:
 On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
 I would like to propose Java way for implementation of DB 
 access (JDBC - Java DataBase Connectors).

 Standard library must contain only
 * set of interfaces for uniform DB access
    * Connection - can create Statement or PreparedStatement, 
 control transactions
    * Statement - can run update or query
    * PreparedStatement - same as Statement, but with parameters
    * ResultSet - DB query result row access
    * DataSource - can create connections, w/o parameters
    * Driver - interface which implements connection factory 
 for particular DB types
    * Some useful classes like ConnectionPool
 * factory method to create connection by URL and parameter set
 * some method to register DB driver for factory - e.g. when 
 called from driver's module __gshared static this()

 Drivers may be placed in separate packages in standard library 
 or in some DUB packages.

 Look at https://github.com/buggins/ddbc
 Currently it contains drivers for sqlite, MySQL and PostgreSQL.


 If Andrey/Walter wish, I can prepare pool request based on 
 code from ddbc.

 Best regards,
     Vadim
Yep this is way to go, I have start implementing JPA for Dlang(ddpa - dlang data persistence api): Entity (Table.name("nazev_tabulky").schema("nazev_schema")) class C { Id (GeneratedValue.strategy(GenerationType.IDENTITY)) long id; auto getId() { return this.id; } void setId(long id) { this.id = id; } } void main() { EntityManagerFactory emf = Persistence.createEntityManagerFactory("test"); EntityManager em = emf.createEntityManager(); C obj = em.find!C(1); }
I did almost the same :) https://github.com/buggins/hibernated

// Annotations of entity classes
class User {
    long id;
    string name;
    Customer customer;

    @ManyToMany // cannot be inferred, requires annotation
    LazyCollection!Role roles;
}

class Customer {
    int id;
    string name;
    // Embedded is inferred from type of Address
    Address address;
    Lazy!AccountType accountType; // ManyToOne inferred
    User[] users; // OneToMany inferred

    this() {
        address = new Address();
    }
}

@Embeddable
class Address {
    string zip;
    string city;
    string streetAddress;
}

class AccountType {
    int id;
    string name;
}

class Role {
    int id;
    string name;

    @ManyToMany // w/o this annotation will be OneToMany by convention
    LazyCollection!User users;
}
Feb 03 2015
prev sibling next sibling parent "Gary Willoughby" <dev nomad.so> writes:
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
 [...]
Seems like a good idea. I'd also look at Go's implementation, as they 
too only define an interface in the stdlib and rely on others to 
provide the drivers using that interface.
Feb 03 2015
prev sibling next sibling parent reply "Robert burner Schadek" <rburners gmail.com> writes:
IMO the Java way is too uncreative. We have UDAs, CTFE and string 
mixins; anything short of perfect SQL generated at compile time, 
with nothing more than UDA annotations on my structs, is an epic 
loss for D. I mean, who wants to write string statements when the 
lib can generate them for you - for every SQL incarnation you can 
think of.
Feb 03 2015
parent reply "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 08:37:04 UTC, Robert burner 
Schadek wrote:
 [...]
What you are talking about is a bit different - a higher-level thing, 
an ORM. It's what is done in my project hibernated, or in the project 
Daniel Kozak mentioned above. E.g. hibernated uses CTFE and mixins to 
collect metadata and prepare parts of SQL queries.

But such a high-level DB library must be based on some lower-level DB 
API (connector), like JDBC for JPA or Hibernate in Java.

D may offer a more convenient way to read field values from a result 
set and to set parameters - using templates and other cool D features. 
Parsing SQL doesn't make sense at the low level: it can determine the 
number of fields expected in the query result (if * is not used), but 
it doesn't know the column types anyway.
Feb 03 2015
parent reply "Robert burner Schadek" <rburners gmail.com> writes:
 But for such high level DB library must be based on some lower 
 level DB API (connector).
 Like JDBC for JPA or Hibernate in Java.
Really, does it? There is no need for an abstraction layer. You can 
have functions generate the correct source for MySQL, SQLite, you name 
it. E.g. just generate the MySQL statement at compile time and pass 
the pointer to the SQL C function. Pass the parameters. Done. Dream big.
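The compile-time generation Robert has in mind could be sketched like this (illustrative only; the field list is derived from the struct via CTFE):

```d
import std.array : join;
import std.traits : FieldNameTuple;

// Build a SELECT statement for a struct's fields entirely at compile time.
string selectSql(T)(string table) {
    return "SELECT " ~ [FieldNameTuple!T].join(", ") ~ " FROM " ~ table;
}

struct User {
    long id;
    string name;
    int flags;
}

// Evaluated via CTFE; the resulting string could then be passed
// straight to a C client function of the DB library at runtime.
enum sql = selectSql!User("user");
static assert(sql == "SELECT id, name, flags FROM user");
```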
 D may offer more convenient way for reading of field values ...
 not used), but it doesn't know column types anyway.
I don't get your point
Feb 03 2015
parent reply "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 09:16:14 UTC, Robert burner 
Schadek wrote:
 [...]
ORM operations are not a single simple query/statement. They often use 
several queries to load dependent objects.
 D may offer more convenient way for reading of field values ...
 not used), but it doesn't know column types anyway.
I don't get your point
E.g. you can write a template method on ResultSet for easily reading 
column values into variables. A read method can take a variable number 
of ref args to place the read data into. The template will check the 
arg types and do the necessary data conversions when reading.

// read into vars
long id;
string name;
int flags;
rs = statement.executeQuery("SELECT id, name, flags FROM user WHERE flags = 5 ORDER BY name");
while (rs.next(id, name, flags)) {
    writeln(id, " ", name, " ", flags);
}

or

// read into structure or class fields
struct User {
    long id;
    string name;
    int flags;
}
User row;
rs = statement.executeQuery("SELECT id, name, flags FROM user WHERE flags = 5 ORDER BY name");
while (rs.next(row)) {
    writeln(row.id, " ", row.name, " ", row.flags);
}

Of course, some CTFE/UDAs may be used for generation of the field 
list, but it is not very helpful IMO.

// automatically generated SQL - based on type of struct/class params
User row;
rs = statement.executeSelect(row, "WHERE flags = 5 ORDER BY name");
// actually, sql "SELECT id, name, flags FROM user WHERE flags = 5 ORDER BY name" will be used
while (rs.next(row)) {
    // every time a row is read, struct fields will be updated with new values
    writeln(row.id, " ", row.name, " ", row.flags);
}

UDAs can be used to change the mapping of class/struct name to table 
name and field names to column names, and to mark some 
fields/properties as non-persistent.
Feb 03 2015
parent reply "Robert burner Schadek" <rburners gmail.com> writes:
On Tuesday, 3 February 2015 at 10:33:25 UTC, Vadim Lopatin wrote:
 ORM operations are not a simple single query/statement.
 They often use several queries to load dependent objects.
make it an output range
 Of course, some CTFE/UDAs may be used for generation of field 
 list, but it is not very helpful IMO.
IMO writing:

foreach(it; db.select!User("...")) {
}

is epic. You have entered std.(range|algorithm) land.
Feb 03 2015
parent reply "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 10:49:07 UTC, Robert burner 
Schadek wrote:
 [...]
Implemented in v0.2.17 for select. Test:

auto ds = new ConnectionPoolDataSourceImpl(new SQLITEDriver(), "ddbctest.sqlite");
auto conn = ds.getConnection();
scope(exit) conn.close();
Statement stmt = conn.createStatement();
scope(exit) stmt.close();

// test data preparation
stmt.executeUpdate("DROP TABLE IF EXISTS user");
stmt.executeUpdate("CREATE TABLE user (id INTEGER PRIMARY KEY, name VARCHAR(255) NOT NULL, flags int null)");
stmt.executeUpdate(`INSERT INTO user (id, name, flags) VALUES (1, "John", 5), (2, "Andrei", 2), (3, "Walter", 2), (4, "Rikki", 3), (5, "Iain", 0), (6, "Robert", 1)`);

// our POD struct
struct User {
    long id;
    string name;
    int flags;
}

// let's execute select and dump the query result
foreach(user; stmt.select!User.where("id < 6").orderBy("name desc")) {
    writeln("id:", user.id, " name:", user.name, " flags:", user.flags);
}

Output is:
------------------
id:3 name:Walter flags:2
id:4 name:Rikki flags:3
id:1 name:John flags:5
id:5 name:Iain flags:0
id:2 name:Andrei flags:2
------------------

where() and orderBy() are optional - without them it always reads all 
rows in default order. Possible improvement: the ability to specify a 
field list to read only the necessary fields.
Feb 03 2015
parent reply "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 14:41:02 UTC, Vadim Lopatin wrote:
 On Tuesday, 3 February 2015 at 10:49:07 UTC, Robert burner
 [...]
Update in v0.2.19: you can specify a field list for select.

writeln("reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)");
foreach(ref e; stmt.select!(User, "id", "name")) {
    writeln("id:", e.id, " name:", e.name, " flags:", e.flags);
}

Output:

reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)
id:1 name:John flags:0
id:2 name:Andrei flags:0
id:3 name:Walter flags:0
id:4 name:Rikki flags:0
id:5 name:Iain flags:0
id:6 name:Robert flags:0
Feb 03 2015
next sibling parent reply Daniel Kozák via Digitalmars-d writes:
On Tue, 03 Feb 2015 15:20:40 +0000,
Vadim Lopatin via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 [...]
This is a dangerous zone, you should not do this ;-). At work I have 
written our ORM-like DB layer which has this functionality (mainly for 
performance reasons), but if I could go back in time, I would be 
against such a misfeature.
Feb 03 2015
next sibling parent "Kagamin" <spam here.lot> writes:
In my experience such optimizations are usually needed only for 
reading, so the query may return a plain tuple instead of a true 
mapped object. Well, this can depend on whether you have an ORM 
container.
Feb 03 2015
prev sibling next sibling parent reply FG <home fgda.pl> writes:
On 2015-02-03 at 16:30, Daniel Kozák via Digitalmars-d wrote:
 V Tue, 03 Feb 2015 15:20:40 +0000
 Vadim Lopatin via Digitalmars-d <digitalmars-d puremagic.com> napsáno:
 [...]
This is dangerous zone, you shoud not do this ;-). In my work I have writen our ORM-like db layer which has this funcionality (mainly performance reasons) but if I can go back in time. I will be against such a misfeature.
A misfeature? If the ORM only allowed SELECT *, it wouldn't be of much 
use with records containing blobs or big amounts of other data. If the 
object was retrieved only for reading and to update a few fields, 
pulling all columns would be very wasteful.

So now, for the purpose of performing an update, each field could be 
marked as dirty (if its value has changed) or inconsistent (if it 
wasn't pulled from the database at all). And BTW, each should be 
Nullable to prevent loss of information for columns that aren't marked 
as NOT NULL. ORM easily becomes complicated...
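The per-field bookkeeping FG describes might be sketched as (hypothetical types, purely illustrative):

```d
import std.typecons : Nullable;

// Hypothetical per-field state: an UPDATE would include only dirty
// fields, and reading a field that was never fetched is an error.
enum FieldState { clean, dirty, notFetched }

struct Tracked(T) {
    Nullable!T value;                 // Nullable: the column may be NULL
    FieldState state = FieldState.notFetched;

    void set(T v) {
        value = v;
        state = FieldState.dirty;     // remember it needs writing back
    }

    T get() {
        assert(state != FieldState.notFetched, "field was not fetched");
        return value.get;
    }
}
```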
Feb 03 2015
next sibling parent Daniel Kozák via Digitalmars-d writes:
For blobs or other unneeded data you can set up the column as lazy 
fetch. What I mean is the situation when you have an object in an 
inconsistent state.

Feb 03 2015
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-02-03 17:42, FG wrote:

 [...]
Martin Nowak showed, somewhere here on the forums, how this can be 
done nicely. The final query will only contain the fields which are 
actually used in the code. -- /Jacob Carlborg
Feb 03 2015
prev sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Tuesday, 3 February 2015 at 16:42:24 UTC, FG wrote:
 [...]
The more interesting application is prefetching for associated relations when needed, to avoid the infamous N+1 problem.
Feb 04 2015
prev sibling parent reply "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 15:30:42 UTC, Daniel Kozák wrote:
 [...]
It's not considered the main API, and it's not mandatory to import 
this module. It's a set of helpers to simplify getting field values 
from a ResultSet into a struct or tuple, or writing to prepared 
statement parameters - a kind of syntax sugar. For ORM purposes, a 
real ORM must be used.
Feb 03 2015
parent Daniel Kozák via Digitalmars-d writes:
Ok, then it is ok :)

----- P=C5=AFvodn=C3=AD zpr=C3=A1va -----
Od:"Vadim Lopatin via Digitalmars-d" <digitalmars-d puremagic.com>
Odesl=C3=A1no:=E2=80=8E3. =E2=80=8E2. =E2=80=8E2015 18:00
Komu:"digitalmars-d puremagic.com" <digitalmars-d puremagic.com>
P=C5=99edm=C4=9Bt:Re: H1 2015 - db access support in Phobos

On Tuesday, 3 February 2015 at 15:30:42 UTC, Daniel Koz=C3=A1k wrote:
 V Tue, 03 Feb 2015 15:20:40 +0000
 Vadim Lopatin via Digitalmars-d <digitalmars-d puremagic.com>=20
 naps=C3=A1no:

 On Tuesday, 3 February 2015 at 14:41:02 UTC, Vadim Lopatin=20
 wrote:
 On Tuesday, 3 February 2015 at 10:49:07 UTC, Robert burner
 IMO writing:

 foreach(it; db.select<User>("...")) {
 }

 is epic. you have entered std.(range|algorithm) land.
Implemented in v0.2.17 for select. test: auto ds =3D new ConnectionPoolDataSourceImpl(new=20 SQLITEDriver(), "ddbctest.sqlite"); auto conn =3D ds.getConnection(); scope(exit) conn.close(); Statement stmt =3D conn.createStatement(); scope(exit) stmt.close(); // test data preparation stmt.executeUpdate("DROP TABLE IF EXISTS user"); stmt.executeUpdate("CREATE TABLE user (id INTEGER=20 PRIMARY KEY, name VARCHAR(255) NOT NULL, flags int null)"); stmt.executeUpdate(`INSERT INTO user (id, name, flags)=20 VALUES (1, "John", 5), (2, "Andrei", 2), (3, "Walter", 2),=20 (4, "Rikki", 3), (5, "Iain", 0), (6, "Robert", 1)`); // our POD struct struct User { long id; string name; int flags; } // let's execute select and nump query result foreach(user; stmt.select!User.where("id <=20 6").orderBy("name desc")) { writeln("id:", user.id, " name:", user.name, "=20 flags:", user.flags); } Output is: ------------------ id:3 name:Walter flags:2 id:4 name:Rikki flags:3 id:1 name:John flags:5 id:5 name:Iain flags:0 id:2 name:Andrei flags:2 ------------------ Where and orderBy are optional - without them always reads=20 all rows in default order. Possible improvements: ability to specify field list to read=20 only necessary fields.
Update in v0.2.19: you can specify a field list for select

    writeln("reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)");
    foreach(ref e; stmt.select!(User, "id", "name")) {
        writeln("id:", e.id, " name:", e.name, " flags:", e.flags);
    }

output:

reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)
id:1 name:John flags:0
id:2 name:Andrei flags:0
id:3 name:Walter flags:0
id:4 name:Rikki flags:0
id:5 name:Iain flags:0
id:6 name:Robert flags:0
This is a dangerous zone, you should not do this ;-). In my work I have written an ORM-like db layer with this functionality (mainly for performance reasons), but if I could go back in time, I would be against such a misfeature.
It's not considered the main API, and it's not mandatory to import this module. It's a set of helpers to simplify getting field values from a ResultSet into a struct or tuple, or writing them to prepared statement parameters - a kind of syntax sugar. For ORM purposes, a real ORM must be used.
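As a side note, the struct-filling helper described above can be sketched with compile-time reflection alone. The snippet below is an illustration, not ddbc's actual implementation: `fillFromRow` and the string-map "row" are invented here as a stand-in for a real ResultSet.

```d
import std.stdio;

struct User { long id; string name; }

// Copy matching columns of a "row" into a POD struct by field name.
// A real helper would read typed columns from a ResultSet instead of
// converting from strings.
void fillFromRow(S)(ref S dest, string[string] row) {
    import std.conv : to;
    foreach (i, ref field; dest.tupleof) {
        enum name = __traits(identifier, S.tupleof[i]);
        if (auto p = name in row)
            field = to!(typeof(field))(*p);
    }
}

void main() {
    User u;
    fillFromRow(u, ["id": "42", "name": "Walter"]);
    writeln("id:", u.id, " name:", u.name); // id:42 name:Walter
}
```

The point is that no registration or base class is needed: the field names of the destination struct drive the column lookup at compile time.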
Feb 03 2015
prev sibling next sibling parent "Robert burner Schadek" <rburners gmail.com> writes:
nice!

get it into phobos or vibe
Feb 03 2015
prev sibling parent "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 15:20:41 UTC, Vadim Lopatin wrote:
 On Tuesday, 3 February 2015 at 14:41:02 UTC, Vadim Lopatin 
 wrote:
 On Tuesday, 3 February 2015 at 10:49:07 UTC, Robert burner Schadek wrote:
 IMO writing:

 foreach(it; db.select<User>("...")) {
 }

 is epic. you have entered std.(range|algorithm) land.
Implemented in v0.2.17 for select.

test:

    auto ds = new ConnectionPoolDataSourceImpl(new SQLITEDriver(), "ddbctest.sqlite");
    auto conn = ds.getConnection();
    scope(exit) conn.close();
    Statement stmt = conn.createStatement();
    scope(exit) stmt.close();
    // test data preparation
    stmt.executeUpdate("DROP TABLE IF EXISTS user");
    stmt.executeUpdate("CREATE TABLE user (id INTEGER PRIMARY KEY, name VARCHAR(255) NOT NULL, flags int null)");
    stmt.executeUpdate(`INSERT INTO user (id, name, flags) VALUES (1, "John", 5), (2, "Andrei", 2), (3, "Walter", 2), (4, "Rikki", 3), (5, "Iain", 0), (6, "Robert", 1)`);
    // our POD struct
    struct User {
        long id;
        string name;
        int flags;
    }
    // let's execute select and dump the query result
    foreach(user; stmt.select!User.where("id < 6").orderBy("name desc")) {
        writeln("id:", user.id, " name:", user.name, " flags:", user.flags);
    }

Output is:
------------------
id:3 name:Walter flags:2
id:4 name:Rikki flags:3
id:1 name:John flags:5
id:5 name:Iain flags:0
id:2 name:Andrei flags:2
------------------

where() and orderBy() are optional - without them it always reads all rows in default order.

Possible improvement: the ability to specify a field list to read only the necessary fields.
Update in v0.2.19: you can specify a field list for select

    writeln("reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)");
    foreach(ref e; stmt.select!(User, "id", "name")) {
        writeln("id:", e.id, " name:", e.name, " flags:", e.flags);
    }

output:

reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)
id:1 name:John flags:0
id:2 name:Andrei flags:0
id:3 name:Walter flags:0
id:4 name:Rikki flags:0
id:5 name:Iain flags:0
id:6 name:Robert flags:0
Small addition for D-style selects:

    // now it's possible to use vars as destination
    long id;
    string name;
    foreach(e; stmt.select!()("SELECT id, name FROM user", id, name)) {
        writeln("id:", id, " name:", name);
    }

Result:

id:1 name:John
id:2 name:Andrei
id:3 name:Walter
id:4 name:Rikki
id:5 name:Iain
id:6 name:Robert
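The "bind local variables as destinations" pattern above can be illustrated with a toy range that has no database behind it. Everything here (`BoundSelect`, the hard-coded rows) is invented for the sketch; ddbc would fill the bound variables from the current ResultSet row instead.

```d
import std.stdio;

// A range that writes each "row" into variables bound at construction.
struct BoundSelect {
    long* id;
    string* name;
    static struct Row { long id; string name; }
    Row[] rows = [Row(1, "John"), Row(2, "Andrei"), Row(3, "Walter")];
    size_t index;

    bool empty() const { return index >= rows.length; }
    void popFront() { ++index; }
    int front() {
        // copy the current row into the bound variables
        *id = rows[index].id;
        *name = rows[index].name;
        return 0; // dummy element; callers read the bound vars instead
    }
}

void main() {
    long id;
    string name;
    foreach (_; BoundSelect(&id, &name))
        writeln("id:", id, " name:", name);
}
```

The foreach body never touches the range element itself; it only reads the locals, which is exactly what makes the `select!()(sql, id, name)` form feel D-style.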
Feb 04 2015
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
I see problems with licensing and external dependencies.
Feb 03 2015
parent reply "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 12:06:25 UTC, Kagamin wrote:
 I see problems with licensing and external dependencies.
A license on the API? (like Sun suing Google over its implementation of the Java API?) Or on third-party libraries used by the DB drivers?
Feb 03 2015
parent reply "Kagamin" <spam here.lot> writes:
The database client library or the database itself in case of 
sqlite, i.e. code you link with. Dependency also means you should 
get it somewhere, possibly compiling with dmc.
Feb 03 2015
parent "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Tuesday, 3 February 2015 at 14:03:22 UTC, Kagamin wrote:
 The database client library or the database itself in case of 
 sqlite, i.e. code you link with. Dependency also means you 
 should get it somewhere, possibly compiling with dmc.
If only the interfaces are included in Phobos, the external dependencies are not Phobos's problem.
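The "interfaces only in Phobos, drivers elsewhere" idea can be sketched in a few lines of plain D. The names below mirror the proposal (Connection, Driver, a URL factory), but the registry and DummyDriver are made up for the illustration - they are not ddbc's or Phobos's actual API:

```d
import std.stdio;

interface Connection { void close(); }
interface Driver {
    bool canHandle(string url);
    Connection connect(string url, string[string] params);
}

// Factory and registration point; a driver package would call
// registerDriver from its __gshared static this().
__gshared Driver[] registeredDrivers;

void registerDriver(Driver d) { registeredDrivers ~= d; }

Connection createConnection(string url, string[string] params = null) {
    foreach (d; registeredDrivers)
        if (d.canHandle(url))
            return d.connect(url, params);
    throw new Exception("No driver registered for URL: " ~ url);
}

// A stand-in for what a real sqlite/mysql driver package would provide.
class DummyConnection : Connection { void close() {} }
class DummyDriver : Driver {
    bool canHandle(string url) {
        import std.algorithm.searching : startsWith;
        return url.startsWith("dummy:");
    }
    Connection connect(string url, string[string] params) {
        return new DummyConnection;
    }
}

void main() {
    registerDriver(new DummyDriver);
    auto conn = createConnection("dummy:testdb");
    scope(exit) conn.close();
    writeln("connected via ", registeredDrivers.length, " registered driver(s)");
}
```

Only the two interfaces and the factory would need to live in the standard library; the drivers, with all their licensing and linking baggage, stay in external DUB packages.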
Feb 03 2015
prev sibling next sibling parent reply "Szymon Gatner" <noemail gmail.com> writes:
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
 I would like to propose Java way for implementation of DB 
 access (JDBC - Java DataBase Connectors).
Please no. If anything, *any* new library for D should be based on a C++ version and then made nicer with D features. Basing things on Java is a major step back even in C++. Next thing we know, we will need tons of XML to configure database connections and ORM.

This should have happened from the start with the logging library too (it should have been based on boost.log), and in this case one should look into SOCI (http://soci.sourceforge.net/) and not Java versions.
Feb 03 2015
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 3 February 2015 at 19:54:52 UTC, Szymon Gatner wrote:
 On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
 I would like to propose Java way for implementation of DB 
 access (JDBC - Java DataBase Connectors).
Please no. If anything, *any* new library for D should be based on a C++ version and then made nicer with D features. Basing things on Java is a major step back even in C++. Next thing we know, we will need tons of XML to configure database connections and ORM.

This should have happened from the start with the logging library too (it should have been based on boost.log), and in this case one should look into SOCI (http://soci.sourceforge.net/) and not Java versions.
The only usable C++ version I know is ADO, even if it is Windows only, and it is quite similar to JDBC. I never saw anything better than it for C++. Poet appears to be gone.

--
Paulo
Feb 03 2015
prev sibling parent "Kagamin" <spam here.lot> writes:
On Tuesday, 3 February 2015 at 19:54:52 UTC, Szymon Gatner wrote:
 This should have happened from the start with logging library 
 too (should have been based on boost.log) and in this case one 
 should look into SOCI (http://soci.sourceforge.net/) and not 
 Java versions.
https://github.com/SOCI/soci/blob/master/src/core/soci-backend.h

The backend interface roughly follows the ADO design, but it is rather big and uses polymorphic types extensively too.
Feb 04 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/1/15 8:00 PM, Vadim Lopatin wrote:
 I would like to propose Java way for implementation of DB access (JDBC -
 Java DataBase Connectors).
[snip]

I think we should use ODBC as the foundation and build neat D-ish stuff on top of it. It's a mature technology with tried and tested drivers available on all major OSs.

Mark Isaacson built an ODBC driver for Presto in D while an intern at Facebook: https://github.com/prestodb/presto-odbc. We should write the corresponding ODBC client part and create D abstractions on top of it.

Andrei
Feb 04 2015
parent "Vadim Lopatin" <coolreader.org gmail.com> writes:
On Wednesday, 4 February 2015 at 16:46:19 UTC, Andrei 
Alexandrescu wrote:
 On 2/1/15 8:00 PM, Vadim Lopatin wrote:
 I would like to propose Java way for implementation of DB 
 access (JDBC -
 Java DataBase Connectors).
[snip]

I think we should use ODBC as the foundation and build neat D-ish stuff on top of it. It's a mature technology with tried and tested drivers available on all major OSs.

Mark Isaacson built an ODBC driver for Presto in D while an intern at Facebook: https://github.com/prestodb/presto-odbc. We should write the corresponding ODBC client part and create D abstractions on top of it.

Andrei
IMHO, ODBC support should be implemented as just another driver, as is done in other DB access libraries. E.g. we have a native MySQL client written in D, mysqln, which allows accessing MySQL without external dependencies. Likewise, accessing SQLite DBs via ODBC seems weird.
Feb 04 2015