digitalmars.D - H1 2015 - db access support in Phobos
- Vadim Lopatin (24/24) Feb 01 2015 I would like to propose Java way for implementation of DB access
- AndyC (15/39) Feb 02 2015 -1 on copy Java. It seems over complicated.
- Vadim Lopatin (27/72) Feb 02 2015 Did you ever write complex DB applications?
- Daniel Kozak (24/48) Feb 03 2015 Yep this is way to go, I have start implementing JPA for
- Vadim Lopatin (38/92) Feb 03 2015 I did almost the same :) https://github.com/buggins/hibernated
- Gary Willoughby (4/28) Feb 03 2015 Seems like a good idea. I'd also look at Go's implementation as
- Robert burner Schadek (6/6) Feb 03 2015 IMO the java way is to uncreative. We have UDA, CTFE and string
- Vadim Lopatin (17/23) Feb 03 2015 What you are talking about is a bit different, higher level
- Robert burner Schadek (6/11) Feb 03 2015 really, does it? there is no need for an abstraction layer. you
- Vadim Lopatin (47/58) Feb 03 2015 ORM operations are not a simple single query/statement.
- Robert burner Schadek (6/10) Feb 03 2015 IMO writing:
- Vadim Lopatin (41/54) Feb 03 2015 Implemented in v0.2.17 for select.
- Vadim Lopatin (17/63) Feb 03 2015 Update in v0.2.19: you can specify field list for select
- Daniel Kozák via Digitalmars-d (6/81) Feb 03 2015 This is dangerous zone, you should not do this ;-). In my work I have
- Kagamin (4/4) Feb 03 2015 In my experience such optimizations are usually needed only for
- FG (4/30) Feb 03 2015
- Daniel Kozák via Digitalmars-d (19/45) Feb 03 2015 For blobs or other unneeded data you can setup column as lazy fetch. Wha...
- Jacob Carlborg (6/10) Feb 03 2015 Martin Nowak showed, somewhere here on the forums, how this can be
- "Marc Schütz" <schuetzm gmx.net> (3/7) Feb 04 2015 The more interesting application is prefetching for associated
- Vadim Lopatin (7/92) Feb 03 2015 It's not considered as a main API. It's not madatory to import
- Daniel Kozák via Digitalmars-d (13/98) Feb 03 2015 Ok then it is ok :)
- Robert burner Schadek (2/2) Feb 03 2015 nice!
- Vadim Lopatin (16/87) Feb 04 2015 Small addition for D-style selects:
- Kagamin (1/1) Feb 03 2015 I see problems with licensing and external dependencies.
- Vadim Lopatin (4/5) Feb 03 2015 License on API? (like Sun sued Google for implementation of Java
- Kagamin (3/3) Feb 03 2015 The database client library or the database itself in case of
- Vadim Lopatin (3/6) Feb 03 2015 If only interfaces included into Phobos, it's not a problem of
- Szymon Gatner (10/12) Feb 03 2015 Please no. If anything, *any* new library for D should be based
- Paulo Pinto (7/20) Feb 03 2015 The only usable C++ version I know is ADO, even if it is Windows
- Kagamin (4/8) Feb 04 2015 https://github.com/SOCI/soci/blob/master/src/core/soci-backend.h
- Andrei Alexandrescu (9/11) Feb 04 2015 [snip]
- Vadim Lopatin (7/20) Feb 04 2015 IMHO, ODBC support may be implemented as a driver, as it's done
I would like to propose the Java way (JDBC - Java DataBase Connectors) for implementation of DB access. The standard library must contain only * a set of interfaces for uniform DB access * Connection - can create Statement or PreparedStatement, control transactions * Statement - can run update or query * PreparedStatement - same as Statement, but with parameters * ResultSet - DB query result row access * DataSource - can create connections, w/o parameters * Driver - interface which implements a connection factory for particular DB types * some useful classes like ConnectionPool * a factory method to create a connection by URL and parameter set * some method to register a DB driver with the factory - e.g. when called from the driver's module __gshared static this() Drivers may be placed in separate packages in the standard library or in some DUB packages. Look at https://github.com/buggins/ddbc Currently it contains drivers for sqlite, MySQL and PostgreSQL. If Andrei/Walter wish, I can prepare a pull request based on code from ddbc. Best regards, Vadim
Feb 01 2015
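[Editor's sketch] The interface set the proposal lists can be sketched in D roughly as follows. The names and signatures mirror the post and ddbc's general style; this is a hypothetical shape for discussion, not an actual Phobos or ddbc API:

```
// Hypothetical sketch of the proposed interfaces - names follow the post, not a real API
interface Connection {
    Statement createStatement();
    PreparedStatement prepareStatement(string sql);
    void setAutoCommit(bool autoCommit);   // transaction control
    void commit();
    void rollback();
    void close();                          // connections are resources
}

interface Statement {
    ResultSet executeQuery(string sql);    // SELECT
    int executeUpdate(string sql);         // INSERT/UPDATE/DDL, returns row count
    void close();
}

interface PreparedStatement : Statement {
    void setLong(size_t paramIndex, long value);
    void setString(size_t paramIndex, string value);
    ResultSet executeQuery();              // runs the precompiled statement
    int executeUpdate();
}

interface ResultSet {
    bool next();                           // advance to the next row
    long getLong(size_t column);
    string getString(size_t column);
    void close();
}

interface DataSource {
    Connection getConnection();            // no parameters; configured up front
}

interface Driver {
    Connection connect(string url, string[string] params);
}
```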
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:I would like to propose Java way for implementation of DB access (JDBC - Java DataBase Connectors). [...]-1 on copying Java. It seems overcomplicated. We need a base Database class, then derive PostgresDatabase from that. No need for a Driver class. The PostgresDatabase class contains all the knowledge to connect to the actual database, so what's left for Driver? If we have a Database class, why do we need Connection or DataSource? They don't offer anything Database shouldn't already have. Database (and children), ResultSet and Statement are all that's needed. Statement can have a prepare() function. No need to make an entire PreparedStatement class, seems overkill. I pray we never ever have factories. I like verbs, they make sense. -Andy
Feb 02 2015
On Monday, 2 February 2015 at 18:00:28 UTC, AndyC wrote:-1 on copy Java. It seems over complicated. We need a base Database Class, then derive PostgresDatabase from that. [...]Did you ever write complex DB applications? Connection is an important part of DB access. Statements are always executed on a particular connection. E.g. temporary table operations, transaction management, and other operations require working on the same physical connection. A connection is a resource. It should be managed. It's mandatory to close a physical connection when it is no longer used. But reopening a physical connection to the DB is a high-cost operation, therefore connections are usually reused via connection pools. Often an application needs several connections to do operations in parallel. Some kind of factory is needed anyway, to let the application be independent of the particular DB kind (e.g. to allow configuring the DB type and parameters in some configuration file). Driver is just an abstraction which can be used for creation of DB connections (you just call it Database). PreparedStatement is an important thing - it's not only the ability to specify query parameters: the statement is compiled (prepared) on the server side and can be reused and executed several times with higher performance than a usual query. Prepared statements are resources and should be closable when no longer needed. ResultSets are probably redundant and could be embedded into statements, but then it would not be possible to deal with multiple result sets from the same query.
Feb 02 2015
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:I would like to propose Java way for implementation of DB access (JDBC - Java DataBase Connectors). [...]Yep, this is the way to go, I have started implementing JPA for Dlang (ddpa - dlang data persistence api):

 Entity (Table.name("nazev_tabulky").schema("nazev_schema"))
class C {
     Id (GeneratedValue.strategy(GenerationType.IDENTITY))
    long id;

    auto getId() { return this.id; }
    void setId(long id) { this.id = id; }
}

void main() {
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("test");
    EntityManager em = emf.createEntityManager();
    C obj = em.find!C(1);
}
Feb 03 2015
On Tuesday, 3 February 2015 at 08:23:56 UTC, Daniel Kozak wrote:Yep this is way to go, I have start implementing JPA for Dlang(ddpa - dlang data persistence api): [...]I did almost the same :) https://github.com/buggins/hibernated

// Annotations of entity classes
class User {
    long id;
    string name;
    Customer customer;

     ManyToMany // cannot be inferred, requires annotation
    LazyCollection!Role roles;
}

class Customer {
    int id;
    string name;

    // Embedded is inferred from type of Address
    Address address;

    Lazy!AccountType accountType; // ManyToOne inferred

    User[] users; // OneToMany inferred

    this() {
        address = new Address();
    }
}

 Embeddable
class Address {
    string zip;
    string city;
    string streetAddress;
}

class AccountType {
    int id;
    string name;
}

class Role {
    int id;
    string name;

     ManyToMany // w/o this annotation will be OneToMany by convention
    LazyCollection!User users;
}
Feb 03 2015
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:I would like to propose Java way for implementation of DB access (JDBC - Java DataBase Connectors). [...]Seems like a good idea. I'd also look at Go's implementation as they too only define an interface in the stdlib and rely on others to provide the drivers using the interface.
Feb 03 2015
IMO the Java way is too uncreative. We have UDAs, CTFE and string mixins; anything short of perfect SQL generated at compile time from nothing more than UDA annotations on my structs is an epic loss for D. I mean, who wants to write string statements when the lib can generate them for you - for every SQL incarnation you can think of.
Feb 03 2015
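[Editor's sketch] The compile-time generation Robert describes can be illustrated with a minimal, self-contained example using only std.traits; the User struct and table name are made up for the illustration:

```
import std.stdio : writeln;
import std.traits : FieldNameTuple;

struct User { long id; string name; int flags; }

// Build a SELECT statement from the struct's field names.
// An ordinary function, but usable at compile time via CTFE.
string selectSql(T)(string table)
{
    string sql = "SELECT ";
    foreach (i, field; FieldNameTuple!T)
    {
        if (i) sql ~= ", ";
        sql ~= field;
    }
    return sql ~ " FROM " ~ table;
}

// evaluated entirely at compile time
enum sql = selectSql!User("user");
static assert(sql == "SELECT id, name, flags FROM user");

void main() { writeln(sql); }
```

A real library would additionally honor UDAs for table/column name overrides, but the mechanism - CTFE building the SQL string from struct metadata - is the same.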
On Tuesday, 3 February 2015 at 08:37:04 UTC, Robert burner Schadek wrote:IMO the java way is to uncreative. We have UDA, CTFE and string mixins anything short of perfect SQL generated at compile time without anything else than UDA annotation to my structs is an epic lose for D. I mean, who wants to write string statements when the lib can generate them for you. For every sql incarnation you can think of.What you are talking about is a bit different, a higher level thing: ORM. It's what is done in my project hibernated or in the project of Daniel Kozak he mentioned above. E.g. hibernated uses CTFE and mixins to collect metadata and prepare parts of SQL queries. But such a high level DB library must be based on some lower level DB API (connector), like JDBC for JPA or Hibernate in Java. D may offer a more convenient way of reading field values from a result set and of setting parameters - using templates and other cool D features. Parsing SQL doesn't make sense at the low level. It can determine the number of fields expected in the query result (if * is not used), but it doesn't know column types anyway.
Feb 03 2015
But for such high level DB library must be based on some lower level DB API (connector). Like JDBC for JPA or Hibernate in Java.really, does it? there is no need for an abstraction layer. you can have functions generate the correct source for mysql, sqlite, you name it. e.g. just generate the mysql statement at CT and pass the pointer to the sql c function. pass the parameter. done. Dream big.D may offer more convenient way for reading of field values ... not used), but it doesn't know column types anyway.I don't get your point
Feb 03 2015
On Tuesday, 3 February 2015 at 09:16:14 UTC, Robert burner Schadek wrote:really, does it? there is no need for an abstraction layer. you can have functions generate the correct source for mysql, sqlite, you name it. e.g. just generate the mysql statement at CT and pass the pointer to the sql c function. pass the parameter. done. Dream big.ORM operations are not a simple single query/statement. They often use several queries to load dependent objects.

E.g. you can write a template method on ResultSet for easily getting column values into variables. E.g. a read method can take a variable number of ref args to place the read data into. The template will check the types of the args and do the necessary data conversions when reading.

// read into vars
long id;
string name;
int flags;
rs = statement.executeQuery("SELECT id, name, flags FROM user WHERE flags = 5 ORDER BY name");
while (rs.next(id, name, flags)) {
    writeln(id, " ", name, " ", flags);
}

or

// read into structure or class fields
struct User {
    long id;
    string name;
    int flags;
}
User row;
rs = statement.executeQuery("SELECT id, name, flags FROM user WHERE flags = 5 ORDER BY name");
while (rs.next(row)) {
    writeln(row.id, " ", row.name, " ", row.flags);
}

Of course, some CTFE/UDAs may be used for generation of the field list, but it is not very helpful IMO.

// automatically generated SQL - based on type of struct/class params
User row;
rs = statement.executeSelect(row, "WHERE flags = 5 ORDER BY name");
// actually, sql "SELECT id, name, flags FROM user WHERE flags = 5 ORDER BY name" will be used
while (rs.next(row)) {
    // every time a row is read, struct fields will be updated with new values
    writeln(row.id, " ", row.name, " ", row.flags);
}

UDAs can be used to change the mapping of class/struct name to table name and of field names to column names, and to mark some fields/properties as non-persistent.
Feb 03 2015
On Tuesday, 3 February 2015 at 10:33:25 UTC, Vadim Lopatin wrote:ORM operations are not a simple single query/statement. They often use several queries to load dependent objects.make it an output rangeOf course, some CTFE/UDAs may be used for generation of field list, but it is not very helpful IMO.IMO writing:

foreach(it; db.select!User("...")) { }

is epic. you have entered std.(range|algorithm) land.
Feb 03 2015
On Tuesday, 3 February 2015 at 10:49:07 UTC, Robert burner Schadek wrote:IMO writing: foreach(it; db.select!User("...")) { } is epic. you have entered std.(range|algorithm) land.Implemented in v0.2.17 for select.

test:

auto ds = new ConnectionPoolDataSourceImpl(new SQLITEDriver(), "ddbctest.sqlite");
auto conn = ds.getConnection();
scope(exit) conn.close();
Statement stmt = conn.createStatement();
scope(exit) stmt.close();
// test data preparation
stmt.executeUpdate("DROP TABLE IF EXISTS user");
stmt.executeUpdate("CREATE TABLE user (id INTEGER PRIMARY KEY, name VARCHAR(255) NOT NULL, flags int null)");
stmt.executeUpdate(`INSERT INTO user (id, name, flags) VALUES (1, "John", 5), (2, "Andrei", 2), (3, "Walter", 2), (4, "Rikki", 3), (5, "Iain", 0), (6, "Robert", 1)`);
// our POD struct
struct User {
    long id;
    string name;
    int flags;
}
// let's execute select and dump the query result
foreach(user; stmt.select!User.where("id < 6").orderBy("name desc")) {
    writeln("id:", user.id, " name:", user.name, " flags:", user.flags);
}

Output is:
------------------
id:3 name:Walter flags:2
id:4 name:Rikki flags:3
id:1 name:John flags:5
id:5 name:Iain flags:0
id:2 name:Andrei flags:2
------------------

Where and orderBy are optional - without them it always reads all rows in default order.

Possible improvements: ability to specify a field list to read only the necessary fields.
Feb 03 2015
On Tuesday, 3 February 2015 at 14:41:02 UTC, Vadim Lopatin wrote:Implemented in v0.2.17 for select. [...]Update in v0.2.19: you can specify a field list for select

writeln("reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)");
foreach(ref e; stmt.select!(User, "id", "name")) {
    writeln("id:", e.id, " name:", e.name, " flags:", e.flags);
}

output:

reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)
id:1 name:John flags:0
id:2 name:Andrei flags:0
id:3 name:Walter flags:0
id:4 name:Rikki flags:0
id:5 name:Iain flags:0
id:6 name:Robert flags:0
Feb 03 2015
V Tue, 03 Feb 2015 15:20:40 +0000 Vadim Lopatin via Digitalmars-d <digitalmars-d puremagic.com> wrote:Update in v0.2.19: you can specify field list for select [...]This is a dangerous zone, you should not do this ;-). In my work I have written our ORM-like db layer which has this functionality (mainly for performance reasons), but if I could go back in time, I would be against such a misfeature.
Feb 03 2015
In my experience such optimizations are usually needed only for reading, so the query may return a plain tuple instead of true mapped object. Well, this can depend on whether you have an ORM container.
Feb 03 2015
On 2015-02-03 at 16:30, Daniel Kozák via Digitalmars-d wrote:V Tue, 03 Feb 2015 15:20:40 +0000 Vadim Lopatin via Digitalmars-d <digitalmars-d puremagic.com> napsáno:A misfeature? If the ORM only allowed SELECT *, it wouldn't be of much use with records containing blobs or big amounts of other data. If the object was retrieved only for read and to update a few fields, pulling all columns would be very wasteful. So now, for the purpose of performing an update, each field could be marked as dirty (if its value has changed) or inconsistent (if it wasn't pulled from the database at all). And BTW, each should be Nullable to prevent loss of information for columns that aren't marked as NOT NULL. ORM easily becomes complicated...Update in v0.2.19: you can specify field list for select writeln("reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)"); foreach(ref e; stmt.select!(User, "id", "name")) { writeln("id:", e.id, " name:", e.name, " flags:", e.flags); } output: reading all user table rows, but fetching only id and name (you will see default value 0 in flags field) id:1 name:John flags:0 id:2 name:Andrei flags:0 id:3 name:Walter flags:0 id:4 name:Rikki flags:0 id:5 name:Iain flags:0 id:6 name:Robert flags:0This is dangerous zone, you shoud not do this ;-). In my work I have writen our ORM-like db layer which has this funcionality (mainly performance reasons) but if I can go back in time. I will be against such a misfeature.
Feb 03 2015
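[Editor's sketch] The per-field state FG describes (not fetched / fetched / dirty) can be modeled in D with a small wrapper; the names here are invented for the illustration, not part of any existing ORM:

```
import std.typecons : Nullable;

// Per-column state: absent (never fetched), clean (fetched), or dirty (changed locally)
struct Tracked(T)
{
    private Nullable!T value;
    private bool dirty;

    bool fetched() const { return !value.isNull; }
    bool isDirty() const { return dirty; }

    // called by the mapper when a column arrives from the DB
    void load(T v) { value = v; dirty = false; }

    // called by application code; marks the column for the next UPDATE
    void set(T v) { value = v; dirty = true; }

    T get() const { return value.get; } // throws if never fetched/set
}

void main()
{
    Tracked!int flags;
    assert(!flags.fetched);  // the SELECT didn't include this column
    flags.set(5);
    assert(flags.isDirty);   // only dirty columns go into the UPDATE statement
}
```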
For blobs or other unneeded data you can set up the column as lazy fetch. What I mean is the situation when you have an object in an inconsistent state.

----- Original message -----
From: "FG via Digitalmars-d" <digitalmars-d puremagic.com>
Sent: 3. 2. 2015 17:45
To: "digitalmars-d puremagic.com" <digitalmars-d puremagic.com>
Subject: Re: H1 2015 - db access support in Phobos

On 2015-02-03 at 16:30, Daniel Kozák via Digitalmars-d wrote:This is dangerous zone, you shoud not do this ;-). [...]A misfeature? If the ORM only allowed SELECT *, it wouldn't be of much use with records containing blobs or big amounts of other data. [...]
Feb 03 2015
On 2015-02-03 17:42, FG wrote:A misfeature? If the ORM only allowed SELECT *, it wouldn't be of much use with records containing blobs or big amounts of other data. If the object was retrieved only for read and to update a few fields, pulling all columns would be very wasteful.Martin Nowak showed, somewhere here on the forums, how this can be nicely done. The final query will only contain the fields which are actually used in the code. -- /Jacob Carlborg
Feb 03 2015
On Tuesday, 3 February 2015 at 16:42:24 UTC, FG wrote:A misfeature? If the ORM only allowed SELECT *, it wouldn't be of much use with records containing blobs or big amounts of other data. If the object was retrieved only for read and to update a few fields, pulling all columns would be very wasteful.The more interesting application is prefetching for associated relations when needed, to avoid the infamous N+1 problem.
Feb 04 2015
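[Editor's sketch] The N+1 problem and the prefetch Marc mentions, shown with a toy in-memory "database" so the example is self-contained; queryCount stands in for round trips to a real server, and all names are invented for the illustration:

```
import std.algorithm : filter;
import std.array : array;

struct Role { int id; string name; int userId; }
struct User { int id; string name; Role[] roles; }

// toy in-memory table standing in for real SQL queries
Role[] roleTable = [ Role(1, "admin", 1), Role(2, "dev", 1), Role(3, "dev", 2) ];
int queryCount; // counts simulated round trips

Role[] queryRolesForUser(int userId)  // N+1 style: one query per parent row
{
    ++queryCount;
    return roleTable.filter!(r => r.userId == userId).array;
}

Role[] queryRolesForAllUsers()        // prefetch style: one query for everything
{
    ++queryCount;
    return roleTable.dup;
}

void main()
{
    auto users = [ User(1, "John"), User(2, "Andrei") ];

    // N+1: 1 query for the users + N queries for their roles
    queryCount = 1; // pretend the user SELECT already ran
    foreach (ref u; users) u.roles = queryRolesForUser(u.id);
    assert(queryCount == 3);

    // prefetch: 2 queries total, roles grouped by user_id in memory
    queryCount = 1;
    Role[][int] byUser;
    foreach (r; queryRolesForAllUsers()) byUser[r.userId] ~= r;
    foreach (ref u; users) u.roles = byUser.get(u.id, null);
    assert(queryCount == 2);
}
```

A real ORM would issue `WHERE user_id IN (...)` for just the loaded parents instead of fetching the whole table, but the query-count arithmetic is the same.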
On Tuesday, 3 February 2015 at 15:30:42 UTC, Daniel Kozák wrote:This is dangerous zone, you shoud not do this ;-). In my work I have writen our ORM-like db layer which has this funcionality (mainly performance reasons) but if I can go back in time. I will be against such a misfeature.It's not considered the main API. It's not mandatory to import this module. It's a set of helpers to simplify getting field values from a ResultSet into a struct or tuple, or writing to prepared statement parameters - a kind of syntax sugar. For ORM purposes, a real ORM must be used.
Feb 03 2015
Ok, then it is ok :)

----- Original message -----
From: "Vadim Lopatin via Digitalmars-d" <digitalmars-d puremagic.com>
Sent: 3. 2. 2015 18:00
To: "digitalmars-d puremagic.com" <digitalmars-d puremagic.com>
Subject: Re: H1 2015 - db access support in Phobos

On Tuesday, 3 February 2015 at 15:30:42 UTC, Daniel Kozák wrote: [...] It's not considered as a main API. It's not mandatory to import this module. It's a set of helpers to simplify getting field values from ResultSet into struct or tuple, or write to prepared statement parameters, a kind of syntax sugar. For ORM purposes, real ORM must be used.
Feb 03 2015
On Tuesday, 3 February 2015 at 15:20:41 UTC, Vadim Lopatin wrote:
> On Tuesday, 3 February 2015 at 14:41:02 UTC, Vadim Lopatin wrote:
>> On Tuesday, 3 February 2015 at 10:49:07 UTC, Robert burner Schadek wrote:
>>> IMO writing:
>>>
>>>     foreach(it; db.select<User>("...")) { }
>>>
>>> is epic. you have entered std.(range|algorithm) land.
>>
>> Implemented in v0.2.17 for select.
>>
>> Update in v0.2.19: you can specify field list for select
>>
>>     writeln("reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)");
>>     foreach(ref e; stmt.select!(User, "id", "name")) {
>>         writeln("id:", e.id, " name:", e.name, " flags:", e.flags);
>>     }
>>
>> output:
>>
>>     reading all user table rows, but fetching only id and name (you will see default value 0 in flags field)
>>     id:1 name:John flags:0
>>     id:2 name:Andrei flags:0
>>     id:3 name:Walter flags:0
>>     id:4 name:Rikki flags:0
>>     id:5 name:Iain flags:0
>>     id:6 name:Robert flags:0
>
> Small addition for D-style selects:
>
>     // now it's possible to use vars as destination
>     long id;
>     string name;
>     foreach(e; stmt.select!()("SELECT id, name FROM user", id, name)) {
>         writeln("id:", id, " name:", name);
>     }
>
> Result:
>
>     id:1 name:John
>     id:2 name:Andrei
>     id:3 name:Walter
>     id:4 name:Rikki
>     id:5 name:Iain
>     id:6 name:Robert
Feb 04 2015
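The field-list variant quoted above can be sketched as a compile-time helper that builds the SELECT column list from the given names. Again, this is only an illustration of the technique, not ddbc's real implementation; columnList is a made-up name:

```d
import std.array : join;
import std.traits : FieldNameTuple;

// Build a SELECT column list from an explicit set of field names;
// with no names given, fall back to all fields of T.
string columnList(T, fieldNames...)()
{
    string[] cols;
    static if (fieldNames.length == 0)
    {
        // all fields of the struct, via compile-time reflection
        foreach (name; FieldNameTuple!T)
            cols ~= name;
    }
    else
    {
        foreach (name; fieldNames)
            cols ~= name;
    }
    return cols.join(", ");
}

struct User { long id; string name; int flags; }

static assert(columnList!(User, "id", "name")() == "id, name");
static assert(columnList!User() == "id, name, flags");
```

Because the list is known at compile time, the generated SQL fragment can be assembled (and even validated) with no runtime cost.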
I see problems with licensing and external dependencies.
Feb 03 2015
On Tuesday, 3 February 2015 at 12:06:25 UTC, Kagamin wrote:
> I see problems with licensing and external dependencies.

License on API? (like Sun sued Google for implementation of the Java API?) Or on the third-party libraries used by DB drivers?
Feb 03 2015
The database client library or the database itself in case of sqlite, i.e. code you link with. Dependency also means you should get it somewhere, possibly compiling with dmc.
Feb 03 2015
On Tuesday, 3 February 2015 at 14:03:22 UTC, Kagamin wrote:
> The database client library or the database itself in case of sqlite, i.e. code you link with. Dependency also means you should get it somewhere, possibly compiling with dmc.

If only interfaces are included in Phobos, it's not a problem for Phobos.
Feb 03 2015
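To make the "interfaces only" idea concrete, here is a rough sketch of what such a Phobos-resident layer could look like, with concrete drivers (sqlite, mysql, ODBC, ...) living in external packages. Names and signatures are invented for illustration, loosely modeled on the JDBC-style API discussed in this thread:

```d
// Illustrative only - not ddbc's API and not an actual proposal.
interface Driver
{
    // url selects the driver-specific connection target
    Connection connect(string url, string[string] params);
}

interface Connection
{
    Statement createStatement();
    void close();
}

interface Statement
{
    ResultSet executeQuery(string sql);
    int executeUpdate(string sql);
    void close();
}

interface ResultSet
{
    bool next();                  // advance to the next row
    string getString(int index);  // 1-based column index
    void close();
}
```

With this split, Phobos itself would carry no link-time dependency on any client library; only the external packages that implement Driver would.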
On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
> I would like to propose Java way for implementation of DB access (JDBC - Java DataBase Connectors).

Please no. If anything, *any* new library for D should be based on a C++ version and then made nicer with D features. Basing things on Java is a major step back even in C++. Next thing we know, we will need tons of XML to configure database connections and ORM. This should have happened from the start with the logging library too (it should have been based on boost.log), and in this case one should look into SOCI (http://soci.sourceforge.net/) and not Java versions.
Feb 03 2015
On Tuesday, 3 February 2015 at 19:54:52 UTC, Szymon Gatner wrote:
> On Monday, 2 February 2015 at 04:00:31 UTC, Vadim Lopatin wrote:
>> I would like to propose Java way for implementation of DB access (JDBC - Java DataBase Connectors).
> Please no. If anything, *any* new library for D should be based on a C++ version and then made nicer with D features. Basing things on Java is a major step back even in C++.
> [...]

The only usable C++ version I know is ADO, even though it is Windows only, and it is quite similar to JDBC. I never saw anything better than it for C++. Poet appears to be gone.

--
Paulo
Feb 03 2015
On Tuesday, 3 February 2015 at 19:54:52 UTC, Szymon Gatner wrote:
> This should have happened from the start with logging library too (should have been based on boost.log) and in this case one should look into SOCI (http://soci.sourceforge.net/) and not Java versions.

https://github.com/SOCI/soci/blob/master/src/core/soci-backend.h

The backend interface roughly follows the ADO design, but is rather big and uses polymorphic types extensively too.
Feb 04 2015
On 2/1/15 8:00 PM, Vadim Lopatin wrote:
> I would like to propose Java way for implementation of DB access (JDBC - Java DataBase Connectors).
> [snip]

I think we should use ODBC as the foundation and build neat D-ish stuff on top of it. It's a mature technology with tried and tested drivers available on all major OSs. Mark Isaacson built an ODBC driver for Presto in D while an intern at Facebook: https://github.com/prestodb/presto-odbc. We should write the corresponding ODBC client part and create D abstractions on top of it.

Andrei
Feb 04 2015
On Wednesday, 4 February 2015 at 16:46:19 UTC, Andrei Alexandrescu wrote:
> I think we should use ODBC as the foundation and build neat D-ish stuff on top of it. It's a mature technology with tried and tested drivers available on all major OSs.
> [...]

IMHO, ODBC support may be implemented as a driver, as it's done in other DB access libraries. E.g. we have a native MySQL client written in D - mysqln. It allows accessing MySQL w/o external dependencies. As well, accessing SQLite DBs via ODBC seems weird.
Feb 04 2015