Thursday, May 6, 2010

Delphi Non-Core Feature Survey: You Can Help

Delphi developers, I want to hear from you. Are you using Delphi's frameworks for unit testing, audits, metrics, and design patterns? If not, why? If yes, to what extent do you use these features? Click here to take the survey.

Here is a little background. Since the release of Delphi 2005, a number of interesting support features have been introduced in Delphi. For example, Delphi 2005 added support for easily creating unit tests, and support for audits, metrics, and design patterns was added in Delphi 2006.

Initially Delphi's support for unit testing was available in all versions (Professional, Enterprise, and Architect). By comparison, the support for audits, metrics, and design patterns required Together, and this product was shipped only with the high-end versions of Delphi. And, on top of that, the Together product stayed with Borland when CodeGear was spun off. So what did that mean for these features as Delphi evolved?

I have to admit that I have not used the audits, metrics, and design pattern features of Delphi, though I have noticed the associated menu items in Delphi's menus. So, when I got a request from a client to include discussion of audits and metrics in an upcoming Delphi class that I am going to deliver, it was time to do some research.

What I found was interesting and puzzling. There is not a whole lot of information out there about these features. And I discovered that not only are these features available in the absence of Together, but they are now even included (with limited support) in the Professional SKU of Delphi 2010.

I have now committed to writing training material on these topics, and this will also lead to my adapting this material for this blog as well as for some magazine articles that I am writing. And this is where your help comes in.

I want to hear from you. If you are a Delphi developer, I want to know which of these features you use, and to what extent. If you don't use these features as they ship in the product, do you use third-party tools that provide similar support?

I have created a short, 10-question survey that should take only a couple of minutes to complete. Please help me by filling out this survey. In addition, please ask your Delphi colleagues to help out as well. Click here to take the survey.

Copyright (c) 2010 Cary Jensen. All Rights Reserved.

Wednesday, May 5, 2010

In-Memory DataSets: ClientDataSets and .NET DataTables Part 6: Applying Updates to a Database

In the preceding article in this series I discussed various techniques that you can use to manage the change cache. In this installment I will conclude that discussion by looking at how you can apply the changes held in the change cache to the underlying database from which the data was originally loaded.

Call the ApplyUpdates method of a ClientDataSet to save any changes made to the in-memory data to the underlying database. Specifically, if you edit data obtained through a dataset provider, and then close or free a client dataset without calling ApplyUpdates, any changes stored in the change log are lost.

When ApplyUpdates is called, the contents of the change log are sent back to the dataset provider for resolution in the context of a transaction, so long as you pass a non-negative integer as the sole parameter of ApplyUpdates (when you pass -1, no transaction is initiated). The dataset provider, in turn, generates the necessary calls to apply the updates to the underlying dataset, one record at a time.
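
For example, here is a minimal sketch, assuming a TClientDataSet named ClientDataSet1 connected to a TDataSetProvider:

//Apply all pending changes, tolerating no failed updates. The sole
//parameter is the maximum number of errors permitted before the
//process aborts, and ApplyUpdates returns the number of changes
//that could not be applied
if ClientDataSet1.ChangeCount > 0 then
  if ClientDataSet1.ApplyUpdates(0) = 0 then
    ShowMessage('All changes applied')
  else
    ShowMessage('One or more changes could not be applied');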

By default this process is handled by a SQLResolver instance, which is created by the dataset provider. The SQL resolver determines the database that needs to be updated, and then creates the necessary SQL statement to apply the changes based on the contents of the change log. Specifically, one SQL statement will be generated for each change that needs to be applied.

Alternatively, the dataset provider can be configured, by setting its ResolveToDataSet property to True, to use the dataset from which it originally read the data to apply the changes. This approach is only possible when the dataset from which the records were read permits data changes.

For example, if the dataset provider gets its data from a TTable, the dataset provider can edit the TTable directly, inserting, deleting, or posting the changes using the TDataSet interface. Again, these changes are applied one at a time. It should be noted that when the dataset provider resolves the changes through the dataset, the dataset's event handlers, such as BeforeDelete and BeforePost, can be used to perform data validation.
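
Here is a sketch of such validation, assuming the provider's source dataset is a TTable named Table1 (the Salary rule is hypothetical):

procedure TDataModule1.Table1BeforePost(DataSet: TDataSet);
begin
  //This fires while the provider resolves each insert or update
  //(the provider's ResolveToDataSet must be True); raising an
  //exception here causes that record's update to fail
  if DataSet.FieldByName('Salary').AsFloat < 0 then
    raise Exception.Create('Salary cannot be negative');
end;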

By comparison, if the dataset provider gets its data from a dataset that does not permit data editing, the dataset provider cannot resolve the data to the dataset. For example, if the dataset provider gets its data from a SQLDataSet, a dataset that retrieves its data using a unidirectional cursor and which does not permit editing, the dataset provider cannot resolve the changes directly to the dataset.

In these cases, there are two options: either the default SQLResolver described earlier can be used, or you can write a BeforeUpdateRecord event handler. From within this event handler your code is given a reference to the changes that must be applied, and your code can take any action necessary to explicitly apply these changes. This approach is the most flexible, although also the most difficult to implement.
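
The following sketch gives the flavor of this approach. It assumes a TSQLQuery named SQLQuery1 on the same data module, and the table and field names are illustrative. It handles only deletions, marking each one as applied so the provider does not try to apply it again:

procedure TDataModule1.DataSetProvider1BeforeUpdateRecord(Sender: TObject;
  SourceDS: TDataSet; DeltaDS: TCustomClientDataSet;
  UpdateKind: TUpdateKind; var Applied: Boolean);
begin
  if UpdateKind = ukDelete then
  begin
    //Explicitly delete the record using a parameterized query,
    //reading the original key value from the delta packet
    SQLQuery1.SQL.Text := 'DELETE FROM customers WHERE CustNo = :CustNo';
    SQLQuery1.ParamByName('CustNo').Value :=
      DeltaDS.FieldByName('CustNo').OldValue;
    SQLQuery1.ExecSQL;
    //Setting Applied to True tells the provider this change is handled
    Applied := True;
  end;
end;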

What kind of SQL (or direct edit) is generated is controlled by the UpdateMode of the DataSetProvider. If set to upWhereAll, a record is updated only if that exact record currently exists in the database. If set to upWhereChanged, the record is updated if a record with the same key fields and the same original values in the modified fields is found (this is a merge). When set to upWhereKeyOnly, an update is made if a record with the same key is found (last to post wins).
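
For example, here is a sketch of the WHERE clauses that might result for a customers table with key CustNo, where only the Phone field was modified (the field and parameter names are illustrative, not the resolver's exact generated text):

//upWhereAll:     WHERE CustNo = :Old_CustNo AND Name = :Old_Name
//                  AND Phone = :Old_Phone (every original value tested)
//upWhereChanged: WHERE CustNo = :Old_CustNo AND Phone = :Old_Phone
//upWhereKeyOnly: WHERE CustNo = :Old_CustNo
DataSetProvider1.UpdateMode := upWhereChanged;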

Updating .NET DataSets is similar to updating ClientDataSets. In that case, however, the DbDataAdapter class is the one typically used to apply the updates. DbDataAdapters, such as DataStoreDataAdapter, have four DbCommand properties. The SelectCommand property contains the SQL statement that returns the result set that is inserted into the DataTable when you call the Fill method. The other DbCommand properties, DeleteCommand, InsertCommand, and UpdateCommand, are designed to hold parameterized queries that will get their parameter values from the change log at runtime when you call the DbDataAdapter.Update method.

Some developers write the SQL statements that define the DeleteCommand, InsertCommand, and UpdateCommand objects manually. Doing so gives them control over the queries, permitting the queries to be optimized for the underlying database. Other developers use a CommandBuilder to generate these queries. CommandBuilders are easy to use, but do not always generate optimized queries.

To use a CommandBuilder, you call its constructor, passing to it the DbDataAdapter whose SelectCommand command has already been defined. Based on this query, the CommandBuilder generates the DeleteCommand, InsertCommand, and UpdateCommand queries.

The following code shows the configuration of a CommandBuilder:

Connection1 := DataStoreConnection.Create(
  'host=LocalHost;user=sysdba; ' +
  'password=masterkey;database="C:\Users\Public\Documents\' +
  'Delphi Prism\Demos\database\databases\BlackfishSQL\employee"');
Connection1.Open();
//Sql statements are executed by IDbCommand objects
Command1 := DataStoreCommand.Create('SELECT * FROM customers',
  Connection1);
//DbDataAdapters are used to populate DataTables and resolve data
DataAdapter1 := DataStoreDataAdapter.Create(Command1);
//CommandBuilders create DbCommand objects for a DataAdapter's
//DeleteCommand, InsertCommand, and UpdateCommand properties based
//on the DbDataAdapter.SelectCommand IDbCommand property
CommandBuilder1 := DataStoreCommandBuilder.Create(DataAdapter1);
DataSet1 := DataSet.Create;
DataAdapter1.Fill(DataSet1);

The following code shows how the changes made to the DataTable are applied.

var
  DataTable1: DataTable;
begin
  //GetChanges returns nil when the change log is empty, so Update
  //is called only when there is something to apply
  DataTable1 := DataSet1.Tables[0].GetChanges;
  if DataTable1 <> nil then
    DataAdapter1.Update(DataSet1.Tables[0]);
end;

Applying Updates and Persisted Data

Probably one of the most important characteristics of an in-memory dataset's ability to apply its updates to the underlying database is that it works hand in hand with the dataset's ability to persist its data and state to a file, stream, or database. Together, these features permit an in-memory dataset to apply its updates at some future time, regardless of whether it has been persisted, and regardless of how long it has been persisted.

In order for a dataset to be able to apply its updates to a database subsequent to its persistence, the dataset's change log must be intact. Without the change log, there is no way to determine which changes have been made to the dataset since it was originally populated.
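
For example, because a ClientDataSet's SaveToFile preserves the change log along with the data, a sketch like the following can span two sessions (the file name is hypothetical):

//Session 1: persist the data, change log included, to a file
ClientDataSet1.SaveToFile('C:\data\customers.cds');

//Session 2, some time later: reload the file and apply the
//still-pending changes through the dataset provider
ClientDataSet1.LoadFromFile('C:\data\customers.cds');
if ClientDataSet1.ChangeCount > 0 then
  ClientDataSet1.ApplyUpdates(0);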

If you are relying on a DataSetProvider (for ClientDataSets) or a DbDataAdapter implementation (for .NET) to apply the updates to the database, the object must be in a state compatible with applying the dataset's updates. For example, a DataSetProvider that you use to apply a previously persisted ClientDataSet's updates to a database must point to a TDataSet whose structure is consistent with the one that was used to originally load the ClientDataSet (unless you are using the DataSetProvider's BeforeUpdateRecord event handler to programmatically apply the update, in which case it's all up to your code).

In the case of a .NET DataTable, the DbDataAdapter that is used to apply its updates must hold DbCommand instances in its DeleteCommand, InsertCommand, and UpdateCommand properties that contain parameterized queries sufficient to the task of applying those updates. This can be achieved by defining these queries manually, or by providing an adequate DbDataAdapter.SelectCommand from which a CommandBuilder can construct the necessary delete, insert, and update queries (of course, you must then actually use a CommandBuilder to create those query objects). Otherwise, you once again must take matters into your own hands, programmatically examining the change log and generating all of the calls needed to update the underlying database.
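
On the .NET side, persisting a DataSet as a DiffGram preserves both the current and the original row versions, which keeps the change log intact. Here is a sketch, continuing the earlier example (the file name is hypothetical):

//Session 1: write the DataSet, change log included, as a DiffGram
DataSet1.WriteXml('C:\data\customers.xml', XmlWriteMode.DiffGram);

//Session 2, some time later: reload the DiffGram and apply the
//pending changes through a suitably configured DbDataAdapter
DataSet2 := DataSet.Create;
DataSet2.ReadXml('C:\data\customers.xml', XmlReadMode.DiffGram);
DataAdapter1.Update(DataSet2.Tables[0]);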

What If Updates Cannot Be Applied?

There is another issue that is worth mentioning: even when you can construct the necessary queries to apply the updates contained in a persisted in-memory dataset, those updates may not be possible. It is conceivable that, between the time the data was originally loaded into the in-memory dataset and the time at which you want to write it back to the underlying database, the corresponding records in the database have been changed.

While many developers worry about this possibility, it is normally less of a concern than it might at first appear. For example, if you delete a record from an in-memory dataset, and then attempt to apply that deletion (at a later time), but find that the record was already deleted by someone else, who cares? It's gone. Mission accomplished (by someone, at least).

Similarly, if you attempt to apply a record insertion, only to find a record with that key already inserted, then the insertion is not necessary. But what, you might ask, if another user inserted a record with the same key as you (which causes your insertion to fail), but the other user's inserted record is different than the one you intended?

This is really an architectural issue, isn't it? Many of today's developers avoid this issue altogether by assigning arbitrary (and more or less meaningless) primary keys to each and every record. For example, many developers assign GUIDs (globally unique identifiers, 128-bit values that are guaranteed to be unique) to each record. In these cases, duplicate records are impossible, no matter who inserts the record.

Updates are a bit more problematic, but nonetheless generally workable. In most cases it is once again an architectural issue. There are, in fact, three options. In the first case, an attempt to update a record that you are holding in memory fails if someone else has changed any part of that record (in the underlying database) since the time you originally loaded it into memory. In this situation, your update query should include a test for all original field values in the WHERE clause of the update query (this is the behavior you get from a DataSetProvider whose UpdateMode is upWhereAll).
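
For .NET DataTables, when you define the UpdateCommand manually, such a query might look like the following sketch (the table, field, and parameter names are illustrative, and parameter-marker syntax varies by ADO.NET provider):

//Every original value appears in the WHERE clause, so the UPDATE
//matches zero rows, and the change fails, if anyone else has
//modified the record since it was read. Each parameter must also
//be added to the command's Parameters collection with SourceColumn
//set, and with SourceVersion set to DataRowVersion.Original for
//the Old_ parameters
DataAdapter1.UpdateCommand := DataStoreCommand.Create(
  'UPDATE customers SET Name = :Name, Phone = :Phone ' +
  'WHERE CustNo = :Old_CustNo AND Name = :Old_Name ' +
  'AND Phone = :Old_Phone', Connection1);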

In reality, failure to update a record that has been subsequently updated by another user, when those updates must be entirely exclusive, is rarely a tragedy.

The second case is to permit two or more users to update a record, so long as neither user changes the key field values or a common field. This is a fine compromise in many situations. For example, if, between the time that you read your record into memory and the time that you attempt to apply your update, another user has changed that record, but not one of the fields that your user changed, who cares? Simply include the record's key fields and your changed fields in the WHERE clause of the update query, ignoring the non-key fields you did not change. (If you use an arbitrary key field, such as a GUID, no one should ever have a reason to change the key value, and doing so should be prohibited.)

The third option, and one that is appropriate for some database applications, is simply that the last user to attempt to write to a record be permitted to do so, overwriting any previous user's applied updates. For example, imagine that five different sensors record the current temperature in a given area. If all sensors are equally correct, the most current update attempt is the most accurate. Who cares about a temperature that was written ten minutes ago? A similar analogy can be made with respect to stock prices. A more current stock price is more accurate. Past updates are old news.

Copyright (c) 2010 Cary Jensen. All Rights Reserved.