Monday, November 29, 2010

Hey! Debugger. Leave My Exception Alone!

(with apologies to Roger Waters)

In a recent post (Breakpoints with Side Effects) I wrote about breakpoints with side effects. One of the side effects that I mentioned was the ability to ignore all subsequent exceptions, as well as restore the normal response of the debugger to encountered exceptions (pause the execution of your code).

In response, a reader posted a comment asking how to instruct the debugger to ignore exceptions, particularly those that cannot be anticipated. He wrote "when debugging unit-tests that have to trigger exceptions ... I'm not interested in the expected exceptions, I'm interested in the unexpected ones! Having the debugger break on them all, can be quite inconvenient."

Fortunately, Delphi and its debugger provide you with a number of mechanisms for ignoring exceptions. These techniques can be divided into two general categories. The first is to disable breaking on exceptions, even though the debugger is active. The second is to run your code without activating the debugger (but with the option to activate it at runtime). Let's begin with the first of these two approaches.

Disabling Exception Breaking

When you are running your application from the Delphi IDE, and the debugger is active, there are a number of ways to instruct the debugger to ignore exceptions. And by "ignore," I mean to instruct the debugger to not pause execution of your application when an exception is encountered.

Under normal circumstances, when you are running an application with the debugger active, an encountered exception causes the debugger to pause the execution of your code and display a dialog box describing the exception that occurred. This happens whether or not the exception is caught by an exception block.

For example, consider this overly simplistic code segment.

var
  i, j, k, l, m: Integer;
begin
  i := 23;
  j := 0;
  l := 2;
  m := 4;
  try
    k := i div j * (l div m);
  except
    k := 0;
  end;
  ShowMessage(IntToStr(k));
end;

In this code segment, an EDivByZero exception is raised within the try block. But even though that exception is handled by an except clause, if you are running this code from the IDE with the debugger active, the debugger will pause your code, and display a message like the one shown in the following figure.


Figure 1. An exception has caused the debugger to pause execution

At this point, you have the option of selecting Break, which places you in debug mode, or selecting Continue, which resumes the execution of your code. The point here is that even though the exception is already handled by your code, the debugger has taken it upon itself to get involved and pause the execution of your program.

The problem is even worse if the exception is not raised within the try block of a try-except. In those cases, selecting Continue causes the program to resume, at which point it immediately displays the exception in the default exception handler dialog box, the one intended to display the exception to the end user.

In situations where you do not want the debugger to pause the execution of your code in response to an exception, you have several options. As described in my previous article, you can set a non-breaking breakpoint at some point in your code prior to the execution of the code segment that raises the exception, and use the Ignore subsequent exceptions Advanced breakpoint option, shown in Figure 2, to instruct the debugger to ignore the exception.


Figure 2. The Breakpoint properties dialog box in Advanced mode

If you want to restore the normal operation of the debugger, you would then place another non-breaking breakpoint downstream from the code that raises the exception, and set its Advanced breakpoint property to Handle subsequent exceptions.

This approach, while effective, is time consuming to put into place. It requires you to anticipate those locations in your code where you want to ignore exceptions, and place corresponding pairs of non-breaking breakpoints on either side of these code segments to ignore and then to once again handle exceptions.

Ignoring Specific Exceptions

If you are only concerned about specific exceptions, there is a much easier alternative, which first became available in Delphi 2005. From the Language Exceptions node of the Options dialog box, shown in Figure 3, you can instruct the debugger to globally ignore specific exception hierarchies.


Figure 3. The Language Exceptions node of the Options dialog box

To do this, click the Add button to display the Add Language Exception dialog box, shown in Figure 4, and add the name of the exception class that you want to ignore.


Figure 4. The Add Language Exception dialog box

Once added, this exception appears in the Exception Types to Ignore list of the Language Exceptions node of the Options dialog box. For instance, if you use the Add Language Exception dialog box to add an exception named EDivByZero, the Exception Types to Ignore list will include EDivByZero, as shown in Figure 5.


Figure 5. A newly added exception hierarchy will be ignored by the debugger

If you now run the previous code segment, the debugger will not pause your code upon encountering the division by zero exception. Likewise, it will also ignore any exceptions that descend from EDivByZero, meaning that this mechanism makes it easy to disable entire hierarchies of exceptions.

If, at a later time, you want to re-enable the debugger's normal behavior with respect to this exception (as well as those that descend from it), display the Language Exceptions node of the Options dialog box, and remove the checkmark from the checkbox next to EDivByZero. Alternatively, you can remove this exception class from the Exception Types to Ignore list altogether by selecting that exception in the list and clicking the Remove button.

If you want to instruct the debugger to ignore every exception that it encounters, the process is even easier. Simply uncheck the Notify on language exceptions checkbox that appears at the bottom of the Language Exceptions node of the Options dialog box. When this checkbox is unchecked, the debugger ignores all language exceptions. (This checkbox can be seen in Figures 3 and 5).

To be honest, it would be unusual to add an exception such as EDivByZero to the Exception Types to Ignore list. Instead, you are more likely to add custom exceptions, those that you define for your own application, to this list. Specifically, you are more likely to add exceptions that you explicitly raise within your application with the raise keyword.

In general, it is considered a poor programming practice to raise an instance of one of Delphi's exception classes in your code. Instead, you should define your own exception classes, and raise one of them.

For example, you may define two classes of exceptions that you potentially raise at runtime from the code within your application. One of these may be raised due to simple user errors, errors that the user needs to be informed about. The other may be serious failures, ones that you cannot handle normally.

Here is an example of such a declaration:

type
  ECustomException = class(Exception)
  end;
  EWarningException = class(ECustomException)
  end;
  ECriticalException = class(ECustomException)
  end;

Assuming that an EWarningException is raised by your code in response to a user's invalid input, this exception is one that you probably do not want the debugger to notify you of (while you are debugging the application). As a result, you probably do not want the debugger to display a dialog box like the one shown in Figure 1.

The simple solution for this problem is to instruct the debugger to ignore EWarningException instances, which is to say that you instruct the debugger to ignore your non-critical exceptions. That's the real power of the Exception Types to Ignore list.
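To make this concrete, here is a sketch of how such a warning exception might be raised in response to invalid input. The validation routine and its parameters are hypothetical; only the exception class comes from the declaration above.

```pascal
uses
  SysUtils;

// Hypothetical validation routine. With EWarningException added to the
// Exception Types to Ignore list, the debugger will not pause when this
// raise statement executes, while unexpected exceptions (such as access
// violations) still cause it to break as usual.
procedure ValidateAge(Age: Integer);
begin
  if (Age < 0) or (Age > 150) then
    raise EWarningException.CreateFmt('Invalid age: %d', [Age]);
end;
```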

Running without Debugging

When the debugger is active, unless you take steps to disable exceptions, raised exceptions cause the debugger to pause the execution of your code. One approach to preventing the debugger from pausing the execution of your application upon encountering an exception is to run your code without the debugger being active.

There are two ways to do this. One is to first compile or build your application, and then to run it outside of the IDE. In other words, after compiling or building your project, follow this by executing the resulting EXE created during compilation.

Since Delphi 2005, there has been an easier way. You can select Run | Run Without Debugging, or simply press Ctrl-Shift-F9. In Delphi XE it's even easier, since there is a Run Without Debugging button on the Debug toolbar.

When you run without debugging, Delphi will first compile your executable, if necessary. It will then run your application, but without the services of the debugger. This means that all breakpoints are ignored (whether they are breaking or non-breaking), and exceptions do not invoke the debugger.

One advantage of running without debugging is that editor features, such as code navigation and code insight, remain active. This means that you can test your running application, and at the same time make changes to and inspect the project from which the application was compiled.

But what if you encounter a problem while running without debugging, and you want to enable the debugger in order to examine what's going on? Here again, Delphi offers a powerful option. You can attach the debugger to the running process (this option also was introduced in Delphi 2005). Once you have attached to the running process, the debugger will be active without your having to stop and restart your application.

To attach the debugger to an application that is running without debugging, select Run | Attach to Process. Delphi responds by displaying the Attach to Process dialog box shown in Figure 6.


Figure 6. The Attach to Process dialog box

If you want to immediately pause the application with the debugger active, ensure that the Pause after attach checkbox is checked before you click the Attach button. Personally, I usually leave this option unchecked, since I probably do not want the debugger to pause the application until it encounters a breakpoint or an exception that it is not instructed to ignore. Nonetheless, when you are ready to attach, click the Attach button.

Once attached, it is as if you had run the application with the debugger active. At this point, editor features such as code insight and class navigation are once again unavailable. However, once you are done using the debugger, you can select Run | Detach from Program to once again return to a state where your application is running without the services of the debugger.

In closing, I want to mention one additional capability that the Attach to Process option provides you. If you are writing a DLL (dynamic link library) in the IDE, but this DLL is loaded by a process for which you do not have source code, you can use Attach to Process to debug your DLL.

Here's how you do it. When you are ready to debug your DLL, set your breakpoints and then compile the DLL program. Then, run the external program (the EXE) that will load your DLL.

Next, select Run | Attach to Process. Use the Attach to Process dialog box to select the program that will load your DLL. Once you are attached, do whatever you need to do to cause that program to load your DLL. As soon as the attached process loads your DLL, and one of your breakpoints is encountered (or an exception is raised), Delphi's debugger will pause the execution inside your DLL source, permitting you to inspect variables, trace into, step over, or whatever else you need the debugger for. When you are done with the debugger, click the Run button or press F9 to resume normal execution of the attached program.

Conclusion

While you need the services of Delphi's debugger, you do not necessarily need it all of the time. By learning how to instruct the debugger to ignore specific exceptions, as well as disable and enable particular breakpoints, you gain more control, and more precision, over your debugging tasks.

This posting is based on an article that I originally published in the Software Development Network Magazine (issue #107). This is the official magazine published by the Software Development Network (http://www.sdn.nl/).

Sunday, October 31, 2010

First Look at Advantage Database Server 10

With the release of Advantage 10, Sybase continues the tradition of consistent improvements to this high-performance, low maintenance database server. In addition to a rich array of additional features and feature enhancements, this release also includes a large number of internal optimizations that will improve the performance of most Advantage client applications simply by upgrading the server to Advantage 10. These improvements add significant value to the already impressive collection of features that make Advantage Database Server a perfect match for small to medium size database applications.

Overview

The Advantage Database Server (ADS) is a high-performance, low maintenance database server by Sybase. ADS supports an impressive set of features often found in high-end databases. These features, in combination with its ease of installation and nearly maintenance free operation, make it a favorite database for vertical market applications.

With each new release of Advantage, the Sybase team has consistently created added value by introducing new and enhanced features, as well as improving the already impressive performance of the server. This tradition continues with this release of Advantage 10.

This post is designed to provide you with a brief overview of the Advantage Database Server, followed by a look at the new and enhanced features found in Advantage 10. If you are already familiar with Advantage, you may want to go directly to the section New Features and Enhancements in Advantage 10.

Overview of Advantage Database Server

The Advantage Database Server provides a unique set of features that make it an ideal database for small to medium size applications. The major features that make Advantage special are described in the following sections.

High Performance

To put it simply, Advantage is fast. Much of its speed comes from its architecture, which is based on ISAM (indexed sequential access method) technology. ISAM makes extensive use of indexes to provide high-speed table searches, filters, and table joins.

Unlike other ISAM technologies, such as dBase and Clipper, Advantage Database Server is a transaction processing, remote database server. As a result, it provides application developers with a reliable, distributed solution for managing data using client/server technology.

Low Maintenance

The Advantage Database Server installs in minutes, and rarely needs attention after that. Indeed, unlike high-end database servers, most Advantage installations do not have a database administrator. This makes Advantage an ideal server for vertical market applications where the server may be installed in many facilities that do not have their own IT department.

Navigational and Set Based Orientation

While Advantage is based on the navigational ISAM architecture, it also supports industry standard SQL (structured query language), with most of the SQL operations optimized for lightning-fast execution. As a result, Advantage is one of the rare remote database servers to support both the navigational model of data access as well as set-based SQL, giving you a wealth of options for presenting and managing your data.

Advanced Feature Set

The Advantage Database Server sports an impressive collection of features often only found in high-end database servers. These include security provided by users and groups, table encryption, and support for encrypted client/server communication. Additional high-end features supported by Advantage include stored procedures, SQL PSMs (persistent stored modules), views, user defined functions, table- and field-level constraints, referential integrity, online backup, triggers, notifications, and replication.

Scalable

Advantage comes in two basic flavors: the Advantage Database Server (ADS) and the Advantage Local Server (ALS). ALS is a free, file-server based technology whose API (application programming interface) is identical to ADS. ALS permits developers to deploy their Advantage applications royalty free to clients who do not need the stability and power of a separate database server.

Importantly, as the needs of those applications deployed with ALS grow over time, those applications can be almost effortlessly scaled to client/server technology, in many cases simply by deploying ADS. So long as the client applications are designed correctly, those applications will begin using ADS the next time they execute.

New Features and Enhancements in Advantage 10

Rather than reciting a laundry list of updates, I have organized the enhancements into the following sections: Major performance improvements, enhanced notifications, additions to Advantage SQL, nested transactions, Unicode support, additional 64-bit clients, added design-time support for Delphi, and side-by-side installation. For a detailed listing of all of the updates found in Advantage 10, see the white paper at the following URL:

http://www.sybase.com/files/White_Papers/Advantage_WhatsNewADS10_WP.pdf

Major Performance Improvements

The Advantage Database Server has always been recognized for its superior performance, being able to handle very large amounts of data with blinding speed. That makes it all the more remarkable that one of the most enticing aspects of upgrading to Advantage 10 involves performance. Specifically, the performance of database operations in client applications will improve simply by upgrading the server to Advantage 10. In some cases, these performance gains will be significant.

Many of the internal systems that contribute to Advantage's already impressive performance were evaluated by Advantage's R&D engineers. Where possible, improved algorithms were introduced, caching was implemented or enhanced, and resources were pooled. These changes resulted in more efficient indexes, improved transaction handling, and more intelligent management of resources such as threads, record locks, and file writes.

The effects of these improvements range from nice to stunning. During Advantage 10's Beta cycle, one of the Beta testers reported the results of his performance tests on some of his larger queries involving, in some cases, millions of records. He found that some Advantage 10 queries executed 40 percent faster than the same queries in Advantage 9. In other cases, the Advantage 10 queries were orders of magnitude faster (one query that ran in 2.7 seconds in Advantage 9 took about 1 millisecond in Advantage 10). The R&D team has found similar improvements during testing.

But SQL queries are not the only area of Advantage to benefit from these internal improvements. Operations that benefit from Advantage's support for navigational operations have also improved. In fact, the Help files for Advantage 10 list no less than 20 specific improvements or optimizations introduced in Advantage 10. And these updates affect everything from cascading referential integrity updates to record insertion, from memo file header updates to table creation, from low-level index operations to worker thread management. Simply put, the performance enhancements introduced in Advantage 10 alone make a solid business case for upgrading from an earlier version of Advantage.

Enhanced Notifications

Notifications are a feature originally introduced in Advantage 9, and they provide you with a mechanism by which Advantage can notify interested client applications that some change has occurred on the server. For example, a client application can subscribe to a notification in order to be informed when the contents of a specific table have changed. The client application can then use this information to update the end user's view of that data.

A small change to notifications in Advantage 10 has resulted in a very significant improvement in their utility: Advantage 10 notifications now support a data packet. This data packet, in the form of a memo field, permits you to include any data you like along with the notification. This data may include the record ID of the record that was affected in the table of interest, the connection id of the user who made the change, the type of change, or any other data you like.

This data permits you to implement advanced features in your notification-subscribing clients. For example, you can now distinguish between changes made by your client application's user and those made by other users. This information can be used to automatically update a user's view of data when someone else has made changes, ignoring those changes posted by that user.

Additions to Advantage SQL

There are many updates and additions to Advantage's support for the structured query language (SQL). Of these, my favorite update is the new ability to use a stored procedure in the FROM clause of a SELECT query.

If you have a stored procedure that returns a result set, you can treat that result set like a table in a SQL SELECT statement, permitting you to select specific fields or expressions from the result set, link the stored procedure result to other tables (or other stored procedure result sets), and define WHERE clause conditions to select just those records in which you are interested. You can even use the predefined Advantage system stored procedures in the FROM clause.
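For example, a query along the following lines becomes possible. The stored procedure GetTopCustomers and the column names here are hypothetical; only the general form of the statement is the point.

```sql
-- Treat the stored procedure's result set like a table: select from it,
-- join it to CUSTOMER, and restrict it with a WHERE clause.
SELECT c.Name, t.TotalSales
FROM (EXECUTE PROCEDURE GetTopCustomers(2010)) t
  INNER JOIN CUSTOMER c ON c.ID = t.CustID
WHERE t.TotalSales > 10000;
```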

Another enhancement is the ability to use Boolean expressions in your SQL statements. For example, if you have a table named CUSTOMER in which a Boolean (logical) field named Active appears, the following query will select all records where the Active field contains True.

SELECT * FROM CUSTOMER WHERE Active;

In previous versions of Advantage, you would have to form your query like the following:

SELECT * FROM CUSTOMER WHERE Active = True;

Also, TOP queries now support a START AT clause, which permits you to select a specific number of records beginning from some position in the result set other than the top. For example, the following query will return records 11 through 15 from the CUSTOMER table, ordered by last name.

SELECT TOP 5 START AT 11 * FROM CUSTOMER ORDER BY LastName;

A collection of bitwise SQL operators has also been introduced. These include AND, OR, and XOR, as well as >> (right-shift) and << (left-shift).
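For instance, assuming an integer Flags column on the CUSTOMER table (a hypothetical example), these operators permit queries such as the following:

```sql
-- Select records whose two low-order bits are both set, and return a
-- left-shifted (doubled) copy of the flags value alongside each ID.
SELECT ID, Flags << 1 AS ShiftedFlags
FROM CUSTOMER
WHERE (Flags AND 3) = 3;
```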

There is also a new SQL scalar function: ISOWEEK, which returns the ISO 8601 week number for a given date (it is also a new expression engine function). In addition, some SQL scalar functions that were previously not expression engine functions now are. These include DAY, DAYOFYEAR, DAYNAME, and MONTHNAME, to name a few. These are in addition to CHAR2HEX and HEX2CHAR, which are newly added expression engine functions. Support in the expression engine means indexes can now be created using these functions, which in turn allows the Advantage query engine to fully optimize restrictions that use these scalars.
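As a quick illustration, ISOWEEK can be called like any other scalar function; here it is applied to the current date, using the single-row system.iota table as the FROM source:

```sql
-- Returns the ISO 8601 week number of today's date.
SELECT ISOWEEK(CURDATE()) AS ThisWeek FROM system.iota;
```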

Finally, there are a number of new system stored procedures and system variables. The following are just a few of the new system stored procedures available in Advantage 10: sp_SetRequestPriority, sp_GetForeignKeyColumns, and sp_IgnoreTableTransactions.

As far as system variables go, among the new variables are ::conn.OperationCount (number of operations performed on this connection), ::stmt.TrigEventType (the event type of the executing trigger), ::stmt.TrigType (the type of trigger executing), and ::conn.TransactionCount (the current nesting depth of nested transactions).

Nested Transactions

Speaking of nested transactions, Advantage 10 now supports them. In previous versions of Advantage, code executing in an active transaction could not attempt to start a transaction without raising an exception. This is no longer the case. As a result, if you write a stored procedure whose operations should be performed in a transaction, you can safely call BEGIN TRANSACTION, even if that stored procedure is called by code where a transaction is already active.
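The following SQL script sketches the idea (the table and column names are hypothetical). In Advantage 10 the inner BEGIN TRANSACTION no longer raises an exception; it simply increases the nesting depth reported by ::conn.TransactionCount.

```sql
BEGIN TRANSACTION;
INSERT INTO Orders (OrderID, CustomerID) VALUES (1, 100);

-- This might execute inside a called stored procedure; in earlier
-- versions of Advantage it would have raised an exception.
BEGIN TRANSACTION;
INSERT INTO OrderDetails (OrderID, PartNo) VALUES (1, 'A-100');
COMMIT WORK;  -- commits the inner, nested level

COMMIT WORK;  -- commits the outer transaction
```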

New Table Features

Several interesting new table-specific features have been introduced in Advantage 10. Several of these are related to transactions and table caching. Let's consider table caching first.

To begin with, so long as memory resources permit, temporary tables are now kept entirely in cache. As a result, operations that rely on temporary tables are usually very fast.

There is also a new table property called Table Caching. Most tables are created with Table Caching set to None. These tables are not cached, and any changes to these tables are written to the underlying file immediately.

When Table Caching is set to either Read or Write, the corresponding table is kept in cache while it is open, making its data highly available. These settings are normally used for data that is largely static, and which can be reconstructed if the table becomes corrupt. Specifically, tables held in cache are not written to disk except when the table is closed. As a result, changes to their data will be lost if Advantage unexpectedly shuts down without being able to persist those tables' contents (for instance, if there is a sudden failure of your server's power supply). However, this functionality can be very useful for static data (zip codes, part numbers, and so forth).

Transaction-free tables are also controlled by a table property, called Trans Free Table. When set to True, the associated table does not participate in transactions.

There are two implications of a table not participating in an active transaction. First, changes made to a Trans Free Table during a transaction are not rolled back even if the transaction itself is rolled back. Second, changes to data in a Trans Free Table are not isolated during a transaction, being immediately visible to all other client applications, even though the transaction has not yet been committed.

Just like when a table's Table Caching property is set to Read or Write, Trans Free Table is set to True only for special tables in most applications. For example, you may use a table to log a user's actions in an application. In those cases, you may want to log that a user tried to perform some task, even though the action may fail and the user's changes may be rolled back.

Similarly, you may have a table used for generating unique key field values. This table may have a single record and single field that holds an integer value. A client needing a key would lock this table, read the key, increment the integer, and then release the lock.

With such a table, the incremented key needs to be visible to all client applications, even if individual clients increment the key from within a transaction. If such a table were not a Trans Free Table, other clients would not be able to access the incremented key until the transaction was committed, rendering the table useless for its intended purpose.
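A minimal sketch of this key-generation pattern, with hypothetical table and column names, might look like this in an Advantage SQL script (the table locking described above is omitted for brevity):

```sql
DECLARE @key Integer;

-- Read and increment the key. Because NextKey is a Trans Free Table,
-- the incremented value is immediately visible to other clients, even
-- when this script runs inside an uncommitted transaction.
SET @key = (SELECT KeyValue FROM NextKey);
UPDATE NextKey SET KeyValue = @key + 1;

INSERT INTO CUSTOMER (ID, Name) VALUES (@key, 'New customer');
```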

Unicode support

Although Unicode support is arguably a table feature, its significance warrants separate consideration. In short, Advantage 10 introduces three new field types. These types, nchar, nvarchar, and nmemo, are UTF-16 Unicode field types. The nchar type is a fixed-length Unicode string field and nvarchar is a variable-length Unicode string field. The data for these two field types is stored entirely in the table file.

The nmemo field, by comparison, is a variable length Unicode field that is stored in the memo file. Together, these three fields provide you with a number of options for storing Unicode symbols and characters in Advantage tables.
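For example, a table using all three of the new Unicode field types might be created as follows (the table and field names are hypothetical):

```sql
CREATE TABLE Messages (
  ID Integer,
  Code NChar(10),        -- fixed-length Unicode, stored in the table file
  Subject NVarChar(100), -- variable-length Unicode, stored in the table file
  Body NMemo             -- variable-length Unicode, stored in the memo file
);
```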

More 64-bit Clients

Advantage 9 introduced 64-bit versions of both the Windows and Linux Advantage servers, as well as 64-bit clients for the Advantage Client Engine (ACE) and the Advantage .NET Data Provider. A number of additional 64-bit clients have been added in Advantage 10, including 64-bit versions of the OLE DB Provider, the ODBC driver, as well as the Linux PHP driver. The Advantage Data Provider for .NET has also been enhanced to use the appropriate 64-bit or 32-bit drivers, depending on the OS on which your managed code is executing.

In addition, 64-bit versions of the Advantage Local Server (ALS) and Advantage backup utility have been introduced in Advantage 10.

Added Design-Time Support in Delphi

The SQL Utility, a comprehensive SQL editor and debugger, is now exposed as a property editor directly within the Delphi IDE (integrated development environment). To use the SQL Utility within Delphi, select the ellipsis button on the SQL property in Delphi's Object Inspector when an AdsQuery component is selected.

Using the SQL Utility, you can check the syntax of your SQL, execute it, and even set breakpoints and debug your SQL scripts. Once you are satisfied with your SQL, click the Save button on the SQL Utility toolbar (or press Ctrl-S) to save your SQL to the SQL property of the AdsQuery.

The Advantage Delphi Components also include a new component, TAdsEvent. This component, which you can use to subscribe to and handle notifications, allows you to easily configure and manage the handling of asynchronous events.

Side-By-Side Installations

With Advantage 10, it is now possible to run two or more instances of the Advantage server on the same physical server, even different versions of Advantage. For example, it is now possible to run Advantage 9 and Advantage 10 on the same server. This feature is particularly useful for vertical market developers whose applications need to support more than one version of the Advantage server.

Conclusion

With the release of Advantage 10, Sybase has once again confirmed its commitment to this unique and valuable database server. In addition to a number of useful additions and enhancements, Advantage 10 also includes a wide range of performance improvements that will improve the performance of most client applications merely by installing this updated server.

Most developers, however, will also want to update their client applications to benefit from the many enhancements found in Advantage 10. From support for Unicode to greatly improved notifications, from updated SQL syntax to enhanced table features, Advantage 10 has something for everybody.

Want to Learn More

Loy Anderson and I have released the latest edition of our Advantage book, Advantage Database Server: A Developer's Guide, 2nd Edition with extensive coverage of ADS 10. For more information and to order this book, please visit:

http://www.jensendatasystems.com/ADSBook10/

Thursday, September 30, 2010

Developing in a Virtual World

Back in December of last year I described how I set up my most recent laptop with VMWare, installing my various development environments in virtual machines instead of on my host operating system (Creating a More Manageable Development Environment). It's been about ten months since I originally set up that machine, and this seems like a fine time to share how the experience has been going.

Overall, I can say that I am very happy with how this is working. I have not had to reinstall my host operating system during the past ten months, and this is the longest stretch I can remember going without a reinstall. I still have the early, clean image of this machine that I created shortly after performing my base install, but the thought of restoring my machine to that state simply has not crossed my mind.

In the past, after four or six months, my operating system had slowed down to the point where I felt it necessary to restore from an early image of my system. But not now. I am pleasantly surprised that my host operating system is humming along fine, thank you. And I even have several applications installed on the host that many developers blame for some of their ills, such as iTunes.

As for the virtual machines, they are doing fine as well. I currently have quite a collection of virtual machines that I use, some more than others. As I described in the previous post, I create a separate virtual machine for each development environment. For example, I have one for RAD Studio 2010, one for RAD Studio 2007, and another for Visual Studio 2008. I recently installed virtual machines for Visual Studio 2010 and Delphi XE. There are also some separate VMs for 32-bit operating systems, though most of them are 64-bit. There are others, but you get the picture.

There are several reasons for this approach. First, by keeping each virtual machine pure (a single development environment) I prevent any possible cross contamination that might compromise one or more of the environments. It's these unintended side effects that I blame for the increasing instability of my previous OS installations. Second, I can test my applications under different conditions, such as 32-bit and 64-bit versions of Windows 7, or under Windows XP.

The third benefit is that I can keep backups of these individual VMs. In fact, I keep several backups of each. There is the clean backup that contains my original installation. I can always fall back to this if something goes terribly wrong, such as a Windows update that hoses one of the VMs. I also keep a current backup of each VM, and carry that around with me on a portable USB drive when I travel.

If something happens to my VM, or even to my laptop, I have a solution. For example, if my laptop simply quits on me, I could load my virtual machine onto any other machine on which VMWare is loaded, and I am ready to go.

Finally, although I have not yet had occasion to confirm this, my thinking is that I can use these VMs on my next laptop. In other words, I will never have to install Delphi XE on Windows 7 ever again. On my next laptop, I will simply copy over the VM, and I should be good to go. Sure, I may have to install it on the next version of Windows (whatever that will be called), but that should only happen once, and then I will be set with Delphi XE and that version of the operating system. (This assumes that I will be supporting clients who are using Delphi XE on that new operating system. If not, then I'll only have to install the more current versions of Delphi on that OS.)

Lessons Learned


I wanted to do more than simply report that this approach seems to be working well. I also wanted to share some of the tricks and insights that I've gained during this experiment. Note, however, that I am using VMWare (VMWare 7.1.2 to be precise). While other virtualization products may support features similar to those I describe here, I am referring specifically to how these features work for me with VMWare.

Memory


Let's begin with memory and the virtual machines. When I initially set up each virtual machine, I configured each to use 3 gigabytes of RAM. My thinking was that I wanted the virtual machine performance to be similar to running my development environment in the host operating system.

My laptop has 8 GB of RAM (a minimum, if you ask me, for taking this particular route), which means that VMs configured for 3 GB each can realistically run only one at a time. (Sure, you could run two at once, but I'd be careful about that. If your VMs consume so much memory that little is left for the host operating system, bad things can and will happen.)

Over the past several days I have cranked several of the VMs down to 2 GB of RAM. Honestly, I really haven’t noticed a difference. The big change, however, is that I can now run two VMs simultaneously, and still have plenty of breathing room for my host OS.

License to Thrill


One thing that I did not go into in any depth in my first post was the licensing issues that arise when you use multiple VMs. In short, you need to have licenses for the software that you use in your virtual machines. This includes those for the operating systems themselves, as well as for the other software that you install on those virtual machines.

Let's consider the operating system first. Having the one copy of Windows that shipped with your computer is not enough. That's probably only one license, and it is being used as your host OS. Depending on the operating system that you are using for your virtual machines, you may need one additional license for each VM.

For example, if you want to install Windows in a virtual machine, you must have a separate license for each virtual machine. No, you cannot rationalize that you only run one at a time. Read the Windows end user license agreement (EULA). It specifically covers virtual machines, and in almost every case, you will need a separate license for each one.

Fortunately, for us developers, there is a rock solid solution from Microsoft: the Microsoft Developer Network, or MSDN. Your MSDN subscription includes the rights to use certain Microsoft products under certain conditions. And even the lowest level of MSDN subscription, MSDN Operating Systems, includes licenses for the current version of Windows. These licenses are specifically designed for testing and development. And, guess what, if you are using your VMs for building and testing software, you are using those licenses for testing and development.

(Disclaimer: If you have specific questions about the various MSDN subscriptions, do not take my word for it. Go to the Microsoft site and read what they say about the various MSDN subscriptions. It is possible that my subscription is different from your subscription.)

At least with the MSDN subscription that I have, I get up to an initial 10 licenses of Windows 7 Ultimate for use in testing and development environments. However, if I use those licenses up, I can request another 10, and another 10 after that, though I cannot imagine needing that many VMs.

My rather low-level subscription only gives me access to the most current operating system. However, some of the more deluxe subscriptions of MSDN give you access to current and past operating systems, which, depending on the type of testing you need to do, might be perfect. Likewise, if your testing and development involves other products, like SharePoint, Azure, Microsoft Office, SQL Server, and other Microsoft products, there are MSDN subscriptions that include testing and development licenses for those products as well.

As far as AntiVirus is concerned, I initially purchased a 3-license pack of Norton Antivirus. That worked fine, until I wanted to install my fourth VM. At that point I got frustrated and began looking for an alternative, as I didn't want to shell out another $100 for 3 more licenses.

Fortunately, there is a free alternative from Microsoft called Microsoft Security Essentials. It works on Windows 7, Windows Vista, and XP, and I am now a big fan. It installs quickly, has a pretty small footprint, and doesn't pester me as much as Norton did. In fact, I am so happy with it that I removed Norton AntiVirus from my host OS (even though I still had more than half a year left on my license), and installed Microsoft Security Essentials there.

Quickie Installations


One feature of VMWare that I really didn't start using until recently was the ability to install software into a VM from an ISO image (an archive format used for optical disks). You can install either an entire operating system, or any other software, using an ISO image, which saves a lot of time.

For example, I recently created a new VM using Windows 7 Ultimate x86. I downloaded the ISO image from the MSDN Web site, and saved it to the hard disk on my host OS. I then created a new virtual machine in VMWare. When prompted, I indicated that I was going to install from an ISO image. Before even starting, VMWare prompted me for my product key, which I entered. It then proceeded to install the operating system, and was done in no time.

Next, I wanted to install Delphi XE under this VM. I had an ISO image of that installer, and instructed VMWare to treat that image on my host hard drive as the CD drive in the VM. Again, the installation was fast and easy.

Snapshots and Linked Clones


Another one of the benefits of VMs is that you can easily perform "what if" tests without much risk of doing harm. Two of the tools provided in VMWare for this purpose are snapshots and linked clones.

Imagine that someone has recommended that you try a new profiling tool, but you do not have much experience with it. Wouldn't it be nice to install it without having to worry about what side effects it might cause, or what trash it may leave behind if you end up uninstalling it?

The most foolproof technique, in my book, is to create a linked clone. A linked clone is a new VM based on an existing one. The big deal here is that a linked clone is very small, since it is relying on a snapshot of an existing VM. Once you have created the linked clone, which takes less than a minute, you can install the suspect piece of software in it and test away.

Once you are satisfied that the software you are testing is going to work for you, you can re-install it into your regular VM and delete the clone. If you decide that the software is not for you, simply delete the linked clone. In any case, what you installed into the linked clone cannot affect the VM you cloned it from.

Snapshots are designed to give you a similar capability. A snapshot is a saved record of the state of a virtual machine at a particular point in time. In theory, you should be able to create a snapshot, and then install the software you want to test. If things go badly, you simply revert the virtual machine to the snapshot, and no more pain.

This all sounds well and fine, and I'm sure it works, but I don't have a whole lot of experience with snapshots, so I am still a little gun shy.

Shared Folders


The final feature that I've started using extensively is shared folders. A shared folder is a location (drive, folder) on your host OS that can be seen by one or more virtual machines.

This is the place where I put files and programs that I am likely to use from any given virtual machine, such as printer drivers, 7-zip, and other utilities. I then use the Shared Folders option from the Hardware tab of the VMWare Virtual Machine Settings dialog to configure folder sharing.

For each VM, I select the Always enabled radio button. I then use the Browse button to select the shared folder. I also enable the Map as a network drive in the guest operating system checkbox. When done, the shared folder appears as a network share in each of the VMs.

Even without this feature, it is easy to copy and paste between VMs. However, having files that you typically use always available from within each VM is even better.

Still, There Are Drawbacks


Don't get me wrong, I have no regrets about going down this road. So far, so good. But there are some issues that I still grapple with.

For one, there are those infernal updates. Holy smokes! Every time Microsoft comes out with a Windows update I've got all these different VMs nagging me to update them. And, once or twice, the update has messed things up a bit.

I've addressed this issue by altering my behavior. I don't update Windows every time it asks. In fact, I normally ignore Windows updates until I have some time to really deal with them. Next, before I do the updates, I copy the virtual machines that I am using in my daily work to one of my backup drives. Then, I systematically open each one of these VMs and perform the update.

For those VMs that I use only on occasion, I wait until I need them. For example, if I am asked to teach a class on Delphi 7, I will backup and then update that VM in preparation for the class.

As a result, some VMs don't get updated that often. However, if an update causes a problem, I've always got that pre-update copy that I can rely upon.

Conclusion


I'm running into a lot of developers who are approaching their development environment this way, so I'm not so cocky as to think that I made this thing up. However, I am happy to share my experiences in this realm. If you are still developing in your base OS, I hope this has given you some insight into whether this approach might serve you better.

Monday, August 30, 2010

Breakpoints with Side Effects

A breakpoint with a side effect is a breakpoint that causes a task to be performed upon encountering the enabled breakpoint within your application. Of course, such a breakpoint only performs its side effect when your code is running with Delphi's debugger attached to the process. Nonetheless, this type of breakpoint can be extremely useful, especially when you are in the testing phase of your application development.

This article shows you how to create breakpoints that cause side effects. It begins with a general overview of breakpoints, and continues with a discussion of non-breaking breakpoints. Finally, how to create breakpoints to produce side effects is explained in detail.


Breakpoint Overview

When most Delphi developers think of breakpoints, they typically think of the debugger feature that interrupts the execution of your code upon executing a specific line of your code. In reality, this definition is far from complete.

First of all, the only type of breakpoint that is associated with a line of your source code is a source breakpoint. There are three other types of breakpoints: address breakpoints, data breakpoints, and module breakpoints.

Address Breakpoints

Address breakpoints permit you to define a breakpoint that will trigger when an instruction at a particular memory address is executed. You can only add an address breakpoint when the debugger is loaded. Then you can select Run -> Add Breakpoint -> Address Breakpoint. Enter the address in the Address field of the Add Address Breakpoint dialog box.

You can also add an address breakpoint from the disassembly pane of the CPU window. With your project stopped in the debugger, display the CPU window by selecting View -> Debug Windows -> CPU. To add an address breakpoint, either click in the left gutter of the disassembly pane in the CPU window on the line associated with the instruction address, press Ctrl-B with your cursor on the instruction line, or right-click the instruction line and select Toggle breakpoint. An address breakpoint is shown in the CPU window in Figure 1.


Figure 1 An address breakpoint in the CPU window


Data Breakpoints

Data breakpoints are those that trigger when a particular memory address is written to. The memory address can be represented either as a hexadecimal number or as the variable used to refer to that memory address. Unlike other breakpoint types, data breakpoints last only for the current Delphi session.

To add a data breakpoint, invoke the debugger (either through an exception or by hitting a breaking breakpoint). Then select Run -> Add Breakpoint -> Data Breakpoint. Delphi responds by displaying the Add Data Breakpoint dialog box, as shown in Figure 2.


Figure 2 Adding a data breakpoint

At Address, enter the memory address of the data. Use Length to define the length of the data beginning at that address. If you are running Delphi 2007 or later, you can also enter the name of a variable in the Address field.

In earlier versions of Delphi, to add a data breakpoint based on a variable name, add the variable into the Watches window. (You display the Watches window by selecting View -> Debug Windows -> Watches from Delphi's main menu.) Then, right-click on the watch entry and select the option Break on change. For data breakpoints added using this technique, you can modify the breakpoint properties by right-clicking on the breakpoint in the Breakpoint List dialog box and selecting Properties. (You can display the Breakpoint List dialog box by pressing Ctrl-Alt-B.)

Module Breakpoints

Module load breakpoints are those that trigger when a specified module is being loaded into your application. For example, if you want to ensure that a particular DLL is not being used by your application, you can set an enabled breaking module breakpoint for it, ensuring that the debugger will load if for some reason that DLL gets loaded by your application.

To add a module breakpoint, select Run -> Add Breakpoint -> Module Load Breakpoint. Delphi responds by displaying the dialog box shown in Figure 3.



Figure 3 The Add Module Breakpoint dialog box

Enter the name of the module in the Module Name field. Click OK when you are done. This is the only way to set a module load breakpoint on a module that is not yet loaded (or even one that will never be loaded).

You can also create a module load breakpoint from the Modules window, but this is only useful if the project is in the debugger and the module has already been loaded. With the debugger loaded, open the Modules window by selecting View -> Debug Windows -> Modules. Then in the Module pane, right-click the module you want to break on, and select Break On Load. Modules that will trigger a module load breakpoint appear in the Module pane with a red dot to their left. To remove a module load breakpoint, right-click the specific module and select Break On Load once again.

Non-Breaking Breakpoints

Another aspect of the breakpoint definition provided at the top of this article that is not completely correct is the part where the breakpoint interrupts the execution of your code. In fact, a breakpoint only interrupts the execution of your code if it is a breaking breakpoint. Non-breaking breakpoints, like breaking breakpoints, trigger when they are active and are encountered by the debugger. Unlike breaking breakpoints, however, they do not stop code execution.

To define a non-breaking breakpoint, display the Breakpoint Properties dialog box for the breakpoint. Click the button labeled Advanced >> to display the Advanced breakpoint properties, as shown in Figure 4, and then disable the Break checkbox. (Module breakpoints are the only breakpoints that do not support a non-breaking mode.)


Figure 4 The Advanced view of the Breakpoint Properties dialog box

At first you might wonder "What is the value of a breakpoint if it does not stop your code execution?". Let me tell you, the value can be very big indeed. Non-breaking breakpoints permit you to perform a variety of actions without interrupting the execution of your application.

Consider that a non-breaking breakpoint can be used to enable or disable one or more other breakpoints. For example, you can set a non-breaking breakpoint to execute on the second pass of a section of initialization code that you assume will only execute once. If that code is executed a second time, for whatever reason, you can enable a group of additional breakpoints that will load the debugger, permitting you to examine your application's variables in order to determine why the code is executing a second time.

For me, however, the most valuable capability of a non-breaking breakpoint is to perform a side effect.

Breakpoints with Side Effects

There are five types of side effects that a breakpoint can perform, whether or not it is a breaking breakpoint. It can enable or disable the loading of the debugger in response to an exception, it can write a message to the event log, it can evaluate an expression, it can enable or disable breakpoints associated with a particular group, and in the more recent versions of Delphi, it can write some or all of the call stack to the event log.

Of these, the side effect I consider the most valuable is the ability to evaluate an expression, so I will consider that side effect last. But first, let's consider the other side effects.

Controlling How The Debugger Handles Exceptions

If you check the Ignore subsequent exceptions checkbox, the integrated debugger will stop pausing your application when an exception is raised, at least until another breakpoint that re-enables this behavior is encountered. To define a breakpoint that re-enables exceptions, place a second breakpoint (breaking or non-breaking), and check the Handle subsequent exceptions checkbox.

Disabling and enabling exceptions using non-breaking breakpoints permits you to conveniently run a section of code from the IDE that raises exceptions, without interruption from the debugger. This technique is particularly useful for sections of code that raise exceptions for the purpose of displaying errors or warnings to the user, but which do not constitute bugs or other similar errors in your code.

For example, when client-side validation code (code that tests the validity of user input) raises an exception, it is really not an error, code-wise. Instead, it is to inform the user that validation has failed, suggest to the user how to correct the problem, and to abort any remaining validations or operations that would proceed had the user's input been valid. While testing your validation code it serves no purpose to be interrupted by the integrated debugger.
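As a sketch, such validation code might look like the following (the form, control, and message names here are hypothetical, not taken from a real application):

procedure TCustomerForm.ValidateAge;
var
  Age: Integer;
begin
  // Raising an exception here is deliberate, not a bug; it aborts
  // the save operation and tells the user what to fix. While testing
  // this routine, a breakpoint that ignores subsequent exceptions
  // keeps the debugger from pausing on each intentional raise.
  if not TryStrToInt(AgeEdit.Text, Age) then
    raise Exception.Create('Age must be a whole number');
  if (Age < 0) or (Age > 120) then
    raise Exception.Create('Age must be between 0 and 120');
end;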

Writing Messages to the Event Log

Use the Log message field to enter a string that will be written to the event log when the non-breaking breakpoint triggers. This option is a nice alternative to using the Windows API call OutputDebugString, which requires adding additional code to your project. After running your application in the IDE, or while the integrated debugger is loaded, you can view the logged messages by selecting View -> Debug Windows -> Event Log. Messages written to the event log using breakpoints are identified by the label "Breakpoint Message."
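For comparison, here is a sketch of the OutputDebugString alternative, which achieves a similar result but only by adding code to your project:

uses
  Windows; // OutputDebugString is declared here

procedure TForm1.FormCreate(Sender: TObject);
begin
  // This message also appears in the event log when running under
  // the debugger, but the call ships with your executable unless
  // you remove it or wrap it in a conditional compilation directive
  OutputDebugString('FormCreate reached');
end;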

Enabling/Disabling Breakpoint Groups

A breakpoint can be used to enable and disable groups of breakpoints. A breakpoint group consists of one or more breakpoints to which you have assigned the same group name, using the Group field on the Breakpoint Properties dialog box (see Figure 4).

When a breakpoint group is disabled, none of the breakpoints in the group will trigger when encountered by code running in the IDE. Enabling a group restores the enabled property of any disabled breakpoints in the group.

Evaluating Expressions

The Eval expression field serves two purposes. First, it can be used to evaluate any expression. This expression can include any object, method, variable, constant, resource, function, or type that is within scope of the breakpoint, and can also include operators appropriate for those symbols. After having entered a value in Eval expression, you have the option to enable the Log result checkbox. When Log result is checked, the value of the expression is written to the event log. As with the Log message field, logging Eval expression results permits you to avoid writing expressions to the event log using OutputDebugString.

So, finally, here is the main point of this article. My favorite use of Eval expression is to execute a function. Specifically, a function that has side-effects. For example, imagine that upon startup your application tests for the existence of a CDS file that is used by a ClientDataSet. If the file is absent, you call CreateDataSet to create the data structure that the ClientDataSet will use, after which that structure is written to a CDS file.

During testing, you may always want to begin your application without an existing CDS file, so that you can test whether or not the CDS file is being created properly. One way of doing this is to write a function that deletes the CDS file if it exists. You can then call this function from a breakpoint by adding the function name to the Eval expression field.

So long as this Eval expression is associated with a breakpoint that is encountered prior to the first time your application tests for the presence of the CDS file, the Eval expression function will always ensure that no file exists when the test occurs. Since breakpoints only trigger when the application is run from the IDE, a deployed application, or one executed without the debugger being enabled, will not destroy an existing CDS file.

In order to execute a function using the Eval expression field, that function must be included in your compiled code by the linker. But this is a bit more complicated than it sounds. Specifically, if the only call to your function that you create to produce your side effect is made by the breakpoint, Delphi's smart linker will not recognize that the function is used, and will therefore omit it from the compiled application. While this feature ensures that your applications only contain code that they use, thereby keeping your exe size to a minimum, it will have the effect of removing your side effect.

Fortunately, there is a work-around for this. Specifically, you can trick the smart linker into thinking that your side effect function might be called, in which case it will be obligated to include the function in your compiled code. You do this by creating a method that looks like an event handler, and including a call to your side effect function within that method.

Here is how you can do this. Create a published method. While in reality this method can have any signature, you might as well make it a TNotifyEvent, which is a procedure that takes a TObject parameter named Sender. Then, in the implementation of this method, include your call to your side effect function. The following code shows what the pseudo event handler, and the side effect function, might look like:


procedure TForm1.FakeEvent(Sender: TObject);
begin
  CleanupCDS;
end;

function TForm1.CleanupCDS: String;
var
  DataFile: String;
begin
  DataFile := ExtractFilePath(Application.ExeName) + 'data.cds';
  if FileExists(DataFile) then
  begin
    DeleteFile(DataFile);
    Result := DataFile + ' deleted';
  end
  else
    Result := DataFile + ' not found';
end;

All you have left to do is to add the call to CleanupCDS to the Eval expression field of the breakpoint, as shown in Figure 5. Assuming that the Break property of the breakpoint has been disabled, anytime this breakpoint is encountered in your code while executing with the debugger active, the CDS file named DATA.CDS, located in the same directory as your application, will be silently deleted.


Figure 5 A non-breaking breakpoint with side effects

Tuesday, June 8, 2010

Creating Editor Key Bindings in Delphi

There is a powerful but little-known feature of the code editor in Delphi that permits you to add your own custom keystrokes. This feature is referred to as custom key bindings, and it is part of the Delphi open tools API (OTA). The open tools API provides you with a collection of classes and interfaces that you can use to write your own extensions to the IDE.
This article provides you with an overview of this interesting feature, and demonstrates a simple key binding class that you can use as a starting point for creating your own custom key bindings. This key binding makes a duplicate, or copy, of the current line in the code editor. If a block of text is selected, this key binding will duplicate that block. This is a feature that is found in other code editors, and now, through key bindings, you can have it in Delphi.

Overview of Key Bindings

A key binding is a unit that is installed into a design-time package. Writing an editor key binding involves creating class type declarations and implementing interfaces. In fact, creating and installing an editor key binding involves a number of explicit steps. These are:
  1. Descend a new class from TNotifierObject. This class must be declared to implement the IOTAKeyboardBinding interface. This class is your key binding.
  2. In addition to the four methods of the IOTAKeyboardBinding interface that you must implement in your key binding class, add one additional method for each new feature that you want to add to the editor. This method is passed an object that implements the IOTAKeyContext interface. You use this object within your implementation to read information about, and control, the behavior of the editor.
  3. Declare and implement a standalone Register procedure. Within this procedure invoke the AddKeyboardBinding method of the BorlandIDEServices object, passing an instance of the class you declared in step 1 as the only argument.
  4. Add the unit that includes this Register procedure to a design-time-only package.
  5. Add the designide.dcp package to this design-time package's Requires clause. This package is located in the lib folder of your Delphi installation.
Each of these steps is discussed in the following sections. As mentioned earlier, these steps will define a new key binding that adds a single key combination. Once implemented and installed, this key combination will permit you to duplicate the current line in the editor by pressing Ctrl-Shift-D.
(The source code for this editor key binding project can be downloaded from Embarcadero Code Central at
http://cc.embarcadero.com/item/27635.)
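To give you a feel for where these steps are heading, step 3 boils down to a Register procedure along these lines (a sketch; TDupLineBinding is the key binding class developed in the following sections):

procedure Register;
begin
  // BorlandIDEServices is declared in the ToolsAPI unit; casting it
  // to IOTAKeyboardServices exposes AddKeyboardBinding, which
  // registers an instance of our key binding class with the editor
  (BorlandIDEServices as IOTAKeyboardServices).AddKeyboardBinding(
    TDupLineBinding.Create);
end;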

Declaring the Key Binding Class

The class that defines your key binding must descend from TNotifierObject and implement the IOTAKeyboardBinding interface. If you are familiar with interfaces, you will recall that when a class is declared to implement an interface, it must declare and implement all of the methods of that interface. Consider for a moment the following declaration of the IOTAKeyboardBinding interface. This declaration appears in the ToolsAPI unit:
IOTAKeyboardBinding = interface(IOTANotifier)
 ['{F8CAF8D7-D263-11D2-ABD8-00C04FB16FB3}']
 function GetBindingType: TBindingType;
 function GetDisplayName: String;
 function GetName: String;
 procedure BindKeyboard(const BindingServices:
   IOTAKeyBindingServices);
 property BindingType: TBindingType read GetBindingType;
 property DisplayName: String read GetDisplayName;
 property Name: String read GetName;
end;
As you can see, this interface declares four methods and three properties. Your key binding class must implement the methods. Note, however, that it does not need to implement the properties. (This is a regular source of confusion when it comes to interfaces, but the fact is that the properties belong to the interface, and are not required by the implementing object. Sure, you can implement the properties in the object. But you do not have to, and the properties were not implemented in this example.)
In addition to the methods of the IOTAKeyboardbinding interface, your key binding class must include one additional method for each custom feature that you want to add to the editor. In order to be compatible with the AddKeyBinding method used to bind these additional methods, these additional methods must be TKeyBindingProc type methods. The following is the declaration of the TKeyBindingProc method pointer type, as it appears in the ToolsAPI unit:
TKeyBindingProc = procedure (const Context: IOTAKeyContext;
  KeyCode: TShortcut; var BindingResult: TKeyBindingResult)
    of object;
All this type really means is that each of the additional methods that you write, each one of which adds a different keystroke to the editor, must take three parameters: an IOTAKeyContext, a TShortcut, and a TKeyBindingResult.
The following is the key binding class declared in the DupLine.pas unit. This class, named TDupLineBinding, includes only one new key binding. The following is the class type declaration of this class:
type
  TDupLineBinding = class(TNotifierObject, IOTAKeyboardBinding)
  private
  public
    procedure DupLine(const Context: IOTAKeyContext;
      KeyCode: TShortCut;
      var BindingResult: TKeyBindingResult);
   { IOTAKeyboardBinding }
   function GetBindingType: TBindingType;
   function GetDisplayName: String;
    function GetName: String;
    procedure BindKeyboard(const BindingServices:
      IOTAKeyBindingServices);
  end;

Implementing the IOTAKeyboardBinding Interface

Once you have declared your key binding class, you must implement the four methods of the IOTAKeyboardBinding interface, as well as each of your additional TKeyBindingProc methods. Fortunately, implementing the IOTAKeyboardBinding interface is easy.
You implement GetBindingType by returning the type of key binding that you are creating. There are only two types of key bindings: partial and complete. A complete key binding defines all of the keystrokes of the editor, and you identify your key binding as a complete key binding by returning the value btComplete. A complete key binding is actually a full key mapping.
A partial key binding is what you use to add one or more keystrokes to the key mapping that you are using. The TDupLineBinding class is a partial key binding. The following is the implementation of GetBindingType in the TDupLineBinding class:
function TDupLineBinding.GetBindingType: TBindingType;
begin
  Result := btPartial;
end;
You implement GetDisplayName and GetName to provide the editor with text descriptions of your key binding. GetDisplayName should return an informative name that Delphi will display in the Enhancement modules list of the Key Mappings page of the Editor Options node of the Options dialog box.
GetName, on the other hand, is a unique string that the editor uses internally to identify your key binding. Because this string must be unique for all key bindings that a user might install, by convention this name should be your company name or initials followed by the name of your key binding.
The following listing contains the implementation of the GetDisplayName and GetName methods for the TDupLineBinding class:
function TDupLineBinding.GetDisplayName: String;
begin
  Result := 'Duplicate Line Binding';
end;

function TDupLineBinding.GetName: String;
begin
  Result := 'jdsi.dupline';
end;
You implement the BindKeyboard method to provide for the actual binding of your TKeyBindingProc methods. BindKeyboard is passed an object that implements the IOTAKeyBindingServices interface, and you use this reference to invoke the AddKeyBinding method.
AddKeyBinding requires at least three parameters. The first parameter is an array of TShortCut values. TShortCut is an ordinal type that represents either a single keystroke, or a keystroke plus a combination of one or more of the following: Ctrl, Alt, or Shift, as well as left, right, middle, and double mouse button clicks. Because this parameter is an open array, it is possible to bind your TKeyBindingProc to two or more keystrokes or key combinations.
The Menus unit in Delphi contains a function named ShortCut that you can use to easily create your TShortCut values. This function has the following syntax:
function ShortCut(Key: Word; Shift: TShiftState): TShortCut;
In this function, the first parameter is the ordinal value of the keyboard character, and the second is a set of zero or more TShiftState values. The following is the declaration of TShiftState, as it appears in the Classes unit:
TShiftState = set of (ssShift, ssAlt, ssCtrl,
  ssLeft, ssRight, ssMiddle, ssDouble);
The second parameter of AddKeyBinding is a reference to your TKeyBindingProc method that implements the behavior you want to associate with the keystroke or key combination, and the third parameter is a pointer to a context. In the BindKeyboard implementation in the TDupLineBinding class, the method DupLine is passed as the second parameter and nil as the third. The following is the implementation of the BindKeyboard method that appears in the DupLine.pas unit:
procedure TDupLineBinding.BindKeyboard(
  const BindingServices: IOTAKeyBindingServices);
begin
  BindingServices.AddKeyBinding(
    [ShortCut(Ord('D'), [ssCtrl, ssShift])], DupLine, nil);
end;
As you can see, this BindKeyboard implementation associates the code implemented in the DupLine method with the Ctrl-Shift-D key combination.

Author's Note: In the original posting, ssShift was omitted from the set in the second argument of the above BindKeyboard implementation. A reader pointed this out in a reply to this blog, and I have since updated this code. Thanks Atys!
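Because AddKeyBinding takes an open array of TShortCut values, you could trigger the same handler from more than one key combination. The following variation is only an illustration; the Ctrl-Alt-D binding is not part of the actual DupLine code:

```delphi
procedure TDupLineBinding.BindKeyboard(
  const BindingServices: IOTAKeyBindingServices);
begin
  // Bind DupLine to both Ctrl-Shift-D and Ctrl-Alt-D
  BindingServices.AddKeyBinding(
    [ShortCut(Ord('D'), [ssCtrl, ssShift]),
     ShortCut(Ord('D'), [ssCtrl, ssAlt])], DupLine, nil);
end;
```

Inside the handler, the KeyCode parameter tells you which of the bound shortcuts was actually pressed.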

Implementing TKeyBindingProc Methods

Implementing the methods of IOTAKeyboardBinding is pretty straightforward. Implementing your TKeyBindingProc method, however, is not.
As you can see from the TKeyBindingProc method pointer type declaration shown earlier in this section, a TKeyBindingProc is passed three parameters. The first, and most important, is an object that implements the IOTAKeyContext interface. This object is your direct link to the editor, and you use its properties to control cursor position, block operations, and views. The second parameter is the TShortCut that was used to invoke your method. This is useful if you passed more than one TShortCut in the first parameter of the AddKeyBinding invocation, especially if you want the behavior to be different for different keystrokes or key combinations.
The final parameter of your TKeyBindingProc method is a TKeyBindingResult value passed by reference. You use this parameter to signal to the editor what it should do after your method exits. The following is the TKeyBindingResult declaration as it appears in the ToolsAPI unit:
TKeyBindingResult = (krUnhandled, krHandled, krNextProc);
You set the BindingResult formal parameter of your TKeyBindingProc method to krHandled if your method has successfully executed its behavior. Setting BindingResult to krHandled also has the effect of preventing any other key bindings from processing the key, as well as preventing menu items assigned to the key combination from processing it.
You set BindingResult to krUnhandled if you do not process the keystroke or key combination. If you set BindingResult to krUnhandled, the editor will permit any other key bindings assigned to the keystroke or key combination to process it, as well as any menu items associated with the key combination.
Set BindingResult to krNextProc if you have handled the key, but want to permit any other key bindings associated with the keystroke or key combination to trigger as well. Similar to setting BindingResult to krHandled, setting BindingResult to krNextProc will have the effect of preventing menu shortcuts from receiving the keystroke or key combination.
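To illustrate these three values, the following hypothetical handler (the class name TMyBinding, the method name HandleKey, and the shortcuts are placeholders, not part of the DupLine code) shows how a TKeyBindingProc might set BindingResult depending on which shortcut invoked it:

```delphi
// Hypothetical handler showing how BindingResult might be set
procedure TMyBinding.HandleKey(const Context: IOTAKeyContext;
  KeyCode: TShortCut; var BindingResult: TKeyBindingResult);
begin
  if KeyCode = ShortCut(Ord('D'), [ssCtrl, ssShift]) then
  begin
    // ... perform the editor operation ...
    BindingResult := krHandled;   // no other binding or menu sees the key
  end
  else if KeyCode = ShortCut(Ord('D'), [ssCtrl, ssAlt]) then
  begin
    // ... perform the editor operation ...
    BindingResult := krNextProc;  // other key bindings may still fire
  end
  else
    BindingResult := krUnhandled; // let the editor process the key normally
end;
```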
As mentioned earlier, the real trick in implementing your TKeyBindingProc method is associated with the object that implements the IOTAKeyContext interface that you receive in the Context formal parameter. Unfortunately, Embarcadero has published almost no documentation about how to do this. One of the few sources of information is the somewhat intermittent comments located in the ToolsAPI unit.
A full discussion of the properties of IOTAKeyContext is well beyond the scope of this article. That having been said, the following is the implementation of the TKeyBindingProc from the TDupLineBinding class:
procedure TDupLineBinding.DupLine(const Context: IOTAKeyContext;
  KeyCode: TShortcut; var BindingResult: TKeyBindingResult);
var
  EditPosition: IOTAEditPosition;
  EditBlock: IOTAEditBlock;
  CurrentRow: Integer;
  CurrentRowEnd: Integer;
  BlockSize: Integer;
  IsAutoIndent: Boolean;
  CodeLine: String;
begin
  EditPosition := Context.EditBuffer.EditPosition;
  EditBlock := Context.EditBuffer.EditBlock;
  //Save the current edit block and edit position
  EditBlock.Save;
  EditPosition.Save;
  try
    // Store original cursor row
    CurrentRow := EditPosition.Row;
    // Length of the selected block (0 means no block)
    BlockSize := EditBlock.Size;
    // Store AutoIndent property
    IsAutoIndent := Context.EditBuffer.BufferOptions.AutoIndent;
    // Turn off AutoIndent, if necessary
    if IsAutoIndent then
      Context.EditBuffer.BufferOptions.AutoIndent := False;
    // If no block is selected, or the selected block is a single line,
    // then duplicate just the current line
    if (BlockSize = 0) or (EditBlock.StartingRow = EditPosition.Row) or
      ((BlockSize <> 0) and ((EditBlock.StartingRow + 1) = EditPosition.Row) and
      (EditBlock.EndingColumn = 1)) then
    begin
      //Only a single line to duplicate
      //Move to end of current line
      EditPosition.MoveEOL;
      //Get the column position
      CurrentRowEnd := EditPosition.Column;
      //Move to beginning of current line
      EditPosition.MoveBOL;
      //Get the text of the current line, less the EOL marker
      CodeLine := EditPosition.Read(CurrentRowEnd - 1);
      //Add a line
      EditPosition.InsertText(#13);
      //Move to column 1
      EditPosition.Move(CurrentRow, 1);
      //Insert the copied line
      EditPosition.InsertText(CodeLine);
    end
    else
    begin
      // More than one line selected. Get block text
      CodeLine := EditBlock.Text;
      // Move to the end of the block
      EditPosition.Move(EditBlock.EndingRow, EditBlock.EndingColumn);
      //Insert block text
      EditPosition.InsertText(CodeLine);
    end;
    // Restore AutoIndent, if necessary
    if IsAutoIndent then
      Context.EditBuffer.BufferOptions.AutoIndent := True;
    BindingResult := krHandled;
  finally
    //Move cursor to original position
    EditPosition.Restore;
    //Restore the original block (if one existed)
    EditBlock.Restore;
  end;
end;
As you can see from this code, the IOTAKeyContext implementing object passed in the first parameter is your handle to a variety of objects that you can use to implement your key binding behavior. And without a doubt, it is the EditBuffer property that is most useful.
This property refers to an object that implements the IOTAEditBuffer interface. You use this object to obtain a reference to additional interface implementing objects, including IOTABufferOptions, IOTAEditBlock, IOTAEditPosition, and IOTAEditView implementing objects. These objects are available using the BufferOptions, EditBlock, EditPosition, and TopView properties of the EditBuffer property of the Context formal parameter.
You use the IOTABufferOptions object to read information about the status of the editor, including the various settings that can be configured on the General page of the Editor Properties dialog box.
The IOTAEditBlock object permits you to control blocks of code in the editor. Operations that you can perform on blocks include copying, saving to file, growing or shrinking the block, deleting, and so on.
You use the IOTAEditPosition object to manage the insertion point, or cursor. Operations that you can perform with this object include determining the position of the insertion point, moving it, inserting single characters, pasting a copied block, and so forth.
Finally, you use the IOTAEditView object to get information about, and to a certain extent control, the various editor windows. For example, you can use this object to determine how many units are open, scroll individual windows, make a given window active, and get, set, and go to bookmarks.
Turning our attention back to the DupLine method, this code begins by getting references to the IOTAEditPosition and IOTAEditBlock objects. While not essential, this step simplifies the code in this method, reducing the need for repeated references to Context.EditBuffer.EditPosition and Context.EditBuffer.EditBlock. Next, the current state of both the edit position and edit block are saved.
The code now saves the current row of the cursor, the size of the selected block (it will be 0 if no block is selected), and the AutoIndent setting of the code editor.
In the next step, the AutoIndent setting is turned off, if necessary. The code then determines whether a single line of code or a block of code needs to be duplicated. If a single line is being duplicated, the length of the current line is measured, the text is copied, and then the copied text is inserted into a new line. If a block is being copied, the selected text is copied, the cursor is positioned at the end of the selected block, and the copied text is inserted.
Finally, the AutoIndent setting is restored (if necessary), the BindingResult formal parameter is set to krHandled, and both the edit position and edit block are restored. Restoring the edit position moves the cursor to its original position, and restoring the edit block re-selects the selected text (if a block was selected).

Declaring and Implementing the Register Procedure

In order for your key binding to be installed successfully into the editor, you must register it from an installed design-time package using a Register procedure. The Register procedure, whose name is case sensitive, must be forward declared in the interface section of the unit that will be installed into the design-time package. Furthermore, you must add an invocation of the IOTAKeyboardServices.AddKeyboardBinding method to the implementation of this procedure, passing an instance of your key binding class as the sole parameter. You invoke this method by dynamically binding the BorlandIDEServices reference to the IOTAKeyboardServices interface, and passing as the actual parameter an invocation of your key binding object's constructor.
The following is how the Register procedure implementation appears in the DupLine.pas unit:
procedure Register;
begin
  (BorlandIDEServices as IOTAKeyboardServices).
    AddKeyboardBinding(TDupLineBinding.Create);
end;

Installing the KeyBinding

The source code download includes a project for a design-time package, as well as the DupLine.pas unit. Use the following steps to install this key binding in Delphi.
  1. Open the KeyBind project in Delphi.
  2. Using the Project Manager, right-click the KeyBind project and select Install. The new key binding should compile and install. (If it does not compile, ensure that the designide package is in the project's requires clause, and that the project is a design-time only package.)
The key binding is now active. You should now be able to press Ctrl-Shift-D in a unit to create a duplicate of the current line or selected text.
I hope that this has inspired you to try to create your own key bindings. Note, however, that if you create a key binding that uses a keystroke already in use by Delphi's editor, and you set BindingResult to krHandled, you will have effectively overridden the existing keystroke. In fact, Ctrl-Shift-D is currently in use by Delphi to display the Declare Field refactoring dialog box. However, you can still access that feature by selecting Refactor | Declare Field from Delphi's main menu.
Or, you might consider changing the BindKeyboard implementation in this package to map DupLine to something else, such as Ctrl-D. Ctrl-D is mapped to the Source Formatting feature in Delphi 2010. This feature, which reformats your code, is something that you might prefer to have to intentionally select by right-clicking in the code editor and selecting Format Source.

Tuesday, June 1, 2010

Would You Like Chips * With That Q&A?

(* British for French Fries)

I don't go to McDonald's often. But last Wednesday, I did go to McDonald's, though it was for the technology, not for the food.

With the exception of last Wednesday, I'm not exactly sure when I last visited a McDonald's. I think it might have been almost four years ago, in Prague, and that was to get a cup of coffee (though I have a vague memory of my wife, Loy, having some kind of ice cream dish). So, I stand by my original assertion, that I do not go to McDonald's that often.

Many of my European friends find this hard to believe. As an American, and one who is on the road a lot (most would say too much), surely I must succumb to the siren song of the fast and easy food available at these locations. Actually, it's largely because I travel so much that I avoid fast food restaurants in general, if not in principle.

While I am not a skinny person, it would be disingenuous to characterize me as being overweight. And I work at that. In fact, when I am on the road, I rarely eat dinner at restaurants of any kind. Instead, I find something at a local supermarket and fix it up in my hotel room. Since I normally have a room with a microwave oven and refrigerator, at least in US hotels, there is a lot I can do: making soup, building sandwiches, re-heating prepared meals, or even preparing raw vegetables. It's amazing, really, what you can do.

Even when I am not on the road, I rarely eat out. The fact is, I love to cook. To me, eating at a restaurant when I could be cooking at home would be my loss. I would much rather spend my money on ingredients, and maybe a nice bottle of wine, than give it to someone else to prepare my food.

In fact, I am shocked each time I do eat out. My goodness, it's expensive (not that this is an issue). But when I think of what I could have done with that 30 bucks (or 50 bucks) I feel a loss. That same money could have bought some beautiful New Zealand green mussels, or Dungeness crab, or Maine lobster (I love seafood), or a great steak, or a little of each!

But my visit to McDonald's had nothing to do with food. It was for the WiFi.

Last Wednesday I was in London, England, as part of the Delphi Developer Days 2010 tour that I was delivering with my friend and colleague, Marco Cantù. We had just finished the first day of our presentations, and I was preparing to go online for a live broadcast of my question and answer (Q&A) session for a web-based presentation I was doing for Embarcadero, called DataRage 2.

There was a problem, however. Well, two problems, really. The first problem was that the wireless Internet connection at our hotel had gone down. It had been up for most of the day, but now, just as I was supposed to go online, it was down.

I knew that this might be a problem. At this hotel the Internet goes down at least once a day. In most cases, we simply talk to the front desk and they reboot the router. There, problem solved.

But this is where the second problem comes in. And his name is Trevor. You see, Trevor works a shift at the front desk, and apparently he is uncomfortable with technology. Basically, if the Internet goes down when Trevor is on duty, you might as well wait until his replacement comes in, and they will have it back up in minutes.

Asking Trevor to reboot the router is like asking your plumber for health tips. It's a pointless exercise.

When I report to Trevor that the hotel has lost its Internet connection, a hint of panic crosses his face. Next he stutters that there is nothing he can do. When I comment that the other managers simply reboot the router, he starts flipping the power switches on and off on two power strips connected to who knows what, over and over, until he reports that, "There, it's gone. I'm sorry, it won't turn on again."

It's gone? Trevor, what have you done? I offer to come behind the counter and take a look. After all, I am a computer guy. You know, one of those people who work with these things all the time.

But Trevor would have none of this. His stuttering is getting worse. He's not supposed to touch it, he informs me. No one is supposed to touch it. It's not the hotel's equipment. No, I cannot come behind the counter. He's becoming more agitated. "We'll just have to wait for someone to come in and fix it," he informs me.

While this was going on, my presentation was already being broadcast. I had submitted a 30-minute recording some weeks before, and this is played prior to my live question and answer period. However, it is now 5:22 pm (London time), and there are about 10 minutes left before I am supposed to speak.

I grab my backpack, which contains my computer, and start to head out the door. I ask Marco, who has a mobile phone, to send a message to DataRage coordinator Christine Ellis at Embarcadero and let her know that I am going to try to find an Internet café from which I can talk.

As I head towards the high street, where I had previously seen several Internet cafés, it occurs to me that I left my USB headphones, the ones with over-the-ear earphones and a noise canceling microphone, back in the hotel. But it's too late to go back. I need to get to an Internet connection as soon as possible.

A couple of minutes later I turn the corner onto the high street, and immediately find an Internet café. Actually, it was more of an all-in-one shop, international phone calls, prepaid phone cards, and several Internet-connected computers lined up along one wall. Success!

"Do you have wireless?" I ask. "No" was the simple reply. Drat! I need wireless.

My laptop is already set up with Microsoft Live Meeting, which takes several minutes to install on an existing machine. And, it's not clear that these machines even have microphones, or that I would be permitted to install Live Meeting. I needed another option.

"Is there an Internet café nearby that provides wireless access?" I ask. "Well," the clerk informs me, "if all you need is wireless access, go to the McDonald's across the street. It's free there."

I quickly thank him and leave. The McDonald's across the street was not actually across the street, but I could see it from here. And, I needed to go down to a crosswalk, as this was a very busy street.

As I entered the McDonald's, I did feel a tinge of guilt. I am not a regular customer, and I don't intend to buy anything this time, either. So, I should be as inconspicuous as possible. But as I sat down at a single table in a far corner, away from most of the rest of the patrons, I realize that this is one noisy place.

To begin with, the place is heaving (this is a particularly British phrase). It's now just after 5:30pm, and my Q&A should be starting. But so is dinner service, and the place is filled with families with young children, each one speaking at the top of their voices. On top of this was the music, a pulsing electronic pop that blared from speakers in the ceiling. The music may have been louder than the children. But the cacophony of it all was too much. There is no way that I could talk from here.

I began to put my computer back into my bag when I realized that it was going to be McDonald's or nowhere. The nearest Internet café that I could recall was at least two blocks away. I was going to have to make do. If nothing else, maybe I could type my answers to any questions asked.

So, I restart my computer, and connect to the McDonald's Internet (this took several minutes, as I had to register with their provider first). However, I was soon connected and Live Meeting was loading. During this setup time I once again realized that I didn't have my USB headphones. How am I going to hear the questions?

I scrounge around in my bag and find an extra pair of cheap earphones that I got on my flight to London. Actually, these were ear buds, and poor ones at that. But, they'd have to do.

Just about the time I have the ear buds plugged in, and Live Meeting is finally coming on line, I hear the rich voice of Embarcadero Developer Relations Evangelist David Intersimone (affectionately known as "David I") answering a question about my presentation. Suddenly, he stops and says "It sounds like Cary has come online from an Internet café. Cary, are you there?"

Oh, I'm here all right. Leaning over my laptop, speaking directly into the built-in microphone of my laptop's lid, with my fingers in my ears, trying to push the ear buds in further so that I could hear David while trying to block as much of the external racket as possible. "Yes, David, I'm here. Coming to you from a McDonald's."

"I can hear that. Sounds like you have a big audience there." But the good news was that I could hear the questions, and remarkably, they could hear my answers. And that's the way it was for the next 20 minutes. Me leaning over my laptop with the forefinger of each hand stuck in each ear.

I must have been quite a sight. But certainly not enough to discourage the family of five, an exhausted mother and her four children, all yelling at each other at the top of their lungs, from sitting down next to me halfway through my session. Gee, I would have thought that a lunatic talking to his computer with his fingers in his ears would be something that a nurturing mother would want to avoid. But what do I know? This is London, after all, and they have their fair share of lunatics, and most of them are harmless.

Meanwhile, back at the conference hotel, Marco and Loy are back online and listening in (and laughing pretty hard, I'm later told). Apparently someone has rescued Trevor.

Considering the circumstances, it all went quite well. And at the conclusion of the Q&A, David I noted that we had accomplished a first. Other presenters had handled their Q&As live from the Embarcadero studios, some from their own offices, others from home, but never from a McDonald's. And, despite the constant chatter of children in the background (Marco and Loy said it sounded like I was presenting from a daycare center), we managed to answer all of the questions asked.

So, I'll have to admit that my first visit to McDonald's in years was an overall satisfying experience. Maybe I'll be back another day, when there is less pressing business. After all, I hear they pour a world-class cup of coffee these days.

Copyright (c) 2010 Cary Jensen. All Rights Reserved

Thursday, May 6, 2010

Delphi Non-Core Feature Survey: You Can Help

Delphi developers, I want to hear from you. Are you using Delphi's frameworks for unit testing, audits, metrics, and design patterns? If not, why? If so, to what extent do you use these features? Click here to take the survey.

Here is a little background. Since the release of Delphi 2005, there have been a number of interesting support features introduced in Delphi. For example, Delphi 2005 added support for easily creating unit tests. And audits, metrics, and design pattern support was added in Delphi 2006.

Initially Delphi's support for unit testing was available in all versions (Professional, Enterprise, and Architect). By comparison, the support for audits, metrics, and design patterns required Together, and this product was shipped only with the high-end versions of Delphi. And, on top of that, the Together product stayed with Borland when CodeGear was spun off. So what did that mean for these features as Delphi evolved?

I have to admit that I have not used the audits, metrics, and design pattern features of Delphi, though I have noticed the associated menu items in Delphi's menus. So, when I got a request from a client to include discussion of audits and metrics in an upcoming Delphi class that I am going to deliver, it was time to do some research.

What I found was interesting and puzzling. There is not a whole lot of information out there about these features. And, I did discover that not only are these features available in the absence of Together, but they are now even included (with limited support) in the Professional SKU of Delphi 2010.

I have now committed to writing training material on these topics, and this will also lead to my adapting this material for this blog as well as for some magazine articles that I am writing. And this is where your help comes in.

I want to hear from you. If you are a Delphi developer, I want to know which of these features you use, and to what extent. If you don't use these features as they ship in the product, do you use third-party tools that provide similar support?

I have created a short, 10-question survey that should take only a couple of minutes to complete. Please help me by filling out this survey. In addition, please ask your Delphi colleagues to help out as well. Click here to take the survey.

Copyright (c) 2010 Cary Jensen. All Rights Reserved.

Wednesday, May 5, 2010

In-Memory DataSets: ClientDataSets and .NET DataTables Part 6: Applying Updates to a Database

In the preceding article in this series I discussed various techniques that you can use to manage the change cache. In this installment I will conclude that discussion by looking at how you can apply the changes held in the change cache to the underlying database from which the data was originally loaded.

Call the ApplyUpdates method of a ClientDataSet to save any changes made to the in-memory data to the underlying database. Specifically, if you edit data obtained through a dataset provider, and then close or free a client dataset without calling ApplyUpdates, any changes stored in the change log are lost.

When ApplyUpdates is called, the contents of the change log are sent back to the dataset provider for resolution in the context of a transaction. The sole parameter of ApplyUpdates defines how many resolution errors the provider will tolerate before aborting and rolling back: pass 0 to abort on the first error, a positive integer to tolerate that many errors, or -1 to attempt to apply all changes regardless of the number of errors. The dataset provider, in turn, generates the necessary calls to apply the updates to the underlying dataset, applying the changes one record at a time.

By default this process is handled by a SQLResolver instance, which is created by the dataset provider. The SQL resolver determines the database that needs to be updated, and then creates the necessary SQL statement to apply the changes based on the contents of the change log. Specifically, one SQL statement will be generated for each change that needs to be applied.

Alternatively, the dataset provider can be configured to use the dataset from which it originally read the data to apply the changes. This approach is only possible when the dataset from which the records were read permits data changes.

For example, if the dataset provider gets its data from a TTable, the dataset provider can edit the TTable directly, inserting, deleting, or posting the changes using the TDataSet interface. Again, these changes are applied one at a time. It should be noted that when the dataset provider resolves the changes through the dataset, the dataset's event handlers, such as BeforeDelete and BeforePost, can be used to perform data validation.

By comparison, if the dataset provider gets its data from a dataset that does not permit data editing, the dataset provider cannot resolve the data to the dataset. For example, if the dataset provider gets its data from a SQLDataSet, a dataset that retrieves its data using a unidirectional cursor and which does not permit editing, the dataset provider cannot resolve the changes directly to the dataset.

In these cases, there are two options. Either the default SQLResolver described earlier can be used, or you can write a BeforeUpdateRecord event handler. From within this event handler your code is given a reference to the changes that must be applied, and your code can take any action necessary to explicitly apply these changes. This approach is the most flexible, although the most difficult to implement.
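A skeletal BeforeUpdateRecord event handler might look like the following sketch. The component and form names are placeholders, and the comments only describe the work to perform; the actual resolution code depends entirely on your database:

```delphi
procedure TForm1.DataSetProvider1BeforeUpdateRecord(Sender: TObject;
  SourceDS: TDataSet; DeltaDS: TCustomClientDataSet;
  UpdateKind: TUpdateKind; var Applied: Boolean);
begin
  case UpdateKind of
    ukInsert: ; // execute an INSERT using the DeltaDS field values
    ukModify: ; // execute an UPDATE, comparing old and new field values
    ukDelete: ; // execute a DELETE using the DeltaDS key values
  end;
  Applied := True; // tell the provider this record has been resolved
end;
```

Setting Applied to True prevents the provider from attempting to resolve the record a second time using its default mechanism.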

Which kind of SQL (or direct edit) is generated is controlled by the UpdateMode property of the DataSetProvider. If set to upWhereAll, a record is updated only if that exact record currently exists in the database. If set to upWhereChanged, the record is updated if a record with the same key fields and the same original values in the modified fields is found (this is a merge). When set to upWhereKeyOnly, an update is made if a record with the same key is found (last to post wins).
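Assuming a DataSetProvider named DataSetProvider1 and a ClientDataSet named ClientDataSet1 (the component names are for illustration only), applying the change cache with a merge-style update might be sketched like this:

```delphi
// Locate records to update using key fields plus the original
// values of the modified fields (a merge)
DataSetProvider1.UpdateMode := upWhereChanged;
// Apply any pending changes, aborting on the first resolution error
if ClientDataSet1.ChangeCount > 0 then
  ClientDataSet1.ApplyUpdates(0);
```

Checking ChangeCount first simply avoids a round trip when the change log is empty.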

Updating .NET DataSets is similar to updating ClientDataSets. In that case, however, the DbDataAdapter class is the one typically used to apply the updates. DbDataAdapters, such as DataStoreDataAdapter, have four DbCommand properties. The SelectCommand property contains the SQL statement that returns the result set that is inserted into the DataTable when you call the Fill method. The other DbCommand properties, DeleteCommand, InsertCommand, and UpdateCommand, are designed to hold parameterized queries that will get their parameter values from the change log at runtime when you call the DbDataAdapter.Update method.

Some developers write the SQL statements that define the DeleteCommand, InsertCommand, and UpdateCommand objects manually. Doing so gives them control over the queries, permitting the queries to be optimized for the underlying database. Other developers use a CommandBuilder to generate these queries. CommandBuilders are easy to use, but do not always generate optimized queries.

To use a CommandBuilder, you call its constructor, passing to it the DbDataAdapter whose SelectCommand command has already been defined. Based on this query, the CommandBuilder generates the DeleteCommand, InsertCommand, and UpdateCommand queries.

The following code shows the configuration of a CommandBuilder:

Connection1 := DataStoreConnection.Create(
  'host=LocalHost;user=sysdba;' +
  'password=masterkey;database="C:\Users\Public\Documents\' +
  'Delphi Prism\Demos\database\databases\BlackfishSQL\employee"');
Connection1.Open();
// SQL statements are executed by IDbCommand objects
Command1 := DataStoreCommand.Create('SELECT * FROM customers',
  Connection1);
// DbDataAdapters are used to populate DataTables and resolve data
DataAdapter1 := DataStoreDataAdapter.Create(Command1);
// CommandBuilders create DbCommand objects for a DataAdapter's
// DeleteCommand, InsertCommand, and UpdateCommand properties based
// on the DbDataAdapter.SelectCommand IDbCommand property
CommandBuilder1 := DataStoreCommandBuilder.Create(DataAdapter1);
DataSet1 := DataSet.Create;
DataAdapter1.Fill(DataSet1);

The following code shows how the changes made to the DataTable are applied.

var
  DataTable1: DataTable;
begin
  DataTable1 := DataSet1.Tables[0].GetChanges;
  if DataTable1 <> nil then
    DataAdapter1.Update(DataSet1.Tables[0]);

Applying Updates and Persisted Data

Just as important as an in-memory dataset's ability to apply its updates to the underlying database is its ability to persist its data and state to a file, stream, or database. Together, these features permit an in-memory dataset to apply its updates at some future time, regardless of whether, or for how long, it has been persisted.

In order for a dataset to be able to apply its updates to a database subsequent to its persistence, the dataset's change log must be intact. Without this information, you lack the data required to determine which changes have been made to the dataset since it was originally populated.

If you are relying on a DataSetProvider (for ClientDataSets) or a DbDataAdapter implementation (for .NET) to apply the updates to the database, the object must be in a state compatible with applying the dataset's updates. For example, a DataSetProvider that you use to apply a previously persisted ClientDataSet's updates to a database must point to a TDataSet whose structure is consistent with the one that was used to originally load the ClientDataSet (unless you are using the DataSetProvider's BeforeUpdateRecord event handler to programmatically apply the update, in which case it's all up to your code).

In the case of a .NET DataTable, the DbDataAdapter used to apply its updates must hold DbCommand instances in its DeleteCommand, InsertCommand, and UpdateCommand properties whose parameterized queries are sufficient to apply those updates. You can achieve this by defining these queries manually, or by providing an adequate DbDataAdapter.SelectCommand from which a CommandBuilder can construct the necessary delete, insert, and update query objects. Otherwise, you once again must take matters into your own hands, programmatically examining the change log and generating the queries needed to update the underlying database.
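The "take matters into your own hands" path can be sketched as follows. This is a hedged Python illustration of the idea, not the actual ClientDataSet or ADO.NET API: the change-log entries (a status plus original and current values) and the table and field names are hypothetical.

```python
def statements_for(table, key_fields, change_log):
    """Yield (sql, params) pairs for each pending change in a change log."""
    for entry in change_log:
        status, old, new = entry['status'], entry['original'], entry['current']
        if status == 'deleted':
            where = ' AND '.join(f'{k} = :{k}' for k in key_fields)
            yield (f'DELETE FROM {table} WHERE {where}',
                   {k: old[k] for k in key_fields})
        elif status == 'inserted':
            cols = ', '.join(new)
            vals = ', '.join(f':{c}' for c in new)
            yield f'INSERT INTO {table} ({cols}) VALUES ({vals})', dict(new)
        elif status == 'modified':
            # Only the fields that actually changed go into the SET clause;
            # the original key values locate the record to update.
            changed = [c for c in new if new[c] != old[c]]
            sets = ', '.join(f'{c} = :{c}' for c in changed)
            where = ' AND '.join(f'{k} = :old_{k}' for k in key_fields)
            params = {c: new[c] for c in changed}
            params.update({f'old_{k}': old[k] for k in key_fields})
            yield f'UPDATE {table} SET {sets} WHERE {where}', params

log = [{'status': 'modified',
        'original': {'CustNo': 1221, 'City': 'Kapaa'},
        'current':  {'CustNo': 1221, 'City': 'Lihue'}}]
for sql, params in statements_for('customers', ['CustNo'], log):
    print(sql)  # UPDATE customers SET City = :City WHERE CustNo = :old_CustNo
```

However you structure it, the essential point stands: without an intact change log carrying both original and current values, none of these statements can be generated.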

What If Updates Cannot Be Applied?

There is another issue worth mentioning: even when you can construct the queries needed to apply updates contained in a persisted in-memory dataset, those updates may not be possible. It is conceivable that, between the time the data was originally loaded into the in-memory dataset and the time you want to write it back, the corresponding records in the database have been changed.

While many developers worry about this possibility, it is normally less of a concern than it might at first appear. For example, if you delete a record from an in-memory dataset, and then attempt to apply that deletion (at a later time), but find that the record was already deleted by someone else, who cares? It's gone. Mission accomplished (by someone, at least).

Similarly, if you attempt to apply a record insertion, only to find a record with that key already inserted, then the insertion is not necessary. But what, you might ask, if another user inserted a record with the same key as you (which causes your insertion to fail), but the other user's inserted record is different than the one you intended?

This is really an architectural issue, isn't it? Many of today's developers avoid it altogether by assigning arbitrary (and more or less meaningless) primary keys to every record. For example, many developers assign a GUID (a globally unique identifier, a 128-bit value that is effectively guaranteed to be unique) to each record. In these cases, duplicate records are impossible, no matter who inserts the record.
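A quick sketch of this GUID-key approach, using Python's standard uuid module for illustration (the record shape is hypothetical):

```python
import uuid

def new_customer(company):
    # Each client generates its own key; no round-trip to the database needed.
    return {'ID': str(uuid.uuid4()), 'Company': company}

a = new_customer('Acme')
b = new_customer('Acme')   # the same data entered by a second user
assert a['ID'] != b['ID']  # distinct rows; the keys never collide
```

Because each client can mint its key locally, inserts from disconnected, persisted datasets can be applied at any later time without fear of key collisions.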

Updates are a bit more problematic, but nonetheless generally workable. In most cases it is once again an architectural issue, and there are, in fact, three options. In the first case, an attempt to update a record that you are holding in memory fails if someone else has changed any part of that record (in the underlying database) since the time you originally loaded it into memory. In this situation, your update query should test for all original field values in its WHERE clause.

In reality, failure to update a record that has been subsequently updated by another user, when those updates must be entirely exclusive, is rarely a tragedy.
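This first case can be demonstrated end to end with an in-memory SQLite database (a stand-in for whatever database you actually use; the table and values are illustrative). The key observation is that when the WHERE clause carries the originally read values, a concurrent change makes the update affect zero rows, which is your cue to report a conflict.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE customers (CustNo INTEGER, City TEXT)')
con.execute("INSERT INTO customers VALUES (1221, 'Kapaa')")

# Another session updates the row behind our back.
con.execute("UPDATE customers SET City = 'Hilo' WHERE CustNo = 1221")

# Our update still carries the value we originally read ('Kapaa').
cur = con.execute(
    "UPDATE customers SET City = 'Lihue' "
    "WHERE CustNo = ? AND City = ?", (1221, 'Kapaa'))
print(cur.rowcount)  # 0 -> the record changed since we read it; flag a conflict
```

An affected-row count of zero is exactly the signal a DataSetProvider uses (in upWhereAll mode) to report that a record could not be updated.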

The second option is to permit two or more users to update a record, so long as neither user changes the key field values or the same field. This is a fine compromise in many situations. For example, if, between the time you read your record into memory and the time you attempt to apply your update, another user has changed that record, but not one of the fields that your user changed, who cares? Simply include the record's key fields and your changed fields in the WHERE clause of the update query, ignoring those non-key fields you did not change. (If you use an arbitrary key field, such as a GUID, no one should ever change the key field value, and doing so should be prohibited.)

The third option, appropriate for some database applications, is simply to let the last user who writes to a record win, overwriting any previously applied updates. For example, imagine that five different sensors record the current temperature in a given area. If all sensors are equally correct, the most recent update is the most accurate; who cares about a temperature that was written ten minutes ago? A similar analogy can be made with stock prices: a more current price is more accurate, and past updates are old news.

Copyright (c) 2010 Cary Jensen. All Rights Reserved.