Thursday, December 17, 2009

Creating a More Manageable Development Environment

I recently bought a new laptop, something that I do every year or so. And the recent release of Windows 7 gave me a good excuse to do so. But this new purchase also gave me another opportunity to do something that I’ve been thinking about for a while, the opportunity to set up my machine in a fashion that will improve my ability to maintain my development environment.

Here’s the problem. Over time, in some cases in as little as three or four months, my computer performs noticeably slower than when it was first configured. Eventually it gets so bad that I need to reinstall my operating system and all of my software and tools. For me, this can take days.

Part of my problem is that I need quite a few different versions of a number of different programs. For example, I currently support a number of applications written in Delphi 7, Delphi 2007 (both Win32 and Delphi for .NET), Delphi 2009 and Delphi 2010, Visual Studio 2008 (including Delphi Prism). (I am also asked to teach classes in other versions of Delphi, such as Delphi 2006, or even Delphi 5.)

In addition, I use a variety of database servers, including Advantage Database Server and SQL Server, and these must be installed as well. And then there are the various tool sets, such as third-party component sets for the applications I support, source code control, utilities, and the list goes on and on.

Actually, it’s much worse than it sounds. Let’s take Delphi 2007, for instance. Once you install Delphi 2007 and its help files, and register it as a legitimate copy, you must also download and install the various patches. I think that the total time, from start to finish, to get Delphi 2007 installed is something on the order of 5 hours.

And don’t get me started on Visual Studio 2008. In addition to the core development environment, there were several service packs, as well as updates such as Silverlight 3, and so forth. These needed to be downloaded and installed. VS 2008 must have taken a total of 8 hours.

Believe me, I’ve gotten pretty good at configuring my system, but this is a waste of time. Why can’t I just install it all once and be done with it?

I’ve tried to solve this problem in the past. One of my approaches was to carefully install everything I needed, and then make a good, clean image using something like Acronis True Image or Norton Ghost. And while this works, somewhat, it isn’t perfect. After restoring from an image, there is still a lot to do to get a system back into shape.

The fact is, that “clean” image you create early on isn’t perfect. It doesn’t have all the nifty tools you’ve picked up in the intervening months since you created the image. Nor does it have the latest third-party components that you have added to some of your more recent development projects.

Incremental images are not the answer, either. Once you start using your system, things get progressively worse, performance-wise. As a result, while that pristine, original image that you created, once restored, will perform very nicely, those incremental images you’ve been making along the way each contain some of the “gunk” that has been eating away at your performance. Restore from one of those and you’re starting with a non-optimal setup (and you will still need a day or so to get things really back to where you need them).

A Possible Solution

Over the past year I have been playing with an idea that is not so original. Most of us use products like VMWare Workstation or Virtual PC to create virtual installations of guest operating systems. Most often we use these guest OSes to install beta software, so that it can run in a clean environment, without having to worry about it doing something ugly to our primary OS (host operating system). Once testing is done, we simply delete the guest operating system and go about our way.

Well, sometime last year I bought a copy of Windows XP Pro, and installed it as a guest OS under VMWare. I then installed Delphi 2007, as well as the various support tools that I use. I used this guest to compare how Delphi (and then other tools) worked under XP versus Vista (which I was using as my host OS).

This was all very educational, until at one point something weird happened to my Delphi 2007 installation under my host (personally, I blame the Internet Explorer 8 installation, but I’ll leave that speculation for another article). At the time I was on the road, working for 10 days at a client’s site. And, I didn’t have my RAD Studio 2007 installation disks with me.

Fortunately, I was able to load the guest OS, and continue developing without missing a beat. I simply retrieved a clean copy of the source code from the version control system, and off I went.

A colleague and I talked at length about what had happened, and came to the same conclusion. Maybe I should isolate my development environment from the host operating system as a matter of practice. Furthermore, maybe I should isolate each of my different development environments from each other. Maybe, I thought, this would decrease (I wanted to say eliminate, but that would be blatant hubris) the “gunk,” and reduce the opportunity for incremental decay of performance.

The Virtual Road is Paved With Good Intentions

Here is what I did. I bought a fast computer with loads of memory. This machine runs an Intel Core 2 Quad Q9000 CPU running at 2 GHz. (Importantly, this chip supports hardware virtualization.) The machine also has 8 GB of DDR3 RAM. Plenty of memory, plenty of cores, I was good to go.

From here, I installed onto my host operating system, which was Windows 7 64-bit Home Premium, just that stuff that an ordinary computer user needs, and nothing else. I’m talking about basic stuff, like a word processor, an email client, a browser, antivirus, backup/restore software, a Twitter client, iTunes, and most importantly, virtualization software, which in this case, was VMWare Workstation 7.0. Once all of this was installed, and any available updates applied, I made a nice clean image of this base.

Next, I went to work on the guest operating systems. And I was very systematic about this. First, I created a guest OS with the absolute basics: Windows 7 64-bit Ultimate and antivirus. After installing all updates, I made a full clone of this guest OS. Onto the clone I installed the very basic stuff that I need in all of my development environments, including version control software, various utilities (like the SysInternals Suite), as well as Advantage Database Server and SQL Server (hey, I’m a database guy. No matter what I’m doing, I’m going to need a database). Let’s refer to this virtual machine as “Database Base.”

Now I’m ready for the big time. I cloned Database Base, and installed onto this clone a copy of Delphi 2010. I created a second clone of Database Base and installed Visual Studio 2008 (that took an entire day). In all, I’ve got about seven of these development environments so far.

I also have a nice, 500 GB, portable, external eSATA drive (this laptop supports eSATA, which is about three times faster than USB 2.0). I backed up the image of my host OS, as well as all of the nice new guest OSes, to this drive.

Despite the limitations, which I’ll share with you in a moment, this is a pretty nice setup. Since I don’t actually develop in the host, and have only limited software installed on it, I gain two benefits. First, the host is small, and really, if I had to install it from scratch, it would take hours, not days. And, since I have a nice, clean image of the host, I can actually restore it in a matter of minutes, if necessary.

The second benefit is that there is little going on in the host. Sure, I end up installing necessary evils such as GoToMeeting, Adobe AIR, and other stuff like that, but it’s limited, since I am not developing in the host. And, so far (it’s been a couple of months now), the host is showing little sign of slowing down.

And it’s working out fairly well with the virtual machines, so far. Performance is nice. I’m allocating 3 GB of RAM to each virtual machine, and this gives them some room to stretch, while permitting me to run two of them simultaneously without choking the host OS (though, frankly, things do get slow when the host has only 2 GB to play with, so I try not to run two guests at a time).

And here’s the great thing. Since I’ve got the individual VMs backed up on the eSATA drive, I can restore them as well. In fact, if my idea works as planned, I should be able to migrate these VMs to my next machine, meaning that I may not have to install RAD Studio 2007 ever again. (I wish it were true, but I know in my heart that it is a lie. I know I’ll have to install RAD Studio 2007 again, but just not as many times as I would had I not taken this particular route.)

There are two additional, and compelling, advantages that I now have. First, if something awful happens to one of my development environments, I’ve got a quick solution. For example, if, when testing a new routine that removes an old key from the Windows registry I accidentally delete every ProgID, no worries. I simply copy the backed up guest VM file from my eSATA drive (which, of course, I have backed up to another storage drive as well), and I’m cooking with gas (this means that I’m back to work quickly).

The second advantage is this. When I get to the point of shipping a product (or hit a major milestone, or whatever), I can make a backup of the VM that I used, and I’ll have that exact environment forever. If I ever need to return to the precise installation of service packs, updates, component sets, and the like, that was used to create that special build, I’ve got it, right there, in that virtual machine that I’ve saved (and backed up, of course).

But It’s Not Perfect

I wish I could say that I’m completely satisfied with this solution, but I cannot. There are problems, and some of them are not trivial. They are not horrific, either. In other words, while there are benefits to this approach, I’ve realized some serious limitations, even though I’m only a few months into this experiment.

The first issue is, honestly, a pretty minor one: It takes a little bit longer to get up and running, as far as development goes. In short, I have to wait for two OSes to load (the host and a guest) before I can get to work. Fortunately, these OSes, being Windows 7, do load quickly, so it’s only a minor inconvenience.

The second issue is more complicated. I ran squarely into a major issue involving software updates. You know, those annoying messages you get from our fine friends at Microsoft that inform you that updates are being installed (actually, I don’t get those, because I refuse to let Microsoft decide when to install updates. I get to choose when.)

Well, each of the individual VMs has this problem, which means that right now I don’t install one update, I install eight (host plus seven guests). And Java, Adobe, and every other bloke on the block who wants to make sure that their bugs don’t destroy my system(s) want to install updates as well. You get the picture, and it isn’t pretty.

Ok, if you only work in one development environment, say Visual Studio, this will not be an issue. Most of us, though, must support a variety of environments. So if you take the road I did, creating a separate guest OS for each, you're going to have to face this issue (one solution may be to ignore updates altogether, with the exception of major service packs).

There is another, related issue. What if I find a great new tool (for example, the best merge tool that you’ve ever seen, one that flawlessly and perfectly merges two different versions of the same source file)? Or, what if I realize, after creating all of my swell VMs, that I failed to install one of my more useful utilities? Well, at present, I have to install these things separately in each VM. That’s a time-consuming task if taken all at once, or an annoyance if done piecemeal each time I discover the missing utility in a particular VM.

I was hoping that a feature of VMWare Workstation, called linked clones, was the answer (it’s not). A linked clone is a clone that is based on an existing VM. When I first started looking into linked clones, I thought my problems were solved. But after reading more closely, the VMWare Workstation help makes it clear that a linked clone is associated with a particular snapshot of an existing VM, and that subsequent changes to the original VM do not appear in the clone.

I was really hoping that I could create a linked clone to a base VM, like Database Base, and then perform maintenance only to Database Base. For example, if a Windows update is released, I was hoping I could update only Database Base, and all the linked clones created from it would automatically have the updates. That’s not how it works. Even with linked clones, the individual clones need Windows updates (Oh, the humanity!).

While there is a slight performance decrement when using linked clones (not really an issue; we're not trying to run games on these systems), there is a benefit. Specifically, the individual linked clones, while requiring that you keep around a copy of the original VM that you cloned, take up much less disk space than full clones. For example, one full clone I have takes up 19 GB, while a comparable linked clone consumes 6.5 GB of disk space. This makes the linked clone much more convenient to back up and restore.

What’s The Solution?

As I said, this approach has some serious benefits, but it doesn’t solve all of the problems, either. Is there a perfect solution? I don't know.

Some developers I've talked to have used a system like this, and report that they are very pleased. In those cases, however, they had only one guest OS to maintain. And, either they only needed one development tool, or they installed all of their development environments on the single guest OS. While that might work, these guest OSes tend to get very large, and increase the likelihood that one tool (Visual Studio, for example) may introduce issues with another (say Eclipse).

Other developers I know accept the predictable cycle of re-installation of the OS and all tools. Of these, one of the more compelling approaches involved not only installing all tools and development environments, but also copying all installation CDs and DVDs to a directory or partition on the system. In addition, each time they install a service pack, they first download it to a folder alongside the installation disk images.

So long as you have a good image of your base operating system and a backup of your installation images, and keep a handy record of your serial numbers, registration keys, and the like, re-installation goes much faster. It still takes time (days?), but much less time than when installing from disks.

So, while I'm pretty satisfied with my current setup, I'm still looking for something better. I’d like to hear what you think, and what you’ve done to address these issues.

Copyright © 2009 Cary Jensen. All Rights Reserved

Monday, December 14, 2009

In-Memory DataSets: ClientDataSet and .NET DataTable Compared: Part 3: Populating In-Memory DataSets

In the preceding article in this series I showed you how to create in-memory datasets at runtime (as well as how to define a ClientDataSet’s metadata at design time). In this third installment in this series I will demonstrate how to populate your in-memory dataset.

In short, data can be inserted into an in-memory dataset in one of four ways. These are:

  1. You can add data programmatically
  2. You can load previously saved data
  3. You can retrieve data from a database
  4. You can permit an end user to enter data manually

Each of these techniques is described in the following sections.

Writing Data Programmatically

After creating a ClientDataSet, you use its TField instances to add data programmatically. Alternatively, you can use the ClientDataSet's InsertRecord or AppendRecord methods to add data directly. The following code sample demonstrates adding a record using TField instances:

ClientDataSet1.Insert;
ClientDataSet1.FieldByName('Account ID').AsInteger := 1001;
//This next line assumes that the second column is a string field
ClientDataSet1.Fields[1].AsString := 'Frank Borland';
ClientDataSet1.Post;
//Adding a record using the InsertRecord method:
ClientDataSet1.InsertRecord([1002, 'Phillipe Kahn']);

With .NET DataTables, you add a DataRow instance, which you then populate with data. This is demonstrated in the following code segment.

//Add two records to the DataTable
var DataRow1: DataRow := DataTable1.NewRow();
//Referencing columns using column name
DataRow1['Account ID'] := 1001;
DataRow1['Customer Name'] := 'Frank Borland';
DataTable1.Rows.Add(DataRow1);

DataRow1 := DataTable1.NewRow();
//Referencing columns using ordinal position
DataRow1[0] := 1002;
DataRow1[1] := 'Bill Gates';
DataTable1.Rows.Add(DataRow1);

Loading Data From Files or Streams

There are a variety of ways to load a ClientDataSet from a file or stream. One of the most common is to invoke the ClientDataSet's LoadFromFile or LoadFromStream methods. This is shown in the following code sample:


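A minimal call might look like the following (the file name here is a placeholder; any file previously created with SaveToFile will do):

```pascal
//Load previously saved data; metadata is reconstructed from the file
ClientDataSet1.LoadFromFile('c:\mydata.xml');
```

LoadFromStream works the same way, taking a TStream instance instead of a file name.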
You can also use the XMLData property, which is a read/write property that represents the ClientDataSet's data as a string of XML. The following line of code shows how you can load a ClientDataSet from a memo field of a table:

ClientDataSet1.XMLData :=
ClientDataSet2.FieldByName('Hold Data').AsString;

Loading a DataTable from a file or stream can either be done directly (ADO.NET 2.0 or later) or can be done using a DataSet that contains the DataTable (ADO.NET 1.1). The following line of code demonstrates this technique:

DataTable1.ReadXml('c:\netdata.xml', XmlReadMode.ReadSchema);

It is important to note that both ClientDataSets and DataTables that are loaded from a file or stream will obtain their metadata and data store during the loading process. In other words, it is not necessary to define the structure of a ClientDataSet or DataTable prior to loading it from a file or stream.

The Other Side: Saving Data to Files or Streams

Of course, for you to be able to correctly load your data into a ClientDataSet or DataTable, the data that you are loading must be in the correct form. Although it may be possible to transform an incompatible XML file into the correct format (using XSLT or Delphi's XML Mapper Utility), in most cases the file or stream that you are loading is one that was previously created by saving a dataset to a file or stream.

With ClientDataSets, you save your data to a file or stream using the SaveToFile or SaveToStream methods. The following is an example of how this may look:

ClientDataSet1.SaveToFile('c:\mydata.xml', dfXML);

Similarly, saving a ClientDataSet's data to a memo field in a database looks something like this:

ClientDataSet2.FieldByName('Hold Data').AsString :=
  ClientDataSet1.XMLData;

If you'd rather store your data in a more compressed format (the native CDS format takes 20 to 60 percent less space than the XML format), you can do something like this:

var ms: TMemoryStream;
ms := TMemoryStream.Create;
ClientDataSet1.SaveToStream(ms, dfBinary);

You save the contents of a .NET DataTable using the WriteXml method. While there are a number of overloaded versions of this method (16 at last count), one of the more interesting parameters is the XML write mode.

There are two write mode enumeration values that are typically used by developers writing out the contents of a DataTable. These are XmlWriteMode.WriteSchema and XmlWriteMode.DiffGram.

When you call WriteXml with the WriteSchema enumeration, DataTable metadata is written to the XML file in the form of a schema definition. This information is required in order for a DataTable loading the saved XML to accurately reconstruct the metadata of the DataTable.

By comparison, if you use XmlWriteMode.IgnoreSchema, the DataTable will try to reconstruct the metadata based on the data it encounters while loading, which is rarely 100 percent correct. For example, if you are loading a saved DataTable from XML, and there are string fields, the DataTable will estimate the size of the string fields based on the longest string found in the XML file, which may be substantially shorter than the original DataTable's metadata permits.

Like WriteSchema, XmlWriteMode.DiffGram writes schema information into the XML file. DiffGram writes the change cache information as well, which makes this enumeration essential if you want to persist the DataTable's state. Recall that the change cache is crucial if you want to be able to write the changes back to an underlying database.

The following example shows a DataTable and its change cache being written to an XML file.

DataTable1.WriteXml('c:\savedat.xml', XmlWriteMode.DiffGram);

If you need to save a DataTable as text, which is what you need to do in order to persist it to a memo field of a database, you must write your XML to a descendant of TextWriter (an abstract class). The following example shows a DataTable, and its change log, being persisted to a memo field in a database using a StringWriter (obviously a TextWriter descendant).

var StringWriter1: StringWriter;
StringWriter1 := StringWriter.Create;
DataTable1.WriteXml(StringWriter1, XmlWriteMode.DiffGram);
DataTable2.Rows[0]['Current Data'] :=
  StringWriter1.ToString;

Retrieving Data From a Database

Loading data into a ClientDataSet from a database requires the use of a DataSetProvider. The DataSetProvider, in turn, points to a TDataSet descendant. When you make the ClientDataSet active, the DataSetProvider opens the TDataSet and navigates the result set, loading the data from the TDataSet into an OleVariant. Once the DataSetProvider completes this navigation, it (usually) closes the TDataSet (based on a number of factors, which I will not go into now), and populates the ClientDataSet's Data property with the OleVariant.

This process is demonstrated in the following code segment:

DataSetProvider1.DataSet := SqlDataSet1;  //using dbExpress
ClientDataSet1.ProviderName := DataSetProvider1.Name;
ClientDataSet1.Open;

Note that when you use this technique, both the ClientDataSet and the DataSetProvider must be owned by the same owner. If that is not the case, or if the DataSetProvider does not have a name, you can use the following technique:

DataSetProvider1.DataSet := Query1;
ClientDataSet1.Data := DataSetProvider1.Data;

In .NET, you acquire data into a DataTable from a database using the Fill method of an instance of a DbDataAdapter. Before you can call Fill, the DbDataAdapter must have a SQL SELECT statement, in the form of a DbCommand instance, assigned to its SelectCommand property. The following code segment demonstrates loading a DataTable using a DataStoreDataAdapter:

Connection1 := DataStoreConnection.Create;
Connection1.ConnectionString := 'host=LocalHost;user=sysdba; ' +
'password=masterkey;database="C:\Users\Public\Documents\' +
'Delphi Prism\Demos\database\databases\BlackfishSQL\employee"';
Command1 := Connection1.CreateCommand;
Command1.CommandText := 'select * from customers';
DataAdapter1 := DataStoreDataAdapter.Create;
DataAdapter1.SelectCommand := Command1;
DataTable1 := DataTable.Create();
DataAdapter1.Fill(DataTable1);

As is the case when you load data into an in-memory dataset from a file or a stream, you do not need to define the metadata or data store in advance of loading a dataset from a database. The metadata will be obtained from the result set you load into the dataset.

Direct Input From the User

Both ClientDataSets and DataTables can be associated with GUI (graphical user interface) controls and displayed to a user. The user can then use those GUI controls to view and edit the contents of the dataset.

If the in-memory dataset is created by loading the data from a file, stream, or database, it is not necessary to define the metadata of the dataset prior to presenting it to the user. If the dataset is not loaded by one of these techniques, it is necessary to define the metadata (otherwise the dataset will have no fields/columns, and therefore no data store).

In Win32 applications, you display data from datasets using data aware controls (primarily found on the Data Controls page of the Component Palette). At a minimum, you must point a DataSource to the dataset, and then assign the DataSource property of the data aware control to that DataSource. For those data aware controls that bind to a specific field in the dataset, you must also assign the DataField property.
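As a minimal sketch of that wiring (the component and field names here are placeholders):

```pascal
//Point a DataSource at the dataset
DataSource1.DataSet := ClientDataSet1;
//A multi-record control needs only the DataSource
DBGrid1.DataSource := DataSource1;
//A single-field control needs DataField as well
DBEdit1.DataSource := DataSource1;
DBEdit1.DataField := 'Customer Name';
```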

For .NET controls (any visual control that descends from System.Windows.Forms.Control), you bind the control using a BindingSource, its DataBindings property, or, if available, its DataSource property (and its DataMember property if necessary). (Binding System.Web.UI controls is somewhat similar, though differences do exist. This issue, however, is beyond the scope of this article.)
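In Delphi Prism, binding Windows Forms controls to a DataTable might be sketched like this (the control and column names are placeholders):

```pascal
//Route the DataTable through a BindingSource
BindingSource1.DataSource := DataTable1;
//A grid binds to the whole table
DataGridView1.DataSource := BindingSource1;
//A simple control binds one of its properties to a specific column
TextBox1.DataBindings.Add('Text', BindingSource1, 'Customer Name');
```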

In the next segment in this series I will describe how to programmatically navigate ClientDataSets and .NET DataTables.

Copyright © 2009 Cary Jensen. All Rights Reserved

Monday, November 30, 2009

In-Memory DataSets: ClientDataSet and .NET DataTable Compared: Part 2: Creating In-Memory DataSets

Creating an in-memory dataset involves defining its metadata and constructing its data store. Regardless of which in-memory dataset you are creating, this can be achieved in one of two ways. You can both define the metadata and construct the data store programmatically, or you can load the dataset from a query result set. When you load the dataset from a query result set, the contents of the query define the dataset's metadata and the data store is created by the method used to load the data.

With ClientDataSets you have an additional capability, that of defining the metadata at design time. This can be done either by using the Fields Editor to define TField definitions or by using the collection editor for the FieldDefs property to define TFieldDef entries.

No matter which design time approach you take (and these are not exclusive options, you can use a combination of TFields and TFieldDefs to define a ClientDataSet’s metadata), you can then right-click the ClientDataSet in the Delphi designer and select Create DataSet to create a design time instance of its data store.

Though the design time definition of a DataTable’s structure is not an option, DataTables support another feature that ClientDataSets do not. Specifically, after you load a DataTable from a query result set at runtime, you can then add additional metadata definitions to configure additional DataColumns. For ClientDataSets, once they have been instantiated and their data store created, no further changes can be made to their columns (represented by TField instances).
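For example, after filling a DataTable from a query result set, an additional (in this case calculated) column can still be defined. This is a sketch; the column names and the expression are hypothetical:

```pascal
//DataTable1 has already been filled from a query result set
var FullName: DataColumn :=
  DataTable1.Columns.Add('FullName', TypeOf(String));
//Expression columns are computed from other columns in the same row
FullName.Expression := 'LastName + '', '' + FirstName';
```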

But before going further, it is worth noting that this discussion of DataTables (as well as future discussions of DataViews, DataSets, and related ADO.NET classes) applies both to Delphi Prism and Delphi for .NET, as well as to other first-class .NET languages (such as C# and VB.NET). However, the syntax of the code samples is slightly different in some cases. Since Delphi Prism is the current (and preferred) Delphi solution for building .NET applications, the examples in this series will use Delphi Prism syntax, that of the Oxygene compiler.

The following code segment demonstrates how to create a ClientDataSet programmatically using its FieldDefs.AddFieldDef method.

ClientDataSet1 := TClientDataSet.Create(Self);
with ClientDataSet1.FieldDefs do
begin
  with AddFieldDef do
  begin
    Name := 'ID';
    DataType := ftInteger;
  end; //with AddFieldDef do
  with AddFieldDef do
  begin
    Name := 'Name';
    DataType := ftString;
    Size := 30;
  end; //with AddFieldDef do
  with AddFieldDef do
  begin
    Name := 'Date of Birth';
    DataType := ftDate;
  end; //with AddFieldDef do
  with AddFieldDef do
  begin
    Name := 'Active';
    DataType := ftBoolean;
  end; //with AddFieldDef do
end; //with ClientDataSet1.FieldDefs
ClientDataSet1.CreateDataSet;

Here is another example that uses the FieldDefs.Add method. This example creates a ClientDataSet identical to the one created in the preceding code.

ClientDataSet1 := TClientDataSet.Create(Self);
with ClientDataSet1.FieldDefs do
begin
  Add('ID', ftInteger, 0, True);
  Add('Name', ftString, 30);
  Add('Date of Birth', ftDate);
  Add('Active', ftBoolean);
end; //with ClientDataSet1.FieldDefs
ClientDataSet1.CreateDataSet;

Here is an example of code that creates a .NET DataTable.

var DataTable1: DataTable := DataTable.Create;
var DataColumn1: DataColumn :=
  DataColumn.Create('CustNo', TypeOf(Integer));
DataColumn1.AllowDBNull := False;
DataColumn1.Unique := True;
DataTable1.Columns.Add(DataColumn1);
//Here is another way to define the type
DataTable1.Columns.Add('LastName', TypeOf(String));
DataTable1.Columns['LastName'].DefaultValue := 'Not assigned';

The next article in this series will demonstrate how to populate an in-memory dataset once it has been created.

Copyright © 2009 Cary Jensen. All Rights Reserved

Tuesday, November 17, 2009

Have You Had An Effortless Delphi Unicode Migration?

In my last post I asked Delphi and C++Builder developers to share their Unicode migration success stories. In doing so, I think that I may have implied that converting existing Delphi applications to RAD Studio 2009 or RAD Studio 2010 presented a challenge. This is not always the case.

The truth is, some applications can be migrated to Unicode versions of RAD Studio with little or no modification. It all depends on the techniques that you have used in your applications.

I recently presented a five-day Delphi course that covered a wide range of topics, including creating and using DLLs, multithreaded programming, component creation, and basic database development. The target environment for this class was RAD Studio 2009.

Most of my code samples for this class had been originally created with earlier versions of Delphi, some going back as far as Delphi 1 and 2. While updating this course material to Delphi 2009 I had to migrate more than 60 projects. Of those projects, only a handful of them required modifications associated with Unicode. These were almost exclusively associated with the DLL examples where PChars were being passed as parameters. All the rest simply compiled and ran properly without changes.
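To illustrate the kind of change those DLL examples needed (the function and DLL names here are hypothetical): in Delphi 2009, PChar became an alias for PWideChar rather than PAnsiChar, so imports that must remain ANSI have to say so explicitly.

```pascal
//Before Delphi 2009, PChar was an alias for PAnsiChar:
//function GetAccountName(Buffer: PChar; Len: Integer): Integer; stdcall;

//In Delphi 2009 and later, PChar is PWideChar, so a DLL that expects
//single-byte characters needs the parameter type made explicit:
function GetAccountName(Buffer: PAnsiChar; Len: Integer): Integer; stdcall;
  external 'legacy.dll';
```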

Granted, these projects were very limited in scope, designed specifically to demonstrate a particular feature or technique. As a result, they lack the richness that is normally associated with client applications. Still, given the many different techniques that these code samples represented, it is impressive that most required no modification to run in Delphi 2009.

Is this typical, or is it the exception? I want to hear from you. Have you moved a Delphi application to RAD Studio 2009 or 2010 with little or no modifications? If so, your story is also important.

Send me a quick email with the subject line Delphi Migration. Tell me a little about the size and scope of the project, and how much effort your migration required. Other Delphi developers will be grateful.

Copyright © 2009 Cary Jensen. All Rights Reserved

Wednesday, October 28, 2009

Share Your Unicode Migration Story

Delphi developers everywhere (and I am including C++Builder developers here), I am asking for your help in preparing a white paper that I hope will serve the greater community for a long time to come. I have been asked to assemble a white paper on Unicode migration for Delphi and C++ developers, and I want to hear your story.

When Senior Director of Delphi Solutions Mike Rozlog originally approached me about putting together this paper, I was enthusiastic. However, there was a slight problem.

While I can easily write about issues concerning the size of PChars or the ins-and-outs of the new UnicodeString type, I felt that this white paper was too important to simply repeat the obvious. Instead, I really wanted to go for an “in the real world this is what you have to do” approach. And while it is a noble goal, it is simply too much for one developer. We each have our specializations, and each of us, if we do run into migration problems, is going to do so in our particular domain.

For example, I am primarily a database developer, and I often work at a pretty high level; querying databases, building user interfaces, generating reports, you know, the bread-and-butter kind of stuff. And as far as Unicode migration goes, I’ve got it easy. Several of my larger applications have converted with few or no problems.

But some of you work much closer to the metal, making extensive use of calls to the operating system API, using sophisticated third-party libraries, manipulating data at the bit and byte level, and generally working in a world where the size of characters and strings matters. If so, your Unicode migration has likely encountered challenges, some incidental and some significant, that you’ve had to solve along the way. It’s these real world stories that I want to hear about.

Now, you might be wondering, why have I been asked to write this white paper, given that I don’t normally have to deal with some of the more serious Unicode migration issues? It’s a valid question, and there is an equally valid answer. Another of my skills, beyond software development, is the ability to communicate clearly and simply about complex topics. With your input, I intend to produce a white paper that is organized, clear, and helpful to those whose Unicode migration is not yet started or complete. And, I want the paper to reflect the wealth of experience of the collective Delphi community on the subject, not just my own experience.

Here is what I am looking for. If you have solved a Unicode-related challenge in converting existing code or techniques to Delphi 2009 or later, I want your input. The basic input that I need is your name and email address (so I can contact you if I have questions), the company you work for (if you are willing to share that), a description of the problem, and a description of the solution. Code samples that demonstrate the problem and solution are preferred (though we do not want anything that is proprietary or non-disclosable).

If your solution is included in the white paper, you will be recognized for your contribution by name, unless you specifically request to remain anonymous.

You can email your contribution to Tim Del Chiaro at Alternatively, you can submit your contribution using the Get Published interface at the Embarcadero Developer Network (EDN). A detailed description of how to submit your contribution can be found in the EDN article located

Of course, you can also send questions or contributions to me at Please include the words “Unicode migration” in the subject line. Please also feel free to contact me if you need help putting together your story.

We are looking to release this white paper as soon as possible, so we necessarily have to set a deadline for contributions. Please submit your Unicode migration story by 5:00pm (GMT-8), November 27th, 2009.

Now, finally, a few words to please the Lawyers. Any Unicode migration contribution you send to me, Tim Del Chiaro, or upload to the Get Published interface, will be interpreted as explicit permission from you for Embarcadero Technologies to include your contribution, or descriptions of it, in the Unicode migration white paper, as well as in any other form, and that you have the right to grant this permission.

Thank you for your consideration. I sincerely hope to hear from many of you with your stories, and look forward to producing a paper that will help all Delphi developers with their Unicode migration challenges.

Copyright © 2009 Cary Jensen. All Rights Reserved

Wednesday, September 30, 2009

In-Memory DataSets: ClientDataSet and .NET DataTable Compared: Part 1 Overview

As some of you know, I have been a big fan of Delphi's ClientDataSets since they were first introduced in Delphi 3 (that's way back in 1997). When .NET shipped, its data access framework, ADO.NET, also included an in-memory dataset, named the DataTable. (.NET also includes the DataSet class, but in most cases, the DataTable class bears the strongest resemblance to the ClientDataSet.)

Both ClientDataSets and DataTables are in-memory datasets, and as such, share a lot of features in common. On the other hand, they are radically different in a number of interesting ways. In this series of articles, which begins with this one, I will examine the general features of in-memory datasets, and provide a direct comparison between ClientDataSets and .NET DataTables.

This article begins with an introduction to in-memory datasets in general. In future posts I will provide explicit code examples of how to perform various tasks with these two datasets, including how to create them in code, reading and writing data, sorting, filtering, persisting, navigating, and so on.

I hope you enjoy.

Developing with Disconnected Datasets

Disconnected datasets are database table-like structures that are stored in memory. These types of datasets are sometimes referred to as cached datasets or in-memory datasets. In this series they will be referred to as in-memory datasets.

In-memory datasets are structured, high-performance, self-describing data structures temporarily stored in memory. A significant feature of in-memory datasets is that they maintain, and can persist, a change log. The change log permits you to programmatically determine what changes have been made to the data since some point in time, often when the data was originally loaded into the dataset. This information is essential if you need to persist these changes back to an original source, such as a Web service, underlying database, or other persistence mechanism.
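The change-log idea can be sketched outside of either framework. The following Python sketch is illustrative only: the class and method names are invented here and are not part of the ClientDataSet or ADO.NET APIs. It shows a minimal in-memory table that can report which rows were inserted, modified, or deleted since the data was loaded.

```python
# Minimal sketch of an in-memory dataset with a change log.
# All names here are invented for illustration; they are not
# the ClientDataSet or ADO.NET APIs.

class MemoryTable:
    def __init__(self, rows):
        # Snapshot the original state so changes can be detected later.
        self._original = {r["id"]: dict(r) for r in rows}
        self.rows = {r["id"]: dict(r) for r in rows}

    def insert(self, row):
        self.rows[row["id"]] = dict(row)

    def delete(self, row_id):
        del self.rows[row_id]

    def change_log(self):
        # Compare the current state against the load-time snapshot.
        inserted = [r for i, r in self.rows.items() if i not in self._original]
        deleted = [r for i, r in self._original.items() if i not in self.rows]
        modified = [r for i, r in self.rows.items()
                    if i in self._original and r != self._original[i]]
        return {"inserted": inserted, "deleted": deleted, "modified": modified}

table = MemoryTable([{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bob"}])
table.insert({"id": 3, "name": "Cay"})
table.rows[1]["name"] = "Anne"
table.delete(2)
log = table.change_log()
print(len(log["inserted"]), len(log["deleted"]), len(log["modified"]))  # 1 1 1
```

The real frameworks record considerably more detail (field-level old values, record status flags, and so on), but the principle is the same: the dataset carries enough information to reconstruct exactly what changed.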

Data persistence in the .NET framework is based on in-memory datasets, and Delphi has included this capability since Delphi 3 in the form of the ClientDataSet.

This series begins with an overview of the features that make in-memory datasets so useful, including their self-descriptive nature, ability to hold sophisticated relational data structures, their close association with XML, and their persistence and management of change information. How these features are surfaced in both ClientDataSets and .NET datasets is discussed in this section.

In-memory datasets are the cornerstone of modern software development. Nothing confirms this statement as much as Microsoft's commitment to in-memory datasets as a central aspect of the database framework in .NET, ADO.NET.

While most developers consider in-memory datasets for the presentation layer in applications, this use represents only a fraction of the possibilities for these powerful data structures. As this series will demonstrate, the characteristics of in-memory datasets make them a valuable tool for many different aspects of application development. The following are the essential features of in-memory datasets:

  • High performance

  • Self describing

  • Flexible

  • Change log managing

  • Persistable

Individually, these characteristics provide a compelling argument for using in-memory datasets in your applications. But it is the combination of these features in a single, easy to use class that makes them so valuable for a wide range of software features. The following sections look at each of these features in greater depth.

High Performance

In-memory datasets reside entirely in RAM (random access memory). Consequently, operations on the data they contain, including searches, filters, and sorts, are very fast. This is particularly true with respect to ClientDataSets, since these can have indexes on this data as well. But even for .NET datasets, which currently support a single index at any given moment (the primary index), data-related operations are many times faster than those that require disk reads (as is the case with a physical database).

Self Describing

In-memory datasets are formally designed around the concept of a database table. Unlike an array or sequence, whose data elements have a data type, and that's about it, the fields of a data table each have a name, a data type, and sometimes a data size (for example, the size of a text field or precision of a floating point number).

In addition, the fields of a data table may have constraints, such as a required field constraint, or referential integrity constraints when two or more in-memory tables are related. This information is typically referred to as metadata, which is data about data.

In ClientDataSets, you access the metadata of a dataset using the Fields property of the dataset, which contains a collection of TField instances. In .NET data tables, you access this information using the Columns property, which contains a collection of DataColumn instances.


Flexible

In-memory datasets are designed to hold nearly any kind of data that might be stored in a physical database. This includes primitive data values, such as integers, strings, real numbers, and date/time values. But it also includes variable length objects, such as memos and Blobs (binary large objects). As a result, an in-memory dataset can hold the pages of a Web site, PDF files, and even executables (.EXEs and .DLLs). If it can be stored in a file on disk, it can be stored in an in-memory dataset (obviously, subject to the limits imposed by your available RAM).

Change Log Managing

Both ClientDataSets and .NET DataTables have a change log. The change log permits you to manage the unresolved changes that have been posted to the dataset's data since you loaded it into memory. This management includes the ability to determine precisely what changes have occurred (which records were inserted, deleted, and field-level modifications), revert changes to their prior state, cancel all changes, or commit those changes permanently, thereby erasing the change log. With ClientDataSets, this change log is held in the Delta property. For .NET DataTables, you use the RowStateFilter of a DataView to access the change log.

To manage the change log for a ClientDataSet, you use its methods, such as RevertRecord, UndoLastChange, CancelChanges, and ApplyUpdates. In addition, you can use the RecordStatus, StatusFilter, and Fields properties to examine the change log contents.

With .NET DataTables, you use the methods of the DataTable and DataView classes to control the change log, including NewRow, DeleteRow, AcceptChanges, and RejectChanges. To examine the change log, you use the RowStateFilter and Rows properties.


Persistable

Of all the features supported by in-memory datasets, the ability to persist state is arguably the most powerful. Not only can you save an in-memory dataset's data, but you can save its change log as well. Specifically, it is possible to save the current state of an in-memory dataset to a file, Web service, or memo field of a database, and then to restore that dataset at a later time to its exact prior state. In short, there is absolutely no difference between the in-memory dataset prior to, and following, its persistence.

Consider the following scenario: After loading data into memory, and making several edits to an in-memory dataset, that dataset can be written to a file. At a future time, that dataset can be restored from the file, and the edits that were previously performed can be examined and rejected or accepted.

Furthermore, since the change log is restored to its exact prior state, that information can be used to resolve those edits to the underlying database from which the data was originally loaded. No information is lost during the time that the dataset is in storage, no matter how long its state was persisted.
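The round trip described in this scenario can be sketched in a few lines of Python. This is illustrative only: the function names and the JSON layout are invented for the sketch and are not the ClientDataSet or ADO.NET persistence formats. The key point is that both the current data and the load-time snapshot are written out together, so pending edits survive storage.

```python
# Sketch: persist an in-memory dataset together with its change-log
# snapshot, then restore both exactly. The function names and JSON
# layout are invented; this is not the ClientDataSet or ADO.NET format.
import json, os, tempfile

def save_state(rows, original, path):
    # Persist both the current data and the load-time snapshot,
    # so pending edits remain detectable after a restore.
    with open(path, "w") as f:
        json.dump({"rows": rows, "original": original}, f)

def load_state(path):
    with open(path) as f:
        state = json.load(f)
    return state["rows"], state["original"]

path = os.path.join(tempfile.gettempdir(), "dataset_state.json")
rows = [{"id": 1, "name": "Anne"}]      # current data (already edited)
original = [{"id": 1, "name": "Ann"}]   # snapshot taken when loaded
save_state(rows, original, path)

restored_rows, restored_original = load_state(path)
# Nothing was lost: the pending edit is still there after the restore.
assert restored_rows == rows and restored_original == original
```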

Copyright © 2009 Cary Jensen. All Rights Reserved

Sunday, August 30, 2009

Notifications and the Advantage Database Server

The Advantage Database Server (ADS) is a high-performance, low maintenance, remote database server from Sybase iAnywhere. In addition to being one of the fastest database servers available, it is mind-numbingly simple to deploy. This, along with its ability to self-configure, and with no need for a formal database administrator in most situations, makes the Advantage Database Server a favorite with vertical market developers deploying small to medium size applications.

Over the past decade, each major release of ADS has added one or more impressive new features normally associated with enterprise-level database servers. Advantage 6 introduced stored procedures, views, users and groups, and data dictionaries. Advantage 7 added triggers to the mix, while Advantage 8 provided replication and online backup services.

ADS 9 was no different. Advantage Database Server 9.0 added support for notifications. (Note that the current release is ADS 9.1, which is a free upgrade for 9.0 users.)

Notifications provide a mechanism for communicating to client applications that something has occurred in the database. For example, notifications can be used to inform a client application that data in a critical table has changed. The client application can then use this information to refresh its view of that table, providing the end-user real-time access to the most current data. Or a notification might be used to signal a client that a record has been added to a special table created for the purpose of communicating messages from the system administrator to the end users. The client can then read the latest message and display it within its interface.

Client applications subscribe to an event by calling the sp_CreateEvent system stored procedure, to which they pass a string identifying the event of interest. The events themselves are signaled with calls to the system stored procedure sp_SignalEvent, which can only be executed from within a stored procedure or trigger.

Like sp_CreateEvent, sp_SignalEvent is passed a string that corresponds to events that one or more clients are expected to subscribe to. The call to sp_SignalEvent is passed an additional parameter that determines whether subscribing clients should be signaled immediately, or only after the current transaction is committed (assuming that the call to sp_SignalEvent was performed within a transaction), permitting the database to signal events that might be rolled back, in which case the client will not receive the notification.

Clients receive an event by calling either sp_WaitForEvent or sp_WaitForAnyEvent. Both of these procedure calls are synchronous, or blocking. Specifically, the call to either of these methods will not return until an event is signaled by the database, or the call times out. The timeout, which is specified by a parameter passed to the wait procedures, can either be infinite, or limited to a specified number of milliseconds. (Note that the wait call times out immediately if the database has already signaled the event since the last wait call.)

Because sp_WaitForEvent and sp_WaitForAnyEvent are blocking calls (the call does not complete until a signal is received or the timeout expires, whichever comes first), most developers who need real-time notifications from the database will employ multithreaded techniques. In other words, a call to sp_WaitForEvent or sp_WaitForAnyEvent will almost always be performed by a secondary thread, which is designed to specifically await the notification without interfering with the primary thread of execution.
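The blocking-wait pattern is the same one used with ordinary thread-synchronization primitives. As a rough Python analogue (this is not the Advantage API; threading.Event stands in for sp_WaitForEvent, and event.set stands in for sp_SignalEvent), a worker thread blocks until the event is signaled or a timeout expires:

```python
# Rough analogue of a blocking wait for a server-side signal.
# threading.Event stands in for sp_WaitForEvent; this is not the ADS API.
import threading

event = threading.Event()
results = []

def worker():
    # Blocks until signaled or until the 2-second timeout expires,
    # just as sp_WaitForEvent blocks its calling thread.
    signaled = event.wait(timeout=2.0)
    results.append("signaled" if signaled else "timed out")

t = threading.Thread(target=worker)
t.start()
event.set()          # plays the role of sp_SignalEvent
t.join()
print(results[0])    # signaled
```

As with the Advantage procedures, the wait returns a value indicating whether it was signaled or timed out, which the caller tests before acting.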

Here is an example of a typical notification scenario. Before a client application begins displaying a table that must always display the most current data to the user, the client will subscribe to an event associated with updates to that table. The client does this by calling sp_CreateEvent, passing the string you have created to uniquely identify the event.

On the server, you create AFTER INSERT, AFTER UPDATE, and AFTER DELETE triggers on the underlying table. From these triggers, you call sp_SignalEvent, again using the same string that the client applications use to subscribe to the event.

As soon as one of the client applications begins displaying the critical data, the client creates a new thread that then calls either sp_WaitForEvent or sp_WaitForAnyEvent. Code that immediately follows the call to the wait procedure typically tests whether the call has timed out or has been signaled, which can be determined by the values returned by this stored procedure call.

If the call was signaled, the thread will attempt to lock a synchronization object (such as a critical section or use a monitor to lock an object) and update the display of the data. This synchronization object must also be shared with the thread responsible for the user interface, in order to prevent the multiple threads from interfering with each other.

Once the display has been updated, the notification thread releases the synchronization object and loops back to another call to the wait procedure.

While this type of programming requires careful consideration of shared resources, such as the user interface elements that display the data, when done properly, any signals received by the clients can be reacted to immediately as opposed to enduring the delays inherent with polling (the periodic check for a change in some state).

Like ADS, the Advantage Local Server (ALS) also supports notifications. However, due to the local server nature of ALS, and its lack of a centralized service, ALS notifications are much less responsive than those supported by ADS.

Copyright © 2009 Cary Jensen. All Rights Reserved.

Thursday, July 30, 2009

Introduction to Language Integrated Query with Delphi Prism, Part 2

In part 1 of this 2 part series I provided you with a basic introduction to Language Integrated Query, or LINQ. In this second part I take a look at several specific implementations of LINQ, including LINQ to Objects, LINQ to DataSets, and LINQ to XML. I also have an opportunity to introduce iterator methods, special methods that you implement to return a sequence for use by LINQ queries.

LINQ to Objects

LINQ to Objects is any version of LINQ that does not employ a LINQ provider. Consequently, all LINQ queries shown in part 1 of this series are examples of LINQ to Objects.

Here is another example.

var Customers := GetCustomers;
var query := from c in Customers
             where (c.Age > 10)
               and (c.Age < 40)
               and (c.Active = true)
             select c;
for each m in query do
  ResultList.Items.Add(String.Format("Name: {0}, Age={1}",
    [m.Name, m.Age]));

Note: If you had wanted to create a query that produced a sequence of objects that only included the Name and Age members, you could have used the technique shown in part 1 where the select clause returned a newly created object (which was an anonymous object in that example, but could have been any compatible type).

In the preceding query, a sequence of Customer objects is returned by the GetCustomers function. This particular function is an example of an iterator method. Iterator methods are discussed in the following section.

Granted, this is a simple example. However, there are many different methods in the .NET framework class library that return arrays, collections, and other queryable data sources. As a result, it is possible to use LINQ to perform a wide variety of useful tasks.

Examples of some of these uses can be found in the Windows SDK for .NET 3.5. For example, there are a number of examples of how to use LINQ to explore files and directories on the local file system. This is possible because the System.IO.Directory.GetFiles method returns an array of strings containing the names of the files in a given directory. Arrays, as you’ve seen, can be used in LINQ queries.
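The same idea is easy to picture in any language with first-class sequences. In this rough Python analogue (the file names are made up; a real program would get them from a directory listing, as Directory.GetFiles does), a list of file names is filtered declaratively:

```python
# Declarative filtering of a file list, analogous to a LINQ query
# over the string array returned by Directory.GetFiles.
files = ["readme.txt", "main.pas", "notes.txt", "build.log"]

# Roughly: from f in files where f ends with ".txt" select f
txt_files = [f for f in files if f.endswith(".txt")]
print(txt_files)  # ['readme.txt', 'notes.txt']
```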

Iterator Methods and Yield Statements

Iterators are special methods that return a sequence. You declare an iterator by including the iterator keyword in the method signature. Each element in the sequence is specifically returned by the iterator through a call to yield.

This is demonstrated in the following code sample. First, the iterator is declared using the following statement.

method GetLowDigits: sequence of Integer; iterator;

Next, the implementation of the iterator includes one or more calls to yield, which, like exit, can be used to return a value (though yield does not cause a return from the method in the same way that exit does).

method MainForm.GetLowDigits: sequence of Integer;
begin
  for i: Integer := 1 to 9 do
    yield i;
end;

The following code demonstrates the use of this iterator.

var numbers := GetLowDigits;
for each n in numbers do
  ResultList.Items.Add(n.ToString);

Iterator methods are not executed when the sequence is assigned to a sequence reference. Specifically, the var declaration and initialization in the preceding code segment did not cause the GetLowDigits iterator to execute. Instead, it was the specific iteration over the sequence in the for each loop that caused the iterator to be executed. Furthermore, the iterator was not executed all at once. Instead, the iterator is executed up until it returns the first element of the sequence, that element is then used by the for each loop, which will then call back to the iterator to get the next value in the sequence, and so on.
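Python generators behave the same way, which makes this deferred execution easy to demonstrate: nothing runs at assignment time, and the body executes only as far as each yield when the loop asks for the next value.

```python
# Deferred execution: the generator body does not run until iteration.
calls = []

def get_low_digits():
    for i in range(1, 10):
        calls.append(i)   # record each value as it is produced
        yield i

numbers = get_low_digits()   # nothing has executed yet
assert calls == []

first = next(numbers)        # runs the body only up to the first yield
assert first == 1
assert calls == [1]

rest = list(numbers)         # iterating pulls the remaining values
assert rest == [2, 3, 4, 5, 6, 7, 8, 9]
```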

The yield keyword can be used to return either a single element of the sequence or a reference to a sequence (which could be an array, a method that returns an array, or other similar reference).

Here is another example of an iterator and a sequence. This code requires a type declaration, which looks like the following.

Customer = class
private
  FName: String;
  FAge: Integer;
public
  property Name: String read FName write FName;
  property Age: Integer read FAge write FAge;
end;

Here is the declaration of the iterator.

method GetCustomers: sequence of Customer; iterator;

And here is the implementation of the iterator.

method MainForm.GetCustomers: sequence of Customer;
begin
  yield new Customer(Name := 'Allan', Age := 10);
  yield new Customer(Name := 'Bob', Age := 25);
  yield new Customer(Name := 'Craig', Age := 18);
end;

Now that you understand the iterator, it is easy to see that the following LINQ query, which was introduced in the preceding section, populates the listbox named ResultList with the values Name: Bob, Age=25 and Name: Craig, Age=18.

var Customers := GetCustomers;
var query := from c in Customers
             where (c.Age > 10) and (c.Age < 40) and (c.Active = true)
             select c;
for each m in query do
  ResultList.Items.Add(String.Format("Name: {0}, Age={1}", [m.Name, m.Age]));

The implementation of the iterator in this case used the named parameters feature of Delphi Prism’s nameless constructors to return each of the customers (which in reality would probably have been populated with data from an external source, such as a database). Also, the declaration of Customers used type inference to determine that the Customers variable was a sequence of Customer.

What is LINQ to DataSets

LINQ to DataSets is a LINQ provider that permits you to use LINQ queries against DataTables, DataRows, and other classes in the ADO.NET framework. In order to use LINQ to DataSet, you must add the System.Data.DataSetExtensions assembly to the references section of your Delphi Prism project.

This section is designed to provide you with a brief introduction to LINQ to DataSet. As such, it does not go into ADO.NET, which is the data connectivity framework in .NET.

Note: LINQ to DataSet should really probably be called LINQ to DataTable or LINQ to DataRows, or even LINQ to ADO.NET. None of the operations in LINQ to DataSet are performed on instances of the DataSet class. Instead, they are performed on DataTables and DataRows.

As you learned earlier in this article, LINQ queries can be performed on any object that implements the IEnumerable&lt;T&gt; interface. DataTables (and DataRows) do not implement IEnumerable. However, they both have extension methods that support IEnumerable. These are defined in the DataTableExtensions class, which is found in the System.Data.DataSetExtensions assembly (which is why you must add this assembly to your project references).

There are only three extension methods in the DataTableExtensions class. These are AsDataView, AsEnumerable, and CopyToDataTable. For DataTables, the key method is AsEnumerable, which returns an enumerable collection of DataRows (records).

This is demonstrated in the following code segment.

method MainForm.LINQToDataSet_Click(sender: System.Object;
  e: System.EventArgs);
var
  Connection: DataStoreConnection;
  Adapter: DataStoreDataAdapter;
  DataTable1: DataTable;
begin
  Connection := new DataStoreConnection(conStr);
  Adapter := new DataStoreDataAdapter(
    'SELECT * FROM CUSTOMER', Connection);
  DataTable1 := new DataTable;
  DataTable1.TableName := 'Customer';
  Adapter.Fill(DataTable1);
  var query :=
    from cust in DataTable1.AsEnumerable
    where cust['ON_HOLD'].Equals(DBNull.Value)
      and (cust.Field<String>('STATE_PROVINCE') = 'CA');
  for each c in query do
    ResultList.Items.Add(String.Format('Customer Number: {0}' +
      ' Company Name: {1}', c['Cust_No'].ToString,
      c['Customer'].ToString));
end;
Once the DataTable has been populated with data from the SQL query, the AsEnumerable method of the DataTable is called, and the resulting object is used in the LINQ query. Interestingly enough, the object returned from the AsEnumerable method is a generic IEnumerable&lt;T&gt;, where T is DataRow.

As you consider this code segment, you might conclude that you would be better off using a SQL statement that performs the filtering of the results, rather than use a LINQ query. While that may be true in some instances, if you want to load a DataTable once, and perform numerous queries on the returned value, the LINQ query offers performance benefits. (Note that another alternative is to use a DataView to filter and sort the DataTable results.)
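The trade-off described above can be sketched in miniature: fetch the rows once, then run any number of queries against them in memory, with no further round trips to the database. This Python sketch uses made-up rows standing in for the filled DataTable; the comprehensions play the role of the LINQ queries.

```python
# Load once, query many times: every "query" below runs against
# memory, with no further round trips to the database.
rows = [  # made-up result set standing in for the filled DataTable
    {"cust_no": 1, "state": "CA", "on_hold": None},
    {"cust_no": 2, "state": "NY", "on_hold": None},
    {"cust_no": 3, "state": "CA", "on_hold": True},
]

in_ca = [r for r in rows if r["state"] == "CA"]
not_on_hold = [r for r in rows if r["on_hold"] is None]
ca_not_on_hold = [r for r in in_ca if r["on_hold"] is None]

print([r["cust_no"] for r in ca_not_on_hold])  # [1]
```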

Note that in addition to LINQ to DataSet, the .NET framework provides the LINQ to SQL provider. Unlike LINQ to DataSet, which can be used against any database for which there is a .NET data provider, LINQ to SQL is specifically designed to work only with Microsoft SQL Server. LINQ to Entities is another LINQ provider that can be used against supported databases.

What is LINQ to XML

LINQ to XML provides a programming model, similar to XPath, with which you can create, edit, query, and save XML documents. LINQ to XML provides you an alternative to using the XML DOM (document object model), which is what is used by the XMLDocument class in the .NET framework. Like the DOM, LINQ to XML works with the XML document in memory. From there you can read, query, and write data to the XML document, after which you can stream it to a service or write it to a file, if you desire.

As is the case with the LINQ to DataSet topic that precedes this section, this discussion is designed to provide you with a basic introduction to LINQ to XML. It does not, however, cover general XML issues.

In fact, LINQ to XML is far more involved than LINQ to DataSet. This is because LINQ to XML not only supports features necessary for querying, but as mentioned earlier, an entire programming model for working with XML. As a result, this section will cover just enough information to get you started.

Unlike LINQ to DataSet, where most of the functionality is found in extension methods added to DataTable and DataRow, LINQ to XML, found in the System.Xml.Linq namespace, includes a large number of concrete classes that you can use to work with XML documents.

Of these classes, there are three that you are likely to use most often. These are XDocument and XElement, which descend from XNode, and XAttribute, which descends from XObject. (XNode, by the way, also descends from XObject.)

You can create a valid XML document using either XDocument or XElement, though XDocument provides a little more support for this. Specifically, there are certain advanced features of XML documents that you can access through the XDocument class which are unavailable through the XElement class. XAttribute, by comparison, is used to define attributes of XML elements.

But before you get started, at a minimum you will need to add the System.Xml.Linq namespace to your uses clause. In addition, you may have to add one or more of the following additional namespaces, depending on what features your code will use: System.Xml, System.Xml.Schema, System.Xml.XPath, and System.Xml.Xsl.

Creating XML Documents using LINQ to XML

As mentioned previously, you can create XML documents using either the XDocument or the XElement class. The following example demonstrates how to create an XML document using the XElement class.

element := new XElement('customers',
  new XElement('customer',
    new XAttribute('custno', 1001),
    new XElement('name', 'John Doe'),
    new XElement('address', '101 Broadway Avenue'),
    new XElement('city', 'New York'),
    new XElement('state', 'NY'),
    new XElement('zip', '00123')),
  new XElement('customer',
    new XAttribute('custno', 1002),
    new XElement('name', 'John Doe'),
    new XElement('address', '1001 Main Street'),
    new XElement('city', 'Los Angeles'),
    new XElement('state', 'CA'),
    new XElement('zip', '90123')));
This code creates an XML file with a root element named customers. The root element has two child elements named customer. Each child element has one attribute and five child elements. The following is how the XML file created by this code looks.

<?xml version="1.0" encoding="utf-8"?>
<customers>
  <customer custno="1001">
    <name>John Doe</name>
    <address>101 Broadway Avenue</address>
    <city>New York</city>
    <state>NY</state>
    <zip>00123</zip>
  </customer>
  <customer custno="1002">
    <name>John Doe</name>
    <address>1001 Main Street</address>
    <city>Los Angeles</city>
    <state>CA</state>
    <zip>90123</zip>
  </customer>
</customers>
Creating this same file using an XDocument looks similar, though there are differences.

The documentation for LINQ to XML includes a large number of demonstrations of XML definition using declarations like the one provided previously. These declarations, however, are anything but flexible. In other words, they always create the same XML file, which is rarely useful.

The following example, which obtains its data from an ADO.NET data reader, produces an XML file similar in structure to the preceding one, except that its data is entirely based on the results of a SQL query.

Connection := new DataStoreConnection(conStr);
Command := Connection.CreateCommand;
Command.CommandText := 'select cust_no, customer, ' +
  'address_line1, city, state_province, ' +
  'postal_code from customer';
DataReader := Command.ExecuteReader;
document := new XDocument(
  new XElement('customers'));
while DataReader.Read do
begin
  attribute := new XAttribute('custno', DataReader.GetValue(0));
  element :=
    new XElement('customer',
      attribute,
      new XElement('name', DataReader.GetString(1)),
      new XElement('address', DataReader.GetString(2)),
      new XElement('city', DataReader.GetString(3)),
      new XElement('state', DataReader.GetString(4)),
      new XElement('zip', DataReader.GetString(5)));
  document.Root.Add(element);
end;

Querying with LINQ to XML

Querying using LINQ to XML is similar to other LINQ queries, though there are classes and methods that you use in LINQ to XML that are not found in other LINQ technologies. In short, once you have a reference to a queryable object (an IEnumerable, a sequence, or other similar construct), you use query statements to retrieve the data you are interested in. The following example demonstrates a LINQ to XML query.

element := XElement.Load(XmlFileName);
var childList :=
  from el in element.Elements
  where String(el.Element('state')) = 'CA'
  order by String(el.Attribute('custno')) desc
  select new XElement('document',
    new XElement('CustomerName', el.Element('name')),
    new XElement('Address', el.Element('address')),
    new XElement('CityStateZip',
      el.Element('city').Value + ', ' +
      el.Element('state').Value + ' ' +
      el.Element('zip').Value));
for each e in childList do
  ResultList.Items.Add(e.ToString);

This query returns a sequence of XElement references, which have a structure similar to, though different than, the original XElements. For example, the new XElements are not named customer, they are named document. Furthermore, there are no attributes in the new XElements, and there are only three child nodes.

The preceding example demonstrated a query using the XElement class. The following code shows the same basic query, however an XDocument is used in this example.

document := XDocument.Load(XmlFileName);
var childList :=
  from el in document.Descendants
  where String(el.Element('state')) = 'CA'
  order by String(el.Attribute('custno')) desc
  select new XElement('document',
    new XElement('CustomerName', el.Element('name')),
    new XElement('Address', el.Element('address')),
    new XElement('CityStateZip',
      el.Element('city').Value + ', ' +
      el.Element('state').Value + ' ' +
      el.Element('zip').Value));
for each e in childList do
  ResultList.Items.Add(e.ToString);

Modifying Existing XML using LINQ to XML

Once XML is in memory, you can add elements or attributes, remove elements or attributes, as well as modify the data for elements or attributes. If you load the XML from an existing source, such as a file, make modifications, and then save the XML back to the original source, you have effectively changed the XML.

This is demonstrated in the following code.

document := XDocument.Load(XmlFileName);
var childList :=
  from el in document.Descendants
  where String(el.Element('state')) = 'CA'
  order by String(el.Attribute('custno')) desc
  select el;
//make changes
for each e in childList do
  e.SetElementValue('state', 'California'); // an example modification
//save the changes
document.Save(XmlFileName);

There is an important characteristic of the preceding code that you should note. Specifically, the select clause of the LINQ to XML query returned a sequence of XElements from the original XDocument. While these may represent only a subset of the XElements returned by the Descendants method of the XDocument, they are still child elements of the original XDocument.

When changes are made to the items in the sequence returned by the query, those changes are actually being made to the corresponding XElements in the XDocument. This is why the changes are preserved when the subsequent call to XDocument.Save is made.

By comparison, if the select clause had created new XElements, as the last query in the preceding section demonstrated, the subsequent changes would have been performed on the newly created objects, which are not child elements of the XDocument.
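This reference-versus-copy distinction is easy to demonstrate outside of .NET as well. The following sketch uses Python's standard xml.etree.ElementTree module (a rough analogue only; it is not LINQ to XML): elements selected from a tree are live references into it, so changing them changes the document, while newly constructed elements are not part of the tree at all.

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    "<customers>"
    "<customer><name>Ann</name><state>CA</state></customer>"
    "<customer><name>Bob</name><state>TX</state></customer>"
    "</customers>")

# Selecting existing elements: these are references into the tree,
# so changing them changes the document itself.
for cust in [c for c in root if c.findtext('state') == 'CA']:
    cust.find('state').text = 'California'
assert 'California' in ET.tostring(root, encoding='unicode')

# Constructing new elements: changes affect only the new objects,
# not the original tree.
copies = [ET.Element('document') for c in root]
copies[0].text = 'changed'
assert 'changed' not in ET.tostring(root, encoding='unicode')
```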


This two-part series has provided you with a brief introduction to LINQ with Delphi Prism. Here you have learned how to construct LINQ queries, explored the nature of LINQ to Objects, defined and implemented iterators, and been given a preview of LINQ to DataSets and LINQ to XML.

Copyright © 2009 Cary Jensen. All Rights Reserved

Tuesday, June 30, 2009

Introduction to Language Integrated Query with Delphi Prism: Part 1

Language Integrated Query, or LINQ (pronounced link), is a declarative query language developed by Microsoft for the .NET Framework. In a declarative language you specify what result you want to achieve without needing to specify how those results are produced.

SQL SELECT statements are an example of declarative programming. With a given SQL statement you specify what data you want returned. It is up to the SQL engine to determine how that data is derived.

Traditional Delphi, on the other hand, is categorized as imperative programming. When writing Delphi code, you generally describe in detailed terms how you want the result obtained. You do this using control structures, expressions, and explicit calls to functions and procedures.
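The contrast between the two styles can be made concrete in a language-neutral way. Here is a small Python illustration (Python rather than Delphi, purely for brevity): the loop spells out how to build the result, step by step, while the comprehension simply declares what the result should contain.

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: spell out *how* to build the result, step by step.
evens_imperative = []
for n in data:
    if n % 2 == 0:
        evens_imperative.append(n)

# Declarative: state *what* you want; the runtime decides how.
evens_declarative = [n for n in data if n % 2 == 0]

assert evens_imperative == evens_declarative == [4, 2, 6]
```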

This article is designed to provide you with a general overview of using LINQ with Delphi Prism, the latest .NET development tool from Embarcadero Technologies and RemObjects. For a more detailed discussion of LINQ and its related topics, refer to the latest version of the .NET framework SDK.

Overview of LINQ

Language Integrated Query comes in a variety of different flavors, depending in part on the version of the .NET Framework you are compiling against. For example, there is LINQ to Objects, LINQ to DataSets, LINQ to SQL, and LINQ to XML. Each of these technologies, which were introduced in .NET Framework 3.5, provides classes you can use in conjunction with LINQ to work with objects in various domains of the .NET Framework.

In addition, some of the major new features that are emerging in .NET also are LINQ related. One example is LINQ to Entities, which is part of the Entity Framework. The Entity Framework is one of the latest database technologies being promoted by Microsoft.

Turning our attention to the general case, LINQ queries can be divided into three basic parts. These are

  1. A queryable source of data
  2. A query definition
  3. Query execution

The following sections describe each of these parts in greater detail.

Queryable Sources of Data

In strictly .NET terms, a queryable source of data is any object that implements the generic IEnumerable<T> interface, which includes List<T> and Dictionary<TKey, TValue>. In Delphi Prism, this also includes any collection that is declared as a sequence, as well as arrays (both fixed and dynamic).

Some of the LINQ technologies, such as LINQ to DataSet and LINQ to XML, are implemented through LINQ providers, which are support classes that enable LINQ operations. These providers usually include special methods that provide access to an IEnumerable<T> reference based on an object that otherwise does not support this interface. For example, LINQ to DataSet provides the AsEnumerable extension method to the DataTable class.
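The idea that anything enumerable is queryable is not unique to .NET. As a rough Python analogue, any object that implements the iterator protocol can be queried with a comprehension, much as any IEnumerable&lt;T&gt; can serve as a LINQ data source (the NumberStore class below is purely illustrative, not a .NET or Advantage type):

```python
class NumberStore:
    """A container that is not a list, but becomes 'queryable'
    simply by exposing iteration (Python's analogue of IEnumerable)."""
    def __init__(self, *values):
        self._values = list(values)

    def __iter__(self):
        return iter(self._values)

store = NumberStore(1, 2, 3, 4, 5, 6, 7, 8, 9)

# Because NumberStore is iterable, comprehensions can query it directly.
big = [n for n in store if n >= 7]
assert big == [7, 8, 9]
```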

The following variable declaration defines a simple queryable data source. In this case, the data source is a sequence.

var seq: sequence of Integer := [1,2,3,4,5,6,7,8,9];

Query Definitions using Query Syntax

A query definition consists of two parts, a query variable and a query expression. Furthermore, query expressions are defined using one of two techniques. The first is called query syntax, and the second is called method syntax. This section describes query syntax. Method syntax is described later in this article.

Query expressions are defined using a combination of one or more LINQ statements, comparison operators, extension methods, and value references. At a minimum, a LINQ query includes a from clause, which defines an alias and a data source. Consider the following code segment:

var numbers: sequence of Integer := [1,2,3,4,5,6,7,8,9];

var myquery := from c in numbers select c;

The first line of code defines the data source, and the second line defines the query. The query in this case selects all items from the sequence numbers, as can be seen in the following figure (the code that populates the list box has not been shown yet).

Figure 1. The LINQExamples project

The c in the preceding query is an alias, and it is used to reference items in the numbers sequence. The select part of the query, which, contrary to SQL convention, appears at the end of the statement, defines what the query returns. In this case, the query returns each of the items in the sequence.

The preceding query really just creates another sequence, one no different from the sequence it queried. Most LINQ queries, by comparison, select a subset of the data, perform data transformations, or carry out other similar operations. The following query includes a where clause that limits the myquery sequence to those numbers between 4 and 8, inclusive. As you can see, the alias is essential for performing this operation.

var numbers: sequence of Integer := [1,2,3,4,5,6,7,8,9];
var myquery := from c in numbers where (c >= 4) and (c <= 8) select c;

In both queries shown so far in this section, the select clause could have been omitted. Specifically, if what we are returning is exactly the same type of item as in the sequence, we can omit the select clause altogether. In other words, omitting the select clause is somewhat similar to a SELECT * clause in a SQL query. The following query, which does not have a select clause, produces the same result as the preceding one.

var numbers: sequence of Integer := [1,2,3,4,5,6,7,8,9];
var myquery := from c in numbers where (c >= 4) and (c <= 8);

The select clause in LINQ is required when you are querying objects with one or more members and want to return only a subset of those members, or want to return transformed data. For example, the following query returns a sequence of objects whose values are equal to the square of the original values. The objects returned by the select clause, in this case, are anonymous types. (An anonymous type is a type that is never explicitly declared, one of many advanced language features supported by Delphi Prism.)

var numbers: sequence of Integer := [1,2,3,4,5,6,7,8,9];
var myquery := from c in numbers select new class(Value := c * c);

LINQ queries can be relatively complicated. In addition to the from, where, and select keywords shown here, Delphi Prism also supports the following LINQ operations: order by, group by, join, with, take, skip, reverse, and distinct. In addition, LINQ queries can make use of lambda expressions, both in the where and select parts (Lambda expressions are discussed briefly later in this article). Finally, similar to SQL, LINQ queries can include subqueries.

Executing LINQ Queries

Defining a query and executing a query are two distinct steps (though as you will learn shortly, Delphi Prism provides a mechanism for combining these operations). Specifically, the query defined in the preceding section identifies what the query will do, but does not execute the query. In LINQ this is referred to as deferred execution.

You cause the query to be executed by iterating over it using a for each loop. Each execution of the loop brings back another instance of whatever the query returns. For example, if you used a for each loop to iterate over the query shown in the preceding example, each iteration of the loop would return a different instance of the anonymous type.

The following is the entire code sequence (short as it is), which shows how the Value member of each returned object is assigned to the list box (which is named ResultList in this example).

var numbers: sequence of Integer := [1,2,3,4,5,6,7,8,9];
var myquery := from c in numbers select new class(Value := c * c);
for each m in myquery do
  ResultList.Items.Add(m.Value.ToString);

When written to the list box, this query result looks like the following.

Figure 2. Data from an anonymous type returned from a LINQ query

Similar to the alias in the query definition, the for each loop defines a local variable that holds the value of the item that is returned in each iteration. In the preceding code, this variable is named m. In most cases, Delphi Prism uses type inference to determine the type of this variable (which is fortunate in this case, since we used an anonymous type).
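Deferred execution has a close analogue in Python generator expressions, which may make the behavior easier to picture (this is an analogy, not Delphi Prism code): defining the query performs no work, and iterating it is what runs it.

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
log = []

def square(n):
    log.append(n)          # record when work actually happens
    return n * n

# Defining the query performs no work yet (deferred execution).
myquery = (square(n) for n in numbers)
assert log == []

# Iterating the query is what executes it, one item at a time.
results = [value for value in myquery]
assert results == [1, 4, 9, 16, 25, 36, 49, 64, 81]
assert log == numbers
```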

Not all LINQ queries use deferred execution. To cause the results of the query to be retrieved immediately, you can use the IEnumerable extension methods ToList&lt;TSource&gt; or ToArray&lt;TSource&gt;. An example of this is shown in the following code.

var numbers: sequence of Integer := [1,2,3,4,5,6,7,8,9];
var MyResults: Array of Integer :=
  (from c in numbers where (c >= 4) and (c <= 8)).ToArray();
for each val in MyResults do
  ResultList.Items.Add(val.ToString);

Similarly, if you use any of the aggregate operations, such as Average, Count, First, and so forth, the query is executed immediately. This is demonstrated in the following code.

var numbers: sequence of Integer := [1,2,3,4,5,6,7,8,9];
var MyResults: Double := (from c in numbers).Average();
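Again by analogy with Python (illustrative only, not the .NET API): materializing a query with list() parallels ToArray and ToList, and an aggregate computed over the sequence executes at once rather than being deferred.

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# list() forces the query to run immediately, like ToArray/ToList.
my_results = list(n for n in numbers if 4 <= n <= 8)
assert my_results == [4, 5, 6, 7, 8]

# Aggregates also execute immediately, like Average or Count.
average = sum(numbers) / len(numbers)
assert average == 5.0
```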

LINQ Query Method Syntax

As mentioned earlier, LINQ query expressions can be defined using both query syntax and method syntax. Method syntax is implemented through extension methods defined in the System.Linq.Enumerable class, and these methods correspond roughly to the LINQ query syntax operators.

The truth is that when you use query syntax in your query expressions, the compiler converts these into method syntax. Another way to put this is that any query that you can write using query syntax can also be defined using method syntax. Interestingly, the opposite is not true. Specifically, there are some queries that you must define using method syntax, as there is no equivalent in query syntax.

Unlike in query syntax, there are no aliases in method syntax. Instead, the methods are called on your IEnumerable object directly, using dot notation. These operations are typically performed in a single expression, which often consists of a number of method calls chained together.

This is demonstrated in the following code segment, which produces a query equivalent to the range query listed in the preceding section “Query Definitions using Query Syntax.”

var myquery := numbers.Where(c -> c >= 4).
  Where(c -> c <= 8).OrderBy(c -> c).Select(c -> c);

The preceding statement is a single statement, even though it has been wrapped onto two lines due to the limits of the column space in this article. Fortunately, this wrapped form still compiles, since the compiler is designed to ignore white space. Also, the arguments of the Where, OrderBy, and Select methods are lambda expressions. Lambda expressions are discussed in the following section.

As with the query expressions, the select method call here is not necessary, since there is no transformation being performed on the returned items.

The syntax is a little different when your sequence contains more complex data. This can be seen from the following code, which is the method syntax version of the code segment described in the following section “LINQ to Objects.”

var Customers := GetCustomers;
var query := Customers.Where(Customer -> Customer.Age > 10).
  Where(Customer -> Customer.Age < 40).
  Where(Customer -> Customer.Active);
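The chained-call style itself is easy to sketch. The following Python class (entirely hypothetical; it is not the System.Linq.Enumerable API) shows the underlying design: each operation returns a new queryable wrapper, which is what makes the calls chainable.

```python
class Query:
    """Minimal sketch of LINQ-style method chaining: each call
    returns a new Query wrapping the transformed sequence."""
    def __init__(self, items):
        self._items = list(items)

    def where(self, pred):
        return Query(x for x in self._items if pred(x))

    def order_by(self, key):
        return Query(sorted(self._items, key=key))

    def select(self, proj):
        return Query(proj(x) for x in self._items)

    def to_list(self):
        return list(self._items)

numbers = [9, 4, 7, 2, 8, 5]
result = (Query(numbers)
          .where(lambda c: c >= 4)
          .where(lambda c: c <= 8)
          .order_by(lambda c: c)
          .select(lambda c: c)
          .to_list())
assert result == [4, 5, 7, 8]
```

Returning a fresh wrapper from every call is the design choice that lets the expression read as a pipeline, exactly as in the method-syntax examples above.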

Lambda Expressions

Lambda expressions are anonymous functions that can accept zero or more parameters, and typically return a value. The parameters appear on the left-hand side of the lambda operator, and are enclosed in parentheses (unless there are zero or one parameters, in which case the parentheses are optional).

The parameters, if present, are followed by the lambda operator, which is the -> character sequence in Delphi Prism (C# uses => as the lambda operator). When reading a lambda expression aloud, the lambda operator is spoken “goes to.” As a result, the first expression in the preceding query expression is read “customer goes to customer dot age greater than 10.”

The right-hand side of the lambda operator contains either an expression or a statement. When it is an expression (an expression lambda), its value is what the lambda expression returns. It is possible, however, to include a statement block that does not resolve to an expression. These lambdas are called statement lambdas. Statement lambdas should not be used in method calls when using method syntax.

In most cases, lambda expressions take advantage of the compiler’s ability to infer the type of the input parameters.
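For readers more familiar with Python, its lambda keyword plays the same role as Prism's -> operator: parameters on the left, a single returned expression on the right (the names below are illustrative only).

```python
# Delphi Prism: Customer -> Customer.Age > 10
# Python: parameters before the colon, the returned expression after it.
is_older_than_10 = lambda customer_age: customer_age > 10
assert is_older_than_10(25) is True
assert is_older_than_10(8) is False

# A zero-parameter lambda, and one with two parameters:
answer = lambda: 42
add = lambda a, b: a + b
assert answer() == 42
assert add(2, 3) == 5
```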


This article has provided you with a brief introduction to the composition of language integrated queries. In part 2, which will appear within a month, numerous examples of LINQ queries are shown, covering technologies such as LINQ to Objects, LINQ to DataSets, and LINQ to XML.

Copyright © 2009 Cary Jensen All Rights Reserved

Friday, May 29, 2009

Keystroke Combinations in Delphi’s Code Editor

There are many advantages to being a Delphi developer. Not only has the language kept pace with improvements added to other languages, such as C#, but it has maintained a remarkable consistency over its many years, first as Turbo Pascal, and later as Delphi. For example, routines originally written for Turbo Pascal in the 1980s often compile in the latest version of Delphi with little or no changes. There are not many languages you can say that about.

Delphi developers have also benefited from Delphi’s strong IDE (integrated development environment). From Delphi 1 through RAD Studio 2009, Delphi’s IDE has provided developers with state-of-the-art features that support and improve the overall development experience.

Unfortunately, not all of the capabilities of Delphi’s IDE are well known. Consider this. Did you know that Delphi can record (Ctrl+Shift+R) and playback (Ctrl+Shift+P) a series of keystrokes? It’s my impression that at least half of Delphi developers do not know this. But it is a feature that has been available in Delphi since Delphi version 1.

Part of the problem is that the various keystrokes supported by Delphi’s IDE have been notoriously hard to find in Delphi’s help. Some of them have never been documented. For example, one of my favorite keystroke combinations is Ctrl+Spacebar, which invokes the Argument Value List drop down. Whenever you are entering the value of an expression, for example, the left side of an assignment statement or a parameter of a function, pressing Ctrl+Spacebar displays a list of the possible symbols that might satisfy the expression. This feature, which was added in Delphi 3, appeared in the Delphi 4 help files only. It wasn’t in Delphi 3’s help, and has been missing in action since Delphi 5.

Another interesting item to note is that, although Delphi’s IDE has undergone major revisions in the past few years with the introduction of the Galileo IDE, the keystrokes available in the IDE, and the editor specifically, have not changed much at all. In particular, nearly every keystroke combination that was available in Delphi 1 still works in Delphi 2009.

But there is still the problem of documentation. When I inspected the most recent help that ships with RAD Studio 2007, I found that just over half of the editor’s keystroke combinations appeared in the help. The others were nowhere to be found.

And this brings us to the essence of this article. I have attempted to collect here, in Table 1, every editor keystroke combination that I am aware of. These keystroke combinations are for the default key mapping, which is used by most Delphi developers. If you are using one of the other key mappings, some of these combinations will not work, but many will.

I also want to admit that most of this table's content was derived from Delphi's help files. In other words, while I compiled this table, I did not write most of the entries. Some entries appear exactly as they did in the help files, though I wrote some as well.

Furthermore, I edited many of the help file entries, either to make corrections or to simplify the description. I also removed entries that were in the help files that either didn’t work correctly, or appeared to not work at all. Finally, I didn’t include many keystrokes that are not code editor specific. For instance, I did not include debugging related keystrokes.

Permit me to list these main points again:

  1. These keystroke combinations are only for the code editor. I did not include debugger keystroke combinations or general IDE keystroke combinations

  2. These keystroke combinations are for the default keybinding. I made no attempt to include keystroke combinations for other keybindings

  3. Many of these descriptions are derived from some version of Delphi’s documentation

  4. I listed these combinations in alphabetical order. Yes, there are many other alternative orders that would have been useful as well, but I opted for an alphabetical order for simplicity

  5. I know that there must be some valid keystroke combinations that are missing (I specifically omitted those that did not appear to work). If you know of valid editor keystroke combinations that apply to the default keybinding and are missing, feel free to email them to me, and I will try to update this table periodically with your contributions

So here, for your consideration and enjoyment, is a zip file that contains a PDF of a nearly complete list of the keystrokes supported by Delphi’s editor. This list is presented in alphabetical order, by key. When two key or key combinations perform the same task, they both appear in the left column.

Cary Jensen's Delphi Editor Keystroke Table

Copyright (c) 2009 Cary Jensen. All Rights Reserved.

Sunday, May 3, 2009

An Advantage for Delphi Developers

Here's a trivia question: What was the name of Delphi during its original beta test?

The answer is Delphi (but you probably knew that already). But why did the Delphi development team pick such an odd name for their groundbreaking, component-based heir to their Pascal development tool? The answer is related to databases.

Delphi is the name of both the city and the temple in Greece where people would travel to speak to the Oracle. And although Delphi can work with just about any database you can think of, the point was that it was designed from the beginning to be a great environment for developing database applications.

It’s ironic, then, that while Delphi’s name alludes to ORACLE, one of the most popular SQL-based relational database management systems (RDBMSs), the foundation of database access in Delphi is navigational, not set oriented. Specifically, the original data access mechanism in Delphi was the Open Database Application Programming Interface, or ODAPI.

ODAPI, which would later be referred to simply as the Borland Database Engine, or BDE, was a direct outgrowth of the Paradox Engine. The Paradox file format, like many of its file-based cousins, is index based. In other words, database tables are treated as an ordered sequence of records (rows), whose order can be changed by changing which index is used to access the records. These indexes can also be used to filter records of a single table, as well as perform joins between tables having compatible indexes.

Data Access in Delphi

That Delphi’s database roots are well grounded in the navigational model is immediately obvious if you consider the TDataSet class, which is the base class for all data access components in Delphi. This class includes properties and methods for a variety of navigational operations. For example, for selecting indexes (IndexName), setting ranges (SetRange), finding records based on indexed values (FindKey, FindNearest), creating relational joins using indexes (MasterSource and MasterFields), and navigating records in index order (First, Next, Prior, MoveBy, and so forth).
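To see what these navigational operations amount to, here is a toy Python sketch (the class and method names are hypothetical, not the TDataSet or Advantage APIs) that navigates records through a sorted key index, in the spirit of FindKey, FindNearest, and SetRange.

```python
import bisect

class NavigationalTable:
    """Toy sketch of index-based navigation: records are viewed
    through a sorted index of key values (names are hypothetical)."""
    def __init__(self, records, key):
        self._records = sorted(records, key=key)
        self._keys = [key(r) for r in self._records]

    def find_key(self, value):
        """Exact match on the indexed key, or None (cf. FindKey)."""
        i = bisect.bisect_left(self._keys, value)
        if i < len(self._keys) and self._keys[i] == value:
            return self._records[i]
        return None

    def find_nearest(self, value):
        """First record whose key is >= value (cf. FindNearest)."""
        i = bisect.bisect_left(self._keys, value)
        return self._records[i] if i < len(self._records) else None

    def set_range(self, low, high):
        """All records whose keys fall within [low, high] (cf. SetRange)."""
        lo = bisect.bisect_left(self._keys, low)
        hi = bisect.bisect_right(self._keys, high)
        return self._records[lo:hi]

table = NavigationalTable(
    [{'custno': 30}, {'custno': 10}, {'custno': 20}],
    key=lambda r: r['custno'])
assert table.find_key(20) == {'custno': 20}
assert table.find_nearest(15) == {'custno': 20}
assert table.set_range(10, 20) == [{'custno': 10}, {'custno': 20}]
```

Because the index is kept sorted, each lookup is a binary search rather than a scan, which is the essence of why index-based navigation stays fast even over very large tables.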

While the benefits of SQL-based data access are well-known, navigational data access has its advantages as well. Specifically, the navigational model better suits the development of intuitive and user-friendly interfaces.

For example, with a navigational interface, it is possible to display all of the records from a database table (or a query result set), and permit the user to navigate these records freely. This navigation may even include incremental search (always an end user favorite). And, these features are available whether the underlying table has a couple of hundred records, or millions.

By comparison, nearly all set-based databases, such as MS SQL Server and ORACLE, are not designed to provide navigational access. As a result, searching for a record means refining the WHERE clause of a SQL query to reduce the result set to a manageable size. The idea of opening a result set with millions of records and permitting a user to freely browse it is unthinkable.

Here is where the Advantage Database Server really shines. Not only is it a high-performance, low maintenance database server that supports optimized SQL queries, its index-based architecture permits it to provide data access options normally only found in a file-based database (such as Paradox, dBase, or MS Access), while providing all the benefits of a transaction-based remote database server.

But there is more - a lot more. To begin with, database developers creating native Delphi applications have access to Advantage Delphi Components, a component set that implements the native TDataSet interface for access to ADS. (For Delphi developers using Delphi 7 and earlier, the Advantage TDataSet Descendant provides the same features.)

These components provide a near seamless replacement for BDE components, but go even further. For example, while the BDE supports filters, they are not optimized. With Advantage Delphi Components, so long as your tables have the appropriate indexes, filters are fully optimized. It is possible to set a filter on a 20 million record table and have the result in a fraction of a second.

For Delphi for .NET, as well as for Delphi Prism developers, Advantage offers the Advantage Data Provider for .NET. This ADO.NET provider not only supports all features defined by ADO.NET, it even supports optimized, server-side cursors. Specifically, while all .NET data providers support a forward-navigating, read-only DbDataReader descendant, the AdsExtendedReader class, which is also a DbDataReader descendant, supports bi-directional navigation, optimized server-side filters, read/write access, and pessimistic locking. You cannot find a better .NET data provider anywhere.

General Overview of the Advantage Database Server

The Advantage Database Server is a high-performance, relational database server that simultaneously supports optimized set-based SQL as well as blindingly fast, index-based navigational operations. Its low per-seat cost, ease of deployment, and very low maintenance requirements make it particularly well suited for vertical market applications, especially those where the deployed sites lack the IT infrastructure required to maintain typical database servers.

There is another reason why Advantage is so popular with vertical market developers. In addition to the Advantage Database Server, Sybase iAnywhere publishes the Advantage Local Server (ALS), a free, file-server based, five-user database that sports an interface identical to ADS, providing a seamless upgrade path to ADS. For those customers for whom cost is an issue, you can deploy your applications using ALS. Later, if the needs of the customers grow, or they want the stability of a transaction supporting, remote database server, migrating can be as simple as installing ADS (and this installation takes less than five minutes in most cases).

Advantage is a Full-Featured Database

While the benefits of ADS mentioned so far in this article provide a compelling case for building database applications with ADS and Delphi, there is much more to the story. The remainder of this article provides you with a brief overview of the many features that make ADS a strong contender for just about any database development environment. All of these features, with the exception of replication, online backup, and support for class 4 Java drivers, are also supported by ALS.

At the time of this writing, the current version of ADS is version 9.1. While some of the features defined below have been available since version 6.0, many have been either introduced or enhanced in later versions.

Advantage Comes in Many Flavors

There is not just one Advantage Database Server; there are several. For Linux and Windows servers, there are both 32-bit and 64-bit versions of ADS. If you are running Novell, there is an Advantage Database Server NLM (NetWare Loadable Module). The 64-bit versions of ADS were introduced in ADS 9.0.

An Easy to Use Data Architect

While you can control all aspects of your database design and management through Advantage SQL (or even the Advantage Client Engine), most of the time, you will want to design and configure your Advantage databases and data dictionaries using the Advantage Data Architect. The Advantage Data Architect is shown in Figure 1.

Figure 1: The Advantage Data Architect

The Advantage Data Architect provides you with a rich graphical interface for working with Advantage objects. For example, you can use the Advantage Data Architect to create data dictionaries and database tables, create and configure users and groups, define stored procedures, create views, write user defined functions, and execute queries against your database objects.

The Advantage Data Architect also comes with an invaluable SQL debugger. This debugger helps you step through your SQL queries, SQL stored procedures, SQL triggers, and user defined functions. It supports inspection of local variables, multiple breakpoints, live editing of your SQL queries, and more.

A Multitude of Data Access Options

Advantage supports one of the widest arrays of data access mechanisms in the industry. In addition to the Advantage Delphi Components, TDataSet Descendant, Advantage Data Provider for .NET, and Java class 4 drivers already mentioned, Sybase iAnywhere publishes a range of additional drivers. These include an ODBC driver, an OLE DB Provider, a PHP driver, a DBD (Perl) driver, a Clipper Replaceable Database Driver (RDD), and a Crystal Reports driver.

In addition, some third party vendors provide their own Advantage drivers. For example, Alaska Software publishes the ADS-Database Engine for their Xbase++ language, an object-oriented language based on Clipper.

Data Dictionaries with Database Wide Security

Advantage first introduced data dictionary support in 2000 with the release of ADS 6.0. Data dictionaries provide a number of valuable features, some of which are associated with security. For example, with data dictionaries you can define users and groups, which permit you to specify which database objects (including tables, fields, stored procedures, views, and so forth) individual users or groups of users have access to. This access can be full access, read/write access, read-only access, or no access.

Furthermore, data dictionaries enable table encryption to be applied across the entire database, without having to encrypt and decrypt tables on a table-by-table basis. In addition, encryption can be extended to network and Internet communications. Specifically, Advantage data can be transparently encrypted using 160-bit encryption before being moved across your networks, or even the Internet. All you need to do is configure your data dictionary for the encryption option you want. There is no need to manually perform encryption and decryption in your client applications.

Stored Procedures, User Defined Functions, and Views

Stored procedures and user defined functions are routines that you create for your data dictionaries that can be used by any applications that access your Advantage data. For stored procedures, these routines can be written using DLLs (a Delphi project template is supplied), COM objects, or .NET managed assemblies. Stored procedures can also be written using SQL Scripts (SQL persistent stored modules, or PSMs). SQL Scripts are the only option for writing user defined functions (UDFs).

Views are also based on SQL. However, views are SQL SELECT statements that you define in your data dictionary, and then call, as though they were a database table, from your client applications.

Support for Constraints

Advantage supports a range of constraint options. The most prominent of these is referential integrity (RI) constraints. Though they should be used judiciously, RI constraints provide you with a mechanism to ensure the relational integrity between two or more related tables.

Advantage also provides both table-level and field-level constraints. These constraints define rules that ensure that data entered into your tables meets criteria that you specify, without your having to embody these rules in each of your client applications.

Online Full and Incremental Backup

In version 8.0 Advantage introduced online backup. Online backup permits you to create a copy of your data, which you can use in the case of a catastrophic failure of your server.

Advantage supports two modes of online backup. A full backup creates a complete backup of your data each time it is run. An incremental backup, by comparison, maintains a single backup, but updates it with the incremental changes detected since the previous backup was completed.

Database Replication

Advantage also introduced database replication in ADS 8.0. With replication, changes to your database are propagated to one or more servers. Replication in Advantage uses a publish/subscribe model. As a result, replication can be performed between two or more Advantage servers using either unidirectional or bi-directional modes.

Replication in ADS 8.0 was performed on a whole record basis. In ADS 9.0, it became possible to filter replication in order to replicate only certain fields of a table.

Triggers and Notifications

Triggers, which were added in ADS 7.0, and notifications, which were added in ADS 9.0, provide you with the option to react to changes that occur in your databases programmatically. Triggers, which, like stored procedures, can be written as DLLs, COM objects, .NET managed code, or SQL scripts, are routines that execute in response to a change occurring to a record in an underlying table, such as an insertion, modification, or deletion. In addition, triggers can be defined to trigger before the operation (permitting you to prevent it), during the operation (permitting you to implement it), or after the operation (allowing you to respond to it).

Notifications provide a mechanism for communicating to a client that something has occurred in the database. For example, notifications can be used to inform a client application that data in a critical table has changed. The client application can then use this information to refresh its view of that table, providing the end user with the most current data. Or, a notification might be used to signal a client that a record has been added to a special table created for the purpose of communicating messages from the system administrator to the end users. The client can then read the latest message and display it within its interface.


The Advantage Database Server stands alone in the world of relational database servers (RDBMSs). It not only supports the navigational model, but also provides optimized support for SQL operations. This navigational support makes it a perfect match for Delphi’s data access model. This, combined with its advanced features, very low maintenance, high performance, and ease of deployment, makes it an ideal database server for a wide range of applications.
Copyright (c) 2009 Cary Jensen. All Rights Reserved.