Sixty Plus FireDAC Sample Apps With Source To Make Accessing Databases Easy


FireDAC is a unique set of Universal Data Access Components for developing multi-device database applications for Delphi and C++Builder. With its powerful common architecture, FireDAC enables native high-speed direct access from Delphi to InterBase, SQLite, MySQL, SQL Server, Oracle, PostgreSQL, IBM DB2, SQL Anywhere, Access, Firebird, Informix and more.

Based on 10 years of experience writing native drivers for database back-ends, FireDAC was built as a powerful access layer that supports all the features needed to build real-world high-load applications. FireDAC provides a common API for accessing different database back-ends, without giving up access to unique database-specific features, or compromising on performance.

It enables the developer to concentrate on developing the application, not on the complexities of database interaction. It provides a modern feature-rich set of components to address all major RDBMS environments. It uses one set of components to address all supported RDBMS types. It reduces the Total Cost of Ownership by providing:

  • Less initial cost, compared to separate expensive libraries.
  • Less time required to learn and become familiar with the unique powerful libraries.
  • Straight-ahead application design and development when targeting different databases.

Here are the samples:

FireDAC.AutoincFields Sample

FireDAC.ConnectionDefs Sample

FireDAC.DAptLayerCommands Sample

FireDAC.DAptLayerGettingStarted Sample

FireDAC.DBX2FDMigration Demo Sample

FireDAC.DLLSharing Sample

FireDAC.Firebird Sample

FireDAC.GetFieldNames Sample

FireDAC.IFDPhysCommand.Async Sample

FireDAC.IFDPhysCommand.Batch Sample

FireDAC.IFDPhysCommand.EscapeFunctions Sample

FireDAC.IFDPhysCommand.FetchTables Sample

FireDAC.IFDPhysCommand.Macros Sample

FireDAC.IFDPhysConnection.CreateConnection Sample

FireDAC.IFDPhysConnection.Pooling Sample

FireDAC.IFDPhysConnection.Transactions Sample

FireDAC.InfoReport Sample

FireDAC.InterBase AdminUtility Sample

FireDAC.InterBase Arrays Sample

FireDAC.InterBase Sample

FireDAC.MappingColumns Sample

FireDAC.MongoDB Explore Sample

FireDAC.MongoDB.ListView Sample

FireDAC.MoniLayerClients Sample

FireDAC.MSAccess Sample

FireDAC.MSSQL Sample

FireDAC.MySQL Sample

FireDAC.Oracle CQN Sample

FireDAC.OraclStoredProc Sample

FireDAC.Pooling Sample

FireDAC.PostgreSQL Arrays Sample

FireDAC.PostgreSQL Ranges Sample

FireDAC.SchemaAdapterMemTable Sample

FireDAC.SQLite Desktop Sample

FireDAC.SQLite Encryption Sample

FireDAC.SQLite Sample

FireDAC.SQLiteIniFile Demo Sample

FireDAC.TFDBatchMove Sample

FireDAC.TFDEventAlerter Sample

FireDAC.TFDGUIxLoginDialog Sample

FireDAC.TFDLocalSQL InMemDB Sample

FireDAC.TFDLocalSQL MegaFMX Sample

FireDAC.TFDLocalSQL.xDBMS Sample

FireDAC.TFDMemTable.CachedUpdates Sample

FireDAC.TFDMemTable.CloneCursor Sample

FireDAC.TFDMemTable.IncFetchingMSSQL Sample

FireDAC.TFDMemTable.Main Sample

FireDAC.TFDMemTable.MasterDetail Sample

FireDAC.TFDMemTable.NestedDataSet Sample

FireDAC.TFDQuery.Aggregates Sample

FireDAC.TFDQuery.ArrayDML Sample

FireDAC.TFDQuery.Async Sample

FireDAC.TFDQuery.Batch Sample

FireDAC.TFDQuery.BatchErrorHandling Sample

FireDAC.TFDQuery.Blobs Sample

FireDAC.TFDQuery.BlobStreams Sample

FireDAC.TFDQuery.CentralizedCachedUpdates Sample

FireDAC.TFDQuery.Filter Sample

FireDAC.TFDQuery.Indices Sample

FireDAC.TFDQuery.Macros Sample

FireDAC.TFDQuery.MasterDetail Sample

FireDAC.TFDQuery.OnUpdateRecord Sample

FireDAC.TFDTableAdapter.Main Sample

FireDAC.Transactions Sample

FireDAC.VSE Demo Sample

ref: http://community.embarcadero.com/blogs?view=entry&id=8761

Synapse Based SSH Client

Many times I have needed a way to let my Delphi/FPC applications connect to an SSH server, execute some commands, and get their results. Now I'm publishing a simple class, based on Synapse's TTelnetSend class, that does exactly what I needed.

Required ingredients

First of all, you’ll need to grab a copy of the latest version of Synapse, now, to connect to an SSH server, the connection must be established by using the SSL protocol, and Synapse allows that kind of connections with plugins that allow filtering the data through OpenSSL, CryptLib and StreamSecII. Here, I’ll use CryptLib, so you’ll have to get this compiled version of cl32.dll for Windows, if you need the library compiled for Linux search for it in your repository (or use Google).

Now configure your compiler's search paths to find both the Synapse sources and cryptlib.pas, the wrapper for cl32.dll.

Introducing TSSHClient class

This is a simple class that lets you connect to an SSH server in an object-oriented way; its internal parts were found in the Synapse newsgroups, then tested and arranged into a class.

unit sshclient;
interface
uses
  tlntsend, ssl_openssl, ssl_openssl_lib, ssl_cryptlib;
type
  TSSHClient = class
  private
    FTelnetSend: TTelnetSend;
  public
    constructor Create(AHost, APort, AUser, APass: string);
    destructor Destroy; override;
    procedure SendCommand(ACommand: string);
    procedure LogOut;
    function ReceiveData: string;
    function LogIn: Boolean;
  end;
implementation
{ TSSHClient }
constructor TSSHClient.Create(AHost, APort, AUser, APass: string);
begin
  FTelnetSend := TTelnetSend.Create;
  FTelnetSend.TargetHost := AHost;
  FTelnetSend.TargetPort := APort;
  FTelnetSend.UserName := AUser;
  FTelnetSend.Password := APass;
end;
destructor TSSHClient.Destroy;
begin
  FTelnetSend.Free;
  inherited;
end;
function TSSHClient.LogIn: Boolean;
begin
  Result := FTelnetSend.SSHLogin;
end;
procedure TSSHClient.LogOut;
begin
  FTelnetSend.Logout;
end;
function TSSHClient.ReceiveData: string;
var
  lPos: Integer;
begin
  Result := '';
  lPos := 1;
  while FTelnetSend.Sock.CanRead(1000) or (FTelnetSend.Sock.WaitingData > 0) do
  begin
    FTelnetSend.Sock.RecvPacket(1000);
    Result := Result + Copy(FTelnetSend.SessionLog, lPos, Length(FTelnetSend.SessionLog));
    lPos := Length(FTelnetSend.SessionLog)+1;
  end;
end;
procedure TSSHClient.SendCommand(ACommand: string);
begin
  FTelnetSend.Send(ACommand + #13);
end;
end.

A sample application

This is an example of how to execute a “df -h” command on a remote SSH server, inspired by a question on StackOverflow.

The example just connects to a server, executes the command and captures its output.

program TestSSHClient;
{$APPTYPE CONSOLE}
uses
  sshclient;
var
  lSSh: TSSHClient;
begin
  lSSh := TSSHClient.Create('[TARGET_HOST_OR_IP_ADDRESS]','[PORT]', '[USER]', '[PASSWORD]');
  if lSSh.LogIn then
  begin
    Writeln('Connected!.');
    (* Get welcome message *)
    Writeln(lSSh.ReceiveData);
    (* Send command *)
    lSSh.SendCommand('df -h');
    (* Receive results *)
    Writeln(lSSh.ReceiveData);
    lSSh.LogOut;
    Writeln('Logged out.');
  end
  else
    Writeln('Can''t connect.');
  lSSh.Free;
end.                           

Replace the placeholders between ‘[‘ and ‘]’ with real values and test it.

ref : http://leonardorame.blogspot.com/2010/01/synapse-based-ssh-client.html

TDocVariant custom variant type

With revision 1.18 of the framework, we just introduced two new custom types of variants:

  • TDocVariant kind of variant;
  • TBSONVariant kind of variant.

The second custom type (which handles MongoDB-specific extensions – like ObjectID or other specific types like dates or binary) will be presented later, when dealing with MongoDB support in mORMot, together with the BSON kind of content. BSON / MongoDB support is implemented in the SynMongoDB.pas unit.

We will now focus on TDocVariant itself, which is a generic container of JSON-like objects or arrays.
This custom variant type is implemented in the SynCommons.pas unit, so it is ready to be used anywhere in your code, even without any link to the mORMot ORM kernel, or MongoDB.

TDocVariant documents

TDocVariant implements a custom variant type which can be used to store any JSON/BSON document-based content, i.e. either:

  • Name/value pairs, for object-oriented documents;
  • An array of values (including nested documents), for array-oriented documents;
  • Any combination of the two, by nesting TDocVariant instances.

Here are the main features of this custom variant type:

  • DOM approach of any object or array documents;
  • Perfect storage for dynamic value-objects content, with a schema-less approach (as you may be used to in scripting languages like Python or JavaScript);
  • Allow nested documents, with no depth limitation but the available memory;
  • Assignment can be either per-value (default, safest but slower when containing a lot of nested data), or per-reference (immediate reference-counted assignment);
  • Very fast JSON serialization / un-serialization with support of MongoDB-like extended syntax;
  • Access to properties in code, via late-binding (with almost no speed penalty, thanks to our VCL hack, as already detailed);
  • Direct access to the internal variant names and values arrays from code, by trans-typing into a TDocVariantData record;
  • Instance life-time is managed by the compiler (like any other variant type), without the need to use interfaces or explicit try..finally blocks;
  • Optimized to use as little memory and CPU resources as possible (in contrast to most other libraries, it does not allocate one class instance per node, but relies on pre-allocated arrays);
  • Open to extension of any content storage – for instance, it will perfectly integrate with BSON serialization and custom MongoDB types (ObjectID, RegEx…), to be used in conjunction with MongoDB servers;
  • Perfectly integrated with our Dynamic array wrapper and its JSON serialization as with the record serialization;
  • Designed to work with our mORMot ORM: any TSQLRecord instance containing such variant custom types as published properties will be recognized by the ORM core, and work as expected with any database back-end (storing the content as JSON in a TEXT column);
  • Designed to work with our mORMot SOA: any interface-based service is able to consume or publish such kind of content, as variant kind of parameters;
  • Fully integrated with the Delphi IDE: any variant instance will be displayed as JSON in the IDE debugger, making it very convenient to work with.

To create instances of such variant, you can use some easy-to-remember functions:

  • _Obj() _ObjFast() global functions to create a variant object document;
  • _Arr() _ArrFast() global functions to create a variant array document;
  • _Json() _JsonFast() _JsonFmt() _JsonFastFmt() global functions to create any variant object or array document from JSON, supplied either with standard or MongoDB-extended syntax.

Variant object documents

With _Obj(), an object variant instance will be initialized with data supplied two by two, as Name,Value pairs, e.g.

var V1,V2: variant; // stored as any variant
 ...
  V1 := _Obj(['name','John','year',1972]);
  V2 := _Obj(['name','John','doc',_Obj(['one',1,'two',2.5])]); // with nested objects

Then you can convert those objects into JSON, by two means:

  • Using the VariantSaveJson() function, which directly returns the UTF-8 content;
  • Or by trans-typing the variant instance into a string (this will be slower, but is possible).
 writeln(VariantSaveJson(V1)); // explicit conversion into RawUTF8
 writeln(V1);                  // implicit conversion from variant into string
 // both commands will write '{"name":"John","year":1972}'
 writeln(VariantSaveJson(V2)); // explicit conversion into RawUTF8
 writeln(V2);                  // implicit conversion from variant into string
 // both commands will write '{"name":"John","doc":{"one":1,"two":2.5}}'

As a consequence, the Delphi IDE debugger is able to display such variant values as their JSON representation.
That is, V1 will be displayed as '{"name":"John","year":1972}' in the IDE debugger Watch List window, or in the Evaluate/Modify (F7) expression tool.
This is pretty convenient, and much more user friendly than any class-based solution (which requires the installation of a specific design-time package in the IDE).

You can access the object properties via late-binding, with any depth of nested objects, in your code:

 writeln('name=',V1.name,' year=',V1.year);
 // will write 'name=John year=1972'
 writeln('name=',V2.name,' doc.one=',V2.doc.one,' doc.two=',V2.doc.two);
 // will write 'name=John doc.one=1 doc.two=2.5'
 V1.name := 'Mark';       // overwrite a property value
 writeln(V1.name);        // will write 'Mark'
 V1.age := 12;            // add a property to the object
 writeln(V1.age);         // will write '12'

Note that the property names will be evaluated at runtime only, not at compile time.
For instance, if you write V1.nome instead of V1.name, there will be no error at compilation, but an EDocVariant exception will be raised at execution (unless you set the dvoReturnNullForUnknownProperty option in _Obj/_Arr/_Json/_JsonFmt, which will return a null variant for such undefined properties).
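
As a minimal sketch of that option (the property names here are illustrative, and SynCommons.pas is assumed to be in the uses clause):

 var V: variant;
  ...
  V := _Obj(['name','John'],[dvoReturnNullForUnknownProperty]);
  if VarIsNull(V.nome) then         // no EDocVariant is raised here
    writeln('nome is not defined'); // a null variant is returned instead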

In addition to the property names, some pseudo-methods are available for such object variant instances:

  writeln(V1._Count); // will write 3 i.e. the number of name/value pairs in the object document
  writeln(V1._Kind);  // will write 1 i.e. ord(sdkObject)
  for i := 0 to V2._Count-1 do
    writeln(V2.Name(i),'=',V2.Value(i));
  // will write in the console:
  //  name=John
  //  doc={"one":1,"two":2.5}
  if V1.Exists('year') then
    writeln(V1.year);

You may also trans-type your variant instance into a TDocVariantData record, and directly access its internals.
For instance:

 TDocVariantData(V1).AddValue('comment','Nice guy');
 with TDocVariantData(V1) do          // direct transtyping
   if Kind=sdkObject then             // direct access to the TDocVariantData Kind field
     for i := 0 to Count-1 do         // direct access to the Count: integer field
       writeln(Names[i],'=',Values[i]); // direct access to the internal storage arrays

By definition, trans-typing via a TDocVariantData record is slightly faster than using late-binding.
But you must ensure that the variant instance is really a TDocVariant kind of data before transtyping e.g. by calling DocVariantType.IsOfType(aVariant).

Variant array documents

With _Arr(), an array variant instance will be initialized with data supplied as a list of Value1,Value2,…, e.g.

var V1,V2: variant; // stored as any variant
 ...
  V1 := _Arr(['John','Mark','Luke']);
  V2 := _Obj(['name','John','array',_Arr(['one','two',2.5])]); // as nested array

Then you can convert those objects into JSON, by two means:

  • Using the VariantSaveJson() function, which directly returns the UTF-8 content;
  • Or by trans-typing the variant instance into a string (this will be slower, but is possible).
 writeln(VariantSaveJson(V1));
 writeln(V1);  // implicit conversion from variant into string
 // both commands will write '["John","Mark","Luke"]'
 writeln(VariantSaveJson(V2));
 writeln(V2);  // implicit conversion from variant into string
 // both commands will write '{"name":"John","array":["one","two",2.5]}'

As with any object document, the Delphi IDE debugger is able to display such array variant values as their JSON representation.

Late-binding is also available, with a special set of pseudo-methods:

  writeln(V1._Count); // will write 3 i.e. the number of items in the array document
  writeln(V1._Kind);  // will write 2 i.e. ord(sdkArray)
  for i := 0 to V1._Count-1 do
    writeln(V1.Value(i),':',V1._(i));
  // will write in the console:
  //  John:John
  //  Mark:Mark
  //  Luke:Luke
  if V1.Exists('John') then
    writeln('John found in array');

Of course, trans-typing into a TDocVariantData record is possible, and will be slightly faster than using late-binding.

Create variant object or array documents from JSON

With _Json() or _JsonFmt(), either an object or array variant instance will be initialized with data supplied as JSON, e.g.

var V1,V2,V3,V4: variant; // stored as any variant
 ...
  V1 := _Json('{"name":"john","year":1982}'); // strict JSON syntax
  V2 := _Json('{name:"john",year:1982}');     // with MongoDB extended syntax for names
  V3 := _Json('{"name":?,"year":?}',[],['john',1982]);
  V4 := _JsonFmt('{%:?,%:?}',['name','year'],['john',1982]);
  writeln(VariantSaveJSON(V1));
  writeln(VariantSaveJSON(V2));
  writeln(VariantSaveJSON(V3));
  // all commands will write '{"name":"john","year":1982}'

Of course, you can nest objects or arrays as parameters to the _JsonFmt() function.

The supplied JSON can be either in strict JSON syntax, or with the MongoDB extended syntax, i.e. with unquoted property names.
It can be pretty convenient, and less error-prone, to omit the quotes around the property names when typing JSON in your Delphi code.

Note that TDocVariant implements an open interface for adding any custom extensions to JSON: for instance, if the SynMongoDB.pas unit is used in your application, you will be able to create any MongoDB-specific types in your JSON, like ObjectID(), new Date() or even /regex/ patterns.
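
For instance, here is a hedged sketch of such extended content (it assumes SynMongoDB.pas is in the uses clause, so that these JSON extensions are registered):

 var V: variant;
  ...
  // ObjectID(), new Date() and /regex/ are parsed as custom variant values
  V := _JsonFast('{id:ObjectID(),created:new Date(),name:/john/i}');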

As with any object or array document, the Delphi IDE debugger is able to display such variant values as their JSON representation.

Per-value or per-reference

By default, the variant instance created by _Obj() _Arr() _Json() _JsonFmt() will use a copy-by-value pattern.
It means that when an instance is assigned to another variable, a new variant document will be created, and all internal values will be copied. Just like a record type.

This will imply that if you modify any item of the copied variable, it won’t change the original variable:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972]);
 V2 := V1;                // create a new variant, and copy all values
 V2.name := 'James';      // modifies V2.name, but not V1.name
 writeln(V1.name,' and ',V2.name);
 // will write 'John and James'

As a result, your code will be perfectly safe to work with, since V1 and V2 will be uncoupled.

But one drawback is that passing such a value may be pretty slow, for instance, when you nest objects:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972]);
 V2 := _Arr(['John','Mark','Luke']);
 V1.names := V2; // here the whole V2 array will be re-allocated into V1.names

Such a behavior could be pretty time and resource consuming, in case of a huge document.

All the _Obj() _Arr() _Json() _JsonFmt() functions have an optional TDocVariantOptions parameter, which allows you to change the behavior of the created TDocVariant instance, in particular by setting dvoValueCopiedByReference.

This particular option will set the copy-by-reference pattern:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972],[dvoValueCopiedByReference]);
 V2 := V1;             // creates a reference to the V1 instance
 V2.name := 'James';   // modifies V2.name, but also V1.name
 writeln(V1.name,' and ',V2.name);
 // will write 'James and James'

You may think this behavior is somewhat weird for a variant type. But if you forget about per-value objects and consider those TDocVariant types as Delphi class instances (which are per-reference types), without the need for a fixed schema nor manual memory handling, it will probably start to make sense.

Note that a set of global functions have been defined, which allow direct creation of documents with per-reference instance lifetime, named _ObjFast() _ArrFast() _JsonFast() _JsonFastFmt().
Those are just wrappers around the corresponding _Obj() _Arr() _Json() _JsonFmt() functions, with the JSON_OPTIONS[true] constant below passed as the options parameter:

const
  /// some convenient TDocVariant options
  // - JSON_OPTIONS[false] is the default for the _Json() and _JsonFmt() functions
  // - JSON_OPTIONS[true] is used by the _JsonFast() and _JsonFastFmt() functions
  JSON_OPTIONS: array[Boolean] of TDocVariantOptions = (
    [dvoReturnNullForUnknownProperty],
    [dvoReturnNullForUnknownProperty,dvoValueCopiedByReference]);

When working with complex documents, e.g. with BSON / MongoDB documents, almost all content will be created in “fast” per-reference mode.

Advanced TDocVariant process

Object or array document creation options

As stated above, a TDocVariantOptions parameter lets you define the behavior of a TDocVariant custom type for a given instance.
Please refer to the documentation of this set of options to find out the available settings. Some are related to the memory model, others to the case-sensitivity of the property names, others to the behavior expected for non-existing properties, and so on…

Note that this setting is local to the given variant instance.

In fact, TDocVariant does not force you to stick to one memory model nor a set of global options, but you can use the best pattern depending on your exact process.
You can even mix the options – i.e. include some objects as properties in an object created with other options – but in this case the initial options of the nested object will remain, so you should use this feature with caution.

You can use the _Unique() global function to force a variant instance to have a unique set of options, with all nested documents becoming by-value, or _UniqueFast() for all nested documents to become by-reference.

// assuming V1='{"name":"James","year":1972}' created by-reference
  _Unique(V1);             // change options of V1 to be by-value
  V2 := V1;                // creates a full copy of the V1 instance
  V2.name := 'John';       // modifies V2.name, but not V1.name
  writeln(V1.name);        // write 'James'
  writeln(V2.name);        // write 'John'
  V1 := _Arr(['root',V2]); // created as by-value by default, as V2 was
  writeln(V1._Count);      // write 2
  _UniqueFast(V1);         // change options of V1 to be by-reference
  V2 := V1;
  V1._(1).name := 'Jim';
  writeln(V1);
  writeln(V2);
  // both commands will write '["root",{"name":"Jim","year":1972}]'

The easiest is to stick to one set of options in your code, i.e.:

  • Either using the _*() global functions if your business code does send some TDocVariant instances to any other part of your logic, for further storage: in this case, the by-value pattern does make sense;
  • Or using the _*Fast() global functions if the TDocVariant instances are local to a small part of your code, e.g. used as schema-less Data Transfer Objects (DTO).

In all cases, be aware that, as with any class type, the const, var and out specifiers of method parameters do not apply to the TDocVariant value, but to its reference.

Integration with other mORMot units

In fact, whenever a schema-less storage structure is needed, you may use a TDocVariant instance instead of strongly-typed class or record types:

  • Client-Server ORM will support TDocVariant in any of the TSQLRecord variant published properties;
  • Interface-based services will support TDocVariant as variant parameters of any method, which makes them perfect DTOs;
  • Since JSON support is implemented with any TDocVariant value from the ground up, it makes a perfect fit for working with AJAX clients, in a script-like approach;
  • If you use our SynMongoDB.pas unit to access a MongoDB server, TDocVariant will be the native storage to create or access BSON arrays or objects documents;
  • Cross-cutting features (like logging or record / dynamic array enhancements) will also benefit from this TDocVariant custom type.

We are pretty convinced that once you start playing with TDocVariant, you won't be able to live without it any more.
It introduces the full power of late-binding and schema-less patterns to your application, which can be pretty useful for prototyping or in Agile development.
You do not need a scripting engine like Python or JavaScript to have this feature, if you need it.

ref: http://blog.synopse.info/post/2014/02/25/TDocVariant-custom-variant-type

Connecting to legacy databases and publishing a RESTful interface to it

Most systems, especially in the DDD area, need to integrate with a legacy system. In our case we had to communicate with a Firebird 1.5 database.

The first step was to define our Data Transfer Objects:

type
  TLegacyID = type RAWUTF8;
  TLegacyAccount = class(TSynPersistent)
  private
    fLegacyID: TLegacyID;
    fDateModified: TDateTime;
    fUserCreated: RAWUTF8;
    fDateCreated: TDateTime;
    fName: RAWUTF8;
    fisActive: Boolean;
  public
    constructor Create; overload; override;
    constructor Create( aID : TLegacyID; aName : RAWUTF8; aIsActive : Boolean;
      aDateCreated, aDateModified : TDateTime; aUserCreated : RAWUTF8 ); overload;
  published
    property ID : TLegacyID read fLegacyID write fLegacyID;
    property Name : RAWUTF8 read fName write fName;
    property isActive : Boolean read fisActive write fisActive;
    property DateCreated : TDateTime read fDateCreated write fDateCreated;
    property DateModified : TDateTime read fDateModified write fDateModified;
    property UserCreated : RAWUTF8 read fUserCreated write fUserCreated;
  end;
  TLegacySupplier = class(TLegacyAccount)
  end;

Here we declare a unique type to identify our legacy IDs (strings). We also create a basic mapping of our data layout, with a customized constructor for ease of creation. Other methods to handle copies and assignments can be added later.

The next step was to define our service:

type
  ILegacyStockQuery = interface(IInvokable)
    ['{2BDC9F78-B9C2-4621-A557-F87F02AC0581}']
    function GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
  end;

We’ll publish a service named LegacyStockQuery with a single method, GetSupplier. This method will return a JSON-encoded representation of our TLegacySupplier, ready to be consumed by a client.

To implement it:

type
  TLegacyStockQuery = class(TInterfacedObject, ILegacyStockQuery)
  private
    fDbConnection : TSQLDBConnectionProperties;
  public
    constructor Create( const aProps: TSQLDBConnectionProperties ); overload;
    function GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
    property DbConnection : TSQLDBConnectionProperties read fDbConnection write fDbConnection;
  end;

We keep a copy of our database connection properties local to our instance to ensure thread safety.

{ TLegacyStockQuery }
constructor TLegacyStockQuery.Create(const aProps: TSQLDBConnectionProperties);
begin
  fDbConnection := aProps;
  inherited Create;
end;
function TLegacyStockQuery.GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
var
  Res : ISQLDBRows;
begin
  Result := cqrsNotFound;
  Res := fDbConnection.Execute( 'select * from SUPPLIERS where SUPPLIER_ID=? ', [aID] );
  if Res.Step then begin
    Result := cqrsSuccess;
    Supplier.ID := Res['SUPPLIER_ID'];
    Supplier.Name := Res['SUPPLIER_NAME'];
    Supplier.isActive := Res['ACTIVE_FLAG'] = 'Y';
    Supplier.DateCreated := Res['DATE_CREATED'];
    Supplier.DateModified := Res['DATE_MODIFIED'];
    Supplier.UserCreated := Res['USER_CREATED'];
  end;
end;

Execute the query against the legacy database and populate the DTO. Using the ISQLDBRows interface means less object maintenance and cleaner code.

To kick the whole thing off we have:

procedure StartServer( aDbURI : RawUTF8 );
var
  aDbConnection : TSQLDBConnectionProperties;
  aStockServer  : TSQLRestServerFullMemory;
  aHTTPServer   : TSQLHttpServer;
begin
  aDbConnection := TSQLDBZEOSConnectionProperties.Create( aDbURI, '', '', '' );
  aStockServer  := TSQLRestServerFullMemory.Create([]);
  try
    aStockServer.ServiceDefine( TLegacyStockQuery.Create( aDbConnection ), [ILegacyStockQuery]);
    aHTTPServer := TSQLHttpServer.Create( DEFAULT_HTTP_PORT, [aStockServer] );
    try
      aHttpServer.AccessControlAllowOrigin := '*'; // allow cross-site AJAX queries
      writeln('Background server is running.'#10);
      writeln('Cross-Platform wrappers are available at ',
          DEFAULT_HTTP_PORT ,'/', DEFAULT_SERVER_ROOT );
      write('Press [Enter] to close the server.');
      readln;
    finally
      aHTTPServer.Free;
    end;
  finally
    aStockServer.Free;
  end;
end;

It would be better to use dependency injection here, but we’ll get into that later.

When invoking it, we used a Zeos DB URI, like this one:
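
For illustration only – the protocol name, host, port, path and credentials below are hypothetical placeholders, not the values actually used:

  // hypothetical Firebird 1.5 ZDBC URI; adjust it to your own server and database file
  StartServer('zdbc:firebird-1.5://192.168.1.10:3050/C:\data\legacy.fdb');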

Tips!

Remember to register your Object array types with

TJSONSerializer.RegisterObjArrayForJSON(TypeInfo(<ArrayType>),<ObjType>);

e.g.

TJSONSerializer.RegisterObjArrayForJSON(TypeInfo(TLegacySupplierObjArray),TLegacySupplier);

ref: https://tamingthemormot.wordpress.com/2015/07/14/connecting-to-legacy-databases/

External database speed improvements

Some major speed improvements have been made to our SynDB* units, and to how they are used within the mORMot persistence layer.
In some cases, the result is an amazing speed increase.

Here are some of the optimizations that took place in the source code trunk:

Overall, I observed from x2 to x10 performance boosts with simple Add() operations, using ODBC, OleDB and direct Oracle access, when compared to previous benchmarks (which were already impressive).
BATCH mode performance is less impacted, since it already by-passed some of those limitations, but even in this operation mode there are some benefits (especially with ODBC and OleDB).

Here are some results, directly generated by the supplied “15 – External DB performance” sample.

Insertion speed

Here we test insertion of some records, for most of our supplied engines.
We did the test with the UNIK conditional undefined, i.e. with no index on the Name field.

A Core i7 notebook was used as the hardware platform.
The Oracle 11g database is remotely accessed over a corporate network, so latency and bandwidth are not optimal.
The hard drive is an SSD this time – so we will see how it affects the results.

Inserts per second:

  Engine                    Direct    Batch     Trans     Batch Trans
  SQLite3 (file full)       501       523       90388     110869
  SQLite3 (file off)        911       891       95884     117376
  SQLite3 (mem)             81870     102614    96612     125190
  TObjectList (static)      281848    409836    279579    412813
  TObjectList (virtual)     288234    417257    286188    398851
  SQLite3 (ext file full)   548       557       99681     127424
  SQLite3 (ext file off)    952       868       70950     126627
  SQLite3 (ext mem)         72697     91617     105674    121368
  Oracle                    518       77155     1024      62601
  ODBC Oracle               512       509       1432      1019
  Jet                       4159      4441      4920      4926

Performance gain is impressive, especially for “ODBC Oracle” and also “OleDB Jet”.
Since Jet/MSAccess is a local engine, it is faster than Oracle for single-record retrieval – it does not suffer from the network latency. But it is also faster than SQLite3 at insertion, due to a multi-threaded design – which is perhaps less ACID and less proven.
Note that this hardware configuration runs on an SSD, so even the “SQLite3 (file full)” configuration is very much boosted – about 3 times faster.
Our direct Oracle access classes achieve more than 77,000 inserts per second in BATCH mode (using the Array Binding feature).
Direct TObjectList in-memory engine reaches amazing speed, when used in BATCH mode – more than 400,000 inserts per second!

Read speed

Reads per second:

  Engine                    By one    All Virtual    All Direct
  SQLite3 (file full)       26777     429331         443773
  SQLite3 (file off)        26933     427423         433463
  SQLite3 (mem)             122016    447227         427094
  TObjectList (static)      298400    715717         711035
  TObjectList (virtual)     301041    241289         700574
  SQLite3 (ext file full)   135413    232385         432189
  SQLite3 (ext file off)    133571    167420         334179
  SQLite3 (ext mem)         131877    202839         340136
  Oracle                    1289      63473          90184
  ODBC Oracle               1156      35029          39485
  Jet                       2413      127772         186164

Reading speed was also increased. ODBC results have the biggest improvement.
Server-side statement cache for Oracle makes individual reading of records 2 times faster. Wow.
The SQLite3 engine is still the most responsive SQL database here when it comes to reading.
Of course, direct TObjectList engine is pretty fast – more than 700,000 records per second.

ref: http://blog.synopse.info/post/2013/01/28/External-database-speed-improvements

RESTful mORMot


Our Synopse mORMot Framework was designed in accordance with Fielding’s REST architectural style, even without using HTTP and without interacting with the World Wide Web.
Systems which follow REST principles are often referred to as “RESTful”.

Optionally, the Framework is able to serve standard HTTP/1.1 pages over the Internet (by using the mORMotHttpClient / mORMotHttpServer units and the TSQLHttpServer and TSQLHttpClient classes), via an embedded, fast, low-resource HTTP server.

The standard RESTful methods are implemented, i.e. GET/PUT/POST/DELETE.

The following methods were added to the standard REST definition, for locking individual records and for handling database transactions (which speed up database processing) – see the client-side sketch after this list:

  • LOCK to lock a member of the collection;
  • UNLOCK to unlock a member of the collection;
  • BEGIN to initiate a transaction;
  • END to commit a transaction;
  • ABORT to rollback a transaction.
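
As a hedged sketch of how these verbs surface on the client side (assuming a connected TSQLRestClientURI instance named Client and a TSQLRecord class named TSQLSampleRecord – both names are illustrative), a transaction could be wrapped like this:

  // sketch only: TransactionBegin/Commit/RollBack map to the BEGIN/END/ABORT verbs
  if Client.TransactionBegin(TSQLSampleRecord) then
  try
    Client.Add(Rec,true); // Rec is an already populated TSQLSampleRecord instance
    Client.Commit;        // sends END to commit the transaction
  except
    Client.RollBack;      // sends ABORT to rollback the transaction on error
  end;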

The GET method has an optional pagination feature, compatible with the YUI DataSource Request Syntax for data pagination – see the TSQLRestServer.URI method and http://developer.yahoo.com/yui/datatable/#data . Of course, this breaks the “Every Resource is Identified by a Unique Identifier” RESTful principle – but it is much easier to work with, e.g. to implement paging or custom filtering.

From the Delphi code point of view, a RESTful Client-Server architecture is implemented by inheriting some common methods and properties from a main class.

Then a full set of classes inherit from this TSQLRest abstract parent, e.g. TSQLRestClient, TSQLRestClientURI, TSQLRestServer.
The TSQLRest class therefore implements a common ancestor for both the Client and Server classes.

BLOB fields

BLOB fields are defined as TSQLRawBlob published properties in the class definitions – TSQLRawBlob is an alias to the RawByteString type (defined in SynCommons.pas for Delphi versions up to 2007, since RawByteString only appeared with Delphi 2009). But their content is not included in the standard RESTful methods of the framework, to spare network bandwidth.

The RESTful protocol allows BLOB to be retrieved (GET) or saved (PUT) via a specific URL, like:

 ModelRoot/TableName/TableID/BlobFieldName

This is even better than the standard JSON encoding, which works well but converts BLOBs to/from hexadecimal values, and therefore needs twice their normal size. By using such a dedicated URL, the data can be transferred as raw binary.

Some dedicated methods of the generic TSQLRest class handle BLOB fields: RetrieveBlob and UpdateBlob.
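
A minimal sketch of those two methods (assuming a TSQLSampleRecord class with a Photo: TSQLRawBlob published property, and an already connected Client: TSQLRest instance – both names are illustrative):

 var Photo: TSQLRawBlob;
  ...
  // GET ModelRoot/SampleRecord/aID/Photo
  if Client.RetrieveBlob(TSQLSampleRecord,aID,'Photo',Photo) then
    // PUT the (possibly modified) content back to the same URL
    Client.UpdateBlob(TSQLSampleRecord,aID,'Photo',Photo);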

JSON representation

The “04 – HTTP Client-Server” sample application available in the framework source code tree can be used to show how the framework is AJAX-ready, and can be proudly compared to any other REST server (like CouchDB) also based on JSON.

First, deactivate authentication by changing the parameter from true to false in Unit2.pas:

 DB := TSQLRestServerDB.Create(Model,ChangeFileExt(paramstr(0),'.db3'),
 false);

and by commenting the following line in Project04Client.dpr:

  Form1.Database := TSQLHttpClient.Create(Server,'8080',Form1.Model);
  // TSQLHttpClient(Form1.Database).SetUser('User','synopse');
  Application.Run;

Then you can use your browser to test the JSON content:

  • Start the Project04Server.exe program: the background HTTP server, together with its SQLite3 database engine;
  • Start any Project04Client.exe instances, and add/find any entry, to populate the database a little;
  • Close the Project04Client.exe programs, if you want;
  • Open your browser, and type into the address bar:
      http://localhost:8080/root
    
  • You’ll see an error message:
    TSQLHttpServer Server Error 400
    
  • Type into the address bar:
      http://localhost:8080/root/SampleRecord
    
  • You’ll see the IDs of all SampleRecord entries, encoded as a JSON list, e.g.
     [{"ID":1},{"ID":2},{"ID":3},{"ID":4}]
    
  • Type into the address bar:
      http://localhost:8080/root/SampleRecord/1
    
  • You’ll see the content of the SampleRecord of ID=1, encoded as JSON, e.g.
    {"ID":1,"Time":"2010-02-08T11:07:09","Name":"AB","Question":"To be or not to be"}
    
  • Type into the address bar any other REST command, and the database will reply to your request…

You have got a full HTTP/SQLite3 RESTful JSON server in less than 400 KB. :)

Note that Internet Explorer and old versions of Firefox do not recognize the application/json; charset=UTF-8 content type for internal viewing. This is a limitation of those browsers, so the above requests will download the content as .json files, but it won’t prevent AJAX requests from working as expected.

Stateless ORM

Our framework is implementing REST as a stateless protocol, just as the HTTP/1.1 protocol it could use as its communication layer.

A stateless server is a server that treats each request as an independent transaction that is unrelated to any previous request.

At first, you could find it a bit disappointing compared to a classic Client-Server approach. In a stateless world, you are never sure that your Client data is up to date. The only place where the data is safe is the server. In the web world, this is nothing unusual. But if you are coming from a rich-client background, this may concern you: you probably have the habit of writing synchronization code from the server to replicate all changes to all its clients. This is no longer necessary in a stateless architecture.

The main rule of this architecture is to ensure that the Server is the only reference, and that the Client is able to retrieve any pending update from the Server side. That is, always modify a record’s content on the server side, then refresh the client to retrieve the modified value. Do not modify the client side directly; always pass through the Server. The UI components of the framework follow these principles. Client-side modifications can be performed, but must be made in a separate, autonomous table/database. This will avoid any synchronization problems in case of concurrent client modification.

A stateless design is also pretty convenient when working with complex solutions.
Even Domain-Driven Design tends to restrain state to its smallest possible extent, since state introduces complexity.

ref: http://blog.synopse.info/post/2014/01/10/RESTful-mORMot

DataSnap analysis based on Speed & Stability tests – Part 2

Many thanks to all those who participated in the discussion in the previous post. All comments, criticisms and compliments are welcome. This is what motivates me to keep writing this blog.

The repercussion of the previous post was much bigger than I expected. The Embarcadero team corrected some of the DataSnap problems and therefore I decided to redo these tests and create this new post.

This post is a continuation of DataSnap analysis based on Speed & Stability tests, if you haven’t read the previous post, please do it before you go ahead.

Embarcadero

To be honest, I surprised myself at the reaction of Embarcadero. I really thought they would inundate me with criticism and would do nothing. I expected some of those unfounded answers that sometimes I receive. I thought nothing would happen, no fix or solution.

Actually, I received some criticism during the Embarcadero webinar presented by David I. and Marco Cantù. Someone questioned them about my tests, and both said that I wanted a better wizard, with more options, that would allow me to configure everything without writing a line of code. It was a totally distorted view of my post. I only used the wizard to avoid claims that the problem was in my implementation and not in the framework. I don’t like wizards, and Embarcadero doesn’t need to create better or fancier ones. All I want is for the framework to work and to have decent documentation.

In the end they understood that it wasn’t a wizard problem or a lazy programmer, but a real problem in the framework. After some discussion they acknowledged the problem and then worked to correct it.

Surely there was pressure from the community on the Embarcadero team, demanding the quality they promise in their products. The participation of many people from the Delphi community influenced the speed at which Embarcadero worked on this problem. I don’t think it should be that way. If I have a problem and I present it to Embarcadero, by e-mail or QC, they should look at it the same way. Why is it necessary for the Delphi community to press them for a fix for a bug?

The impression I have, evaluating the comments of David I. and Marco Cantù, is that the response we’ve had from Embarcadero is mainly due to the commitment of Marco Cantù. Less than 2 hours after sending the email asking him to look at the blog I got feedback, and since then he has spoken with me daily to try to solve these problems. Thank you Marco, and congratulations for the work you are doing. I really hope you can influence other employees of Embarcadero; maybe this way we’ll get better feedback without needing to post an article on a blog.

For those who didn’t follow the comments of the previous post I will give a brief summary of what happened. A few days after posting my article, Marco Cantú wrote an article on his blog talking about optimizations that can be performed in DataSnap servers to improve performance and stability. He also conducted a webinar talking a bit about it. What happened is that I applied all the optimizations that were suggested and the problem persisted. Marco also realized this and has allocated a team to work on it and try to find the problems. After some time Embarcadero released XE3 Update 1 with a series of fixes for DataSnap. In this article I will present the tests I did with this new version and an analysis of the results.

Optimizations

I received some criticism concerning the DataSnap server I used. In general, they criticized the fact that I didn’t apply certain optimizations, or that I didn’t use ISAPI, for example. Thank you for the constructive criticism; it helped me a lot.

I agree that the way the server was developed wasn’t ideal to test the performance of the framework. Maybe I haven’t noticed it because my focus was to show the stability problems of the framework, not so much the performance problem. Anyway, I applied these optimizations and redid the tests, which I will introduce below.

My main argument with respect to these criticisms is that the DataSnap server should at least work, even without any optimizations. It wouldn’t be as fast as it could be, but at least it shouldn’t crash. I still think that way. The truth is that the framework was not built to be scalable. There are not enough optimizations to make it work in environments with a lot of concurrency.

I will make a brief description of the optimizations proposed by the community and Embarcadero.

Keep-Alive

Michael Justin noticed a mistake I made on the DataSnap server. Thank you Michael. All servers should use Keep-Alive. What I didn’t know was that the DataSnap server had a setting for this and that it was disabled by default. Thus, the DataSnap test was the only one that didn’t use Keep-Alive.

Theoretically it would be an interesting optimization, particularly for the test case presented, where many calls are made to the server. But I found a problem related to it. By enabling Keep-Alive on the server, performance drops absurdly (5 requests/second). This occurs only when the test server and client are on different machines; in a local test it works perfectly. I reported this problem to Marco Cantù and he will evaluate it further.

With that, I was forced to disable Keep-Alive, so all tests with DataSnap run without the Keep-Alive option. It would be unfair to turn this feature off on the other servers, because they support it and it works perfectly. I also didn’t have enough time to redo the tests for all servers.

Memory consumption

One of the issues that I observed was related to memory consumption. Why does the DataSnap server consume so much memory if the called method does absolutely nothing?

Maybe I don’t know how to explain it exactly, but I will try. Basically, DataSnap creates a session for each HTTP connection that it receives. This session is destroyed only after 20 minutes; in other words, during the first 20 minutes of the test the memory consumption only goes up, and after that it tends to stabilize. I really have no idea why DataSnap does this. In a REST application I don’t see much sense in these sessions as a default configuration. Of course, sessions can be helpful, but I can’t understand why this is the default. Indeed, DataSnap doesn’t have a configuration for it: it appears you simply have to use this session control, without being able to choose otherwise (there is no documentation). The mORMot framework has session control too, but it’s configurable and doesn’t consume so much memory.

Anyway, there is a way around this problem. Daniele Teti wrote an article about it on his blog – take a look. The solution that I will show here was published by him on his blog. Thanks, Daniele.

uses System.StrUtils, DataSnap.DSSession, Data.DBXPlatform;

function TServerMethods1.HelloWorld: String;
begin
  Result := 'Hello World';
  // close the DataSnap session as soon as this invocation returns,
  // so that its memory is released immediately
  GetInvocationMetaData.CloseSession := True;
end;

After running this method the session is closed and memory consumption is lower. Of course, there is still the overhead of creating and destroying the session.

Thread Pool

Ondrej Kelle alerted me to the use of a thread pool. Thank you, Ondrej; I really didn’t know about that.

All servers were using a thread pool except DataSnap. Marco Cantù put more information on his blog about how to implement a thread pool in DataSnap. Implementing a thread pool should avoid all the overhead of creating and destroying Indy threads that I talked about in the first post.

ISAPI

I received severe criticism for doing a test without using ISAPI. I was worried when I read these criticisms, especially coming from renowned and experienced people.

I don’t want to justify myself, but rather to explain why I hadn’t used ISAPI. As you know, I don’t have much experience with servers; I’m starting to work with them now. Practically every opinion I have comes from information I extracted from the internet, books, etc. I’ve never worked effectively with ISAPI. I’ve configured IIS a few times and done some tests, but I have never had anything in production using ISAPI. In almost every lecture I attended on DataSnap (this includes the Delphi Conference 2011 and 2012 and webinars), it was presented as an advantage of DataSnap that it doesn’t need IIS to run the server. The “experts” reported problems, difficulties, etc. For me it was clear (perhaps wrongly) that not using IIS was a great advantage. Because of this I had not considered the possibility of using ISAPI. Despite this, when the server started to fail, I created an ISAPI server to test and the problem also occurred.

To add a bit more information to my study, I included an ISAPI server in the tests that I will present in this post.

The new tests

The tests were conducted in the same environment as the previous tests, using the same methodology. Three additional servers were added as well:

  • A server using the same sources as the first version, but compiled with XE3 Update 1. No optimizations.
  • A VCL server rather than a console one (a suggestion from Marco Cantù), with all the improvements suggested by the community and Embarcadero.
  • An ISAPI server.

I sent the VCL server code to Marco before running the tests. Unfortunately he didn’t have enough time to evaluate it carefully, but he told me it seemed like a good implementation.

The servers were added to the folder on github.

NOTE: I didn’t find a way to measure the ISAPI server’s memory consumption; I believe it can’t easily be done, since its operation is quite different. Therefore it does not appear in the memory consumption graphs. If someone with more experience has any ideas, feel free to comment.

Pierre Yager suggested I run tests on the Delphi on Rails framework. I did some quick tests and the framework looks promising; it certainly performs better than DataSnap. Pierre, unfortunately I didn’t have time to run all the tests with this framework. Anyway, I think it is a valid alternative, and anyone looking for options should certainly evaluate this framework as well. Thank you.

Tests without concurrency

The methodology hasn’t changed. In these tests we used only one thread making calls to the server, without any concurrency.

[Chart: requests per second, 1 thread]

Regarding performance, this test showed no significant change. The update provided by Embarcadero changed nothing in this respect (and that was not the purpose of the update). The optimizations also had no effect.

Perhaps the one optimization that could have helped in this test is Keep-Alive, which malfunctioned, as explained above.

Let’s take a look at the memory consumption.

[Chart: memory consumption, 1 thread]

We did see changes in memory consumption. Although the graph shows slightly lower consumption between XE3 and XE3 Update 1, I do not believe that Embarcadero made any changes in this regard, especially because we don’t see the same behavior in the test with 1 million requests. The memory consumption of DataSnap servers is simply unpredictable and unstable.

The optimizations have a big effect here. The code proposed by Daniele Teti was effective and memory consumption dropped significantly.

Tests with concurrency

In these tests we do not have data for the DataSnap server (Console) compiled with XE3 (pre-Update 1), because it could not run these tests, as explained in the first post.

[Chart: requests per second, concurrent threads]

The performance of all DataSnap servers in the tests with concurrency was mediocre. Thanks to XE3 Update 1, at least the server did not crash. You may notice that the server without optimizations appears to perform better than the others, but we have to evaluate this carefully, because a new variable needs to be considered. Something I had not seen in any of the other tests appeared now: the number of requests rejected by the server, or simply left unanswered.
[Chart: error rates, 100 threads]

The chart above shows the error rates in the test with one hundred threads. (Unfortunately I do not have the rates for the test with 50 threads, because I did not record them during those tests; the error rates with 50 threads should be smaller.)

All DataSnap servers showed very high error rates, ranging from 61% to 97%. Among the servers that had errors (only DataSnap), the one with the lowest error rate was ISAPI, with 61.48%.

[Chart: memory consumption, concurrent threads]

The memory consumption of the DataSnap server without optimizations is quite high. With the optimizations it is much smaller, but it still showed significant and disturbing consumption in the second test. I do not know where this consumption comes from, as the sessions are being destroyed; I believe it is memory leaks.

Strange behavior

I could not monitor all the tests in real time simply because they take too long. Some tests took more than 10 hours and ran overnight. It took me a week to complete all these tests because of this slowness.

At some points I did monitor the tests, and I noticed a very strange behavior. In this case it was the VCL server compiled with XE3 Update 1, with optimizations, running the test with 100 threads.

[Screenshots: DataSnap server behavior during the 100-thread test]

In these pictures you can see that the memory consumption varies a lot and that there is an extremely high peak in I/O. The I/O was stable at 22.3 KB and suddenly jumps to 464.5 KB; after a while it stabilizes again. At the same time the server was running 1153 threads – an absurd amount. I caught more than 1450 threads running during this test. Where do they come from?

I have no theory to explain this behavior. Theoretically the overhead of Indy threads should no longer exist, since I am using a thread pool. I leave it in the hands of experts.

I noticed another very strange behavior in the test with only one thread. I spent a long time researching it but have not found the reason. Some programs interfere with the DataSnap server’s speed. I identified two: Google Chrome and EyeBeam (a VoIP client). The strange thing is that they cause the server to become much faster. The difference amounts to more than four times in some cases. In other words, the DataSnap server is four times faster with EyeBeam running.

With Google Chrome the difference varies. As you use the browser, DataSnap’s performance changes, but if you leave the browser idle it has no effect.

Obviously, all these tests were done without interference. I think it would be interesting for Embarcadero to investigate this. They can probably find a way to optimize the DataSnap server without major changes. After all, if a third-party program can trigger it, it should not be so difficult to implement internally.

I have not noticed this interference in other servers.

Conclusion

With the fixes made by Embarcadero in XE3 Update 1, I could finish the tests and introduce some new elements to the study. Now we can see more clearly how far we can go with DataSnap. For me it became even clearer that DataSnap is an option only for small projects with modest requirements. Even using ISAPI I did not get satisfactory results in this environment.

Marco Cantù made it clear to me that Update 1 includes some fixes for the problems and was not a major restructuring. He also made it clear that after this update the server is more stable and able to work with a slightly larger number of connections, but we cannot expect it to work in environments with heavy concurrency, which is my case and the test case presented here. Marco said that properly fixing these problems would require a complete redesign of the framework, and that they are thinking about this possibility. I hope Embarcadero actually invests a little more in DataSnap. Maybe it will become the product they are selling.

My final criticism is of Embarcadero’s update policy, which is the target of much criticism around the world. All users of DataSnap in versions prior to XE3 just have to live with these problems, and it has been like this for years. Embarcadero simply forces companies to buy a new product to gain access to a bug fix. As an analogy: you buy a car and it comes with an engine problem; instead of the manufacturer maintaining it, replacing the engine or something, they correct the problem in a new model and force you to buy the new one. I am not a lawyer and do not know much about it, but I think this behavior violates the consumer protection code in my country. At the company where I work, all purchases of Embarcadero products are suspended (due to the huge number of issues with the Brazilian support), so I will not have access to this update. We bought XE2 to use DataSnap, it had some problems, and now we would have to buy XE3 to make it work. That simply will not happen. I think it’s totally unacceptable. What do you think of Embarcadero’s update policy?

I still cannot add my blog to the DelphiFeeds list, so if you can help spread the word, I thank you. I added a suggestion on the DelphiFeeds feedback page, but it seems I need to get quite a few votes. If you wish, you can help me there.

Liked the post? Leave a comment!

ref : https://robertocschneiders.wordpress.com/2013/01/09/datasnap-analysis-based-on-speed-stability-tests-part-2/comment-page-1/

Cds

Sorting and indexing

Using indexes provides several benefits to your applications:

  • They allow clientdatasets to locate data quickly.
  • They let you apply ranges to limit the available records.
  • They let your application set up relationships with other datasets such as lookup tables or master/detail forms.
  • They specify the order in which records appear.

If a clientdataset represents server data or uses an external provider, it inherits a default index and sort order based on the data it receives. The default index is called DEFAULT_ORDER. You can use this ordering, but you cannot change or delete the index.

In addition to the default index, the clientdataset maintains a second index, called CHANGEINDEX, on the changed records stored in the change log (Delta property).

CHANGEINDEX orders all records in the clientdataset as they would appear if the changes specified in Delta were applied. CHANGEINDEX is based on the ordering inherited from DEFAULT_ORDER. As with DEFAULT_ORDER, you cannot change or delete the CHANGEINDEX index.

You can use other existing indexes, and you can create your own indexes. The following sections describe how to create and use indexes with clientdatasets.

Note You may also want to review the material on indexes in table type datasets, which also applies to clientdatasets. This material is in “Sorting records with indexes” on page 24-26 and “Limiting records with ranges” on page 24-31.

 

Adding a new index

There are three ways to add indexes to a clientdataset:

  • To create a temporary index at runtime that sorts the records in the clientdataset, you can use the IndexFieldNames property. Specify field names, separated by semicolons. Ordering of field names in the list determines their order in the index.
    This is the least powerful method of adding indexes. You can’t specify a descending or case-insensitive index, and the resulting indexes do not support grouping. These indexes do not persist when you close the dataset, and are not saved when you save the clientdataset to a file.
  • To create an index at runtime that can be used for grouping, call AddIndex.
    AddIndex lets you specify the properties of the index (see the sketch after this list), including:

– The name of the index. This can be used for switching indexes at runtime.

– The fields that make up the index. The index uses these fields to sort records and to locate records that have specific values on these fields.

– How the index sorts records. By default, indexes impose an ascending sort order (based on the machine's locale). This default sort order is case-sensitive. You can set options to make the entire index case-insensitive or to sort in descending order. Alternatively, you can provide a list of fields to be sorted case-insensitively and a list of fields to be sorted in descending order.

– The default level of grouping support for the index.

Indexes created with AddIndex do not persist when the clientdataset is closed.
(That is, they are lost when you reopen the clientdataset). You can’t call AddIndex when the dataset is closed. Indexes you add using AddIndex are not saved when you save the clientdataset to a file.

  • The third way to create an index is at the time the clientdataset is created. Before creating the clientdataset, specify the desired indexes using the IndexDefs property. The indexes are then created along with the underlying dataset when you call CreateDataSet. See “Creating and deleting tables” on page 24-38 for more information about creating clientdatasets.
    As with AddIndex, indexes you create with the dataset support grouping, can sort in ascending order on some fields and descending order on others, and can be case insensitive on some fields and case sensitive on others. Indexes created this way always persist and are saved when you save the clientdataset to a file.
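
The following is a minimal sketch of the first two approaches described above. The component name ClientDataSet1, the event handler, and the Company and Amount fields are assumptions used only for illustration:

procedure TForm1.SortButtonClick(Sender: TObject);
begin
  // Quick temporary sort: ascending, case-sensitive, no grouping support
  ClientDataSet1.IndexFieldNames := 'Company;Amount';

  // Richer runtime index: descending on Amount, grouping support up to level 1
  ClientDataSet1.AddIndex('ByCompanyAmount', 'Company;Amount', [],
    'Amount' { DescFields }, '' { CaseInsFields }, 1 { GroupingLevel });
  ClientDataSet1.IndexName := 'ByCompanyAmount';
end;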

Tip You can index and sort on internally calculated fields with clientdatasets.

 

Deleting and switching indexes

To remove an index you created for a clientdataset, call DeleteIndex and specify the name of the index to remove. You cannot remove the DEFAULT_ORDER and CHANGEINDEX indexes.

To use a different index when more than one index is available, use the IndexName property to select the index to use. At design time, you can select from available indexes in the IndexName property drop-down box in the Object Inspector.
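
For example, assuming the runtime index created in the earlier sketch:

ClientDataSet1.IndexName := 'ByCompanyAmount';   // switch to the named index
ClientDataSet1.IndexName := 'DEFAULT_ORDER';     // switch back to the default ordering
ClientDataSet1.DeleteIndex('ByCompanyAmount');   // remove the index you created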

 

Using indexes to group data

When you use an index in your clientdataset, it automatically imposes a sort order on the records. Because of this order, adjacent records usually contain duplicate values on the fields that make up the index. For example, consider the following fragment from an orders table that is indexed on the SalesRep and Customer fields:

SalesRep              Customer             OrderNo              Amount

1                             1                             5                             100

1                             1                             2                             50

1                             2                             3                             200

1                             2                             6                             75

2                             1                             1                             10

2                             3                             4                             200

Because of the sort order, adjacent values in the SalesRep column are duplicated.

Within the records for SalesRep 1, adjacent values in the Customer column are duplicated. That is, the data is grouped by SalesRep, and within the SalesRep group it is grouped by Customer. Each grouping has an associated level. In this case, the SalesRep group has level 1 (because it is not nested in any other groups) and the Customer group has level 2 (because it is nested in the group with level 1). Grouping level corresponds to the order of fields in the index.

Clientdatasets let you determine where the current record lies within any given grouping level. This allows your application to display records differently, depending on whether they are the first record in the group, in the middle of a group, or the last record in a group. For example, you might want to display a field value only if it is on the first record of the group, eliminating the duplicate values. To do this with the previous table results in the following:

SalesRep              Customer              OrderNo               Amount

1                     1                     5                     100

                                            2                     50

                      2                     3                     200

                                            6                     75

2                     1                     1                     10

                      3                     4                     200

To determine where the current record falls within any group, use the GetGroupState method. GetGroupState takes an integer giving the level of the group and returns a value indicating where the current record falls within the group (first record, last record, or neither).

When you create an index, you can specify the level of grouping it supports (up to the number of fields in the index). GetGroupState can’t provide information about groups beyond that level, even if the index sorts records on additional fields.
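
The following sketch shows one way to use GetGroupState to suppress duplicate values, as in the table above. It assumes a ClientDataSet1 opened on an index that supports grouping; the helper name and the gbFirst test against the set returned by GetGroupState are illustrative:

function TForm1.GroupDisplayText(Level: Integer; Field: TField): string;
begin
  // Show the value only on the first record of its group; blank out the duplicates
  if gbFirst in ClientDataSet1.GetGroupState(Level) then
    Result := Field.AsString
  else
    Result := '';
end;

Calling GroupDisplayText(1, ClientDataSet1.FieldByName('SalesRep')) from a grid's drawing code, for instance, would reproduce the blanked-out SalesRep column shown above.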

 

Representing calculated values

As with any dataset, you can add calculated fields to your clientdataset. These are fields whose values you calculate dynamically, usually based on the values of other fields in the same record. For more information about using calculated fields, see “Defining a calculated field” on page 25-7.

Clientdatasets, however, let you optimize when fields are calculated by using internally calculated fields. For more information on internally calculated fields, see “Using internally calculated fields in clientdatasets” below.

You can also tell clientdatasets to create calculated values that summarize the data in several records using maintained aggregates. For more information on maintained aggregates, see “Using maintained aggregates” on page 29-11.

 

Using internally calculated fields in clientdatasets

In other datasets, your application must compute the value of calculated fields every time the record changes or the user edits any fields in the current record. It does this in an OnCalcFields event handler.

While you can still do this, clientdatasets let you minimize the number of times calculated fields must be recomputed by saving calculated values in the clientdataset’s data. When calculated values are saved with the clientdataset, they must still be recomputed when the user edits the current record, but your application need not recompute values every time the current record changes. To save calculated values in the clientdataset’s data, use internally calculated fields instead of calculated fields.

Internally calculated fields, just like calculated fields, are calculated in an OnCalcFields event handler. However, you can optimize your event handler by checking the State property of your clientdataset. When State is dsInternalCalc, you must recompute internally calculated fields. When State is dsCalcFields, you need only recompute regular calculated fields.
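
A minimal sketch of such a handler, assuming a hypothetical internally calculated LineTotal field computed from Qty and Price fields:

procedure TForm1.OrdersCDSCalcFields(DataSet: TDataSet);
begin
  // Recompute the internally calculated field only when the dataset asks for it
  if DataSet.State = dsInternalCalc then
    DataSet.FieldByName('LineTotal').AsCurrency :=
      DataSet.FieldByName('Qty').AsInteger *
      DataSet.FieldByName('Price').AsCurrency;
end;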

To use internally calculated fields, you must define the fields as internally calculated before you create the clientdataset. Depending on whether you use persistent fields or field definitions, you do this in one of the following ways:

  • If you use persistent fields, define fields as internally calculated by selecting InternalCalc in the Fields editor.
  • If you use field definitions, set the InternalCalcField property of the relevant field definition to True.

Note Other types of datasets use internally calculated fields. However, with other datasets, you do not calculate these values in an OnCalcFields event handler. Instead, they are computed automatically by the BDE or remote database server.

 

Using maintained aggregates

Clientdatasets provide support for summarizing data over groups of records.

Because these summaries are automatically updated as you edit the data in the dataset, this summarized data is called a “maintained aggregate.”

In their simplest form, maintained aggregates let you obtain information such as the sum of all values in a column of the clientdataset. They are flexible enough, however, to support a variety of summary calculations and to provide subtotals over groups of records defined by a field in an index that supports grouping.

 

Specifying aggregates

To specify that you want to calculate summaries over the records in a clientdataset, use the Aggregates property. Aggregates is a collection of aggregate specifications (TAggregate). You can add aggregate specifications to your clientdataset using the Collection Editor at design time, or using the Add method of Aggregates at runtime. If you want to create field components for the aggregates, create persistent fields for the aggregated values in the Fields Editor.

Note When you create aggregated fields, the appropriate aggregate objects are added to the clientdataset’s Aggregates property automatically. Do not add them explicitly when creating aggregated persistent fields. For details on creating aggregated persistent fields, see “Defining an aggregate field” on page 25-10.
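
As a sketch of the runtime approach, assuming a ClientDataSet1 with an Amount field:

with ClientDataSet1.Aggregates.Add do
begin
  AggregateName := 'TotalAmount';
  Expression := 'Sum(Amount)';
  Active := True;                          // enable this individual aggregate
end;
ClientDataSet1.AggregatesActive := True;   // enable aggregate calculation as a whole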

For each aggregate, the Expression property indicates the summary calculation it represents. Expression can contain a simple summary expression such as

Sum(Field1)

or a complex expression that combines information from several fields, such as

Sum(Qty * Price) - Sum(AmountPaid)

Aggregate expressions include one or more of the summary operators listed in Table 29.2.

Table 29.2 Summary operators for maintained aggregates

Operator              Use

Sum                        Totals the values for a numeric field or expression

Avg                         Computes the average value for a numeric or date-time field or expression

Count                      Specifies the number of non-blank values for a field or expression

Min                         Indicates the minimum value for a string, numeric, or date-time field or expression

Max                        Indicates the maximum value for a string, numeric, or date-time field or expression

The summary operators act on field values or on expressions built from field values using the same operators you use to create filters. (You can’t nest summary operators, however.) You can create expressions by using operators on summarized values with other summarized values, or on summarized values and constants.

However, you can’t combine summarized values with field values, because such expressions are ambiguous (there is no indication of which record should supply the field value). These rules are illustrated in the following expressions:

Sum(Qty * Price) {legal — summary of an expression on fields}

Max(Field1) - Max(Field2) {legal — expression on summaries}

Avg(DiscountRate) * 100 {legal — expression of summary and constant}

Min(Sum(Field1)) {illegal — nested summaries}

Count(Field1) - Field2 {illegal — expression of summary and field}

 

Aggregating over groups of records

By default, maintained aggregates are calculated so that they summarize all the records in the clientdataset. However, you can specify that you want to summarize over the records in a group instead. This lets you provide intermediate summaries such as subtotals for groups of records that share a common field value.

Before you can specify a maintained aggregate over a group of records, you must use an index that supports the appropriate grouping. See “Using indexes to group data” on page 29-9 for information on grouping support.

Once you have an index that groups the data in the way you want it summarized, specify the IndexName and GroupingLevel properties of the aggregate to indicate what index it uses, and which group or subgroup on that index defines the records it summarizes.

For example, consider the following fragment from an orders table that is grouped by SalesRep and, within SalesRep, by Customer:

SalesRep              Customer             OrderNo              Amount

1                             1                             5                             100

1                             1                             2                             50

1                             2                             3                             200

1                             2                             6                             75

2                             1                             1                             10

2                             3                             4                             200

The following code sets up a maintained aggregate that indicates the total amount for each sales representative:

Agg.Expression := 'Sum(Amount)';
Agg.IndexName := 'SalesCust';
Agg.GroupingLevel := 1;
Agg.AggregateName := 'Total for Rep';

To add an aggregate that summarizes for each customer within a given sales representative, create a maintained aggregate with level 2.

Maintained aggregates that summarize over a group of records are associated with a specific index. The Aggregates property can include aggregates that use different indexes. However, only the aggregates that summarize over the entire dataset and those that use the current index are valid. Changing the current index changes which aggregates are valid. To determine which aggregates are valid at any time, use the ActiveAggs property.

 

Obtaining aggregate values

To get the value of a maintained aggregate, call the Value method of the TAggregate object that represents the aggregate. Value returns the maintained aggregate for the group that contains the current record of the clientdataset.

When you are summarizing over the entire clientdataset, you can call Value at any time to obtain the maintained aggregate. However, when you are summarizing over grouped information, you must be careful to ensure that the current record is in the group whose summary you want. Because of this, it is a good idea to obtain aggregate values at clearly specified times, such as when you move to the first record of a group or when you move to the last record of a group. Use the GetGroupState method to determine where the current record falls within a group.

To display maintained aggregates in data-aware controls, use the Fields editor to create a persistent aggregate field component. When you specify an aggregate field in the Fields editor, the clientdataset’s Aggregates property is automatically updated to include the appropriate aggregate specification. The AggFields property contains the new aggregated field component, and the FindField method returns it.
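
For example, the following sketch reads the 'Total for Rep' aggregate defined earlier when the cursor reaches the last record of a SalesRep group. The component and method names are assumptions; the cast through TAggregate is used only because Aggregates is a collection:

procedure TForm1.ShowRepTotal;
var
  Total: Variant;
begin
  // The value is only meaningful while the current record is inside the group
  if gbLast in ClientDataSet1.GetGroupState(1) then
  begin
    Total := TAggregate(ClientDataSet1.Aggregates.Items[0]).Value;
    ShowMessage('Total for this rep: ' + VarToStr(Total));
  end;
end;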

 

Copying data from another dataset

To copy the data from another dataset at design time, right click the clientdataset and choose Assign Local Data. A dialog appears listing all the datasets available in your project. Select the one whose data and structure you want to copy and choose OK. When you copy the source dataset, your clientdataset is automatically activated.

To copy from another dataset at runtime, you can assign its data directly or, if the source is another clientdataset, you can clone the cursor.

 

Assigning data directly

You can use the clientdataset’s Data property to assign data to a clientdataset from another dataset. Data is a data packet in the form of an OleVariant. A data packet can come from another clientdataset or from any other dataset by using a provider. Once a data packet is assigned to Data, its contents are displayed automatically in data-aware controls connected to the clientdataset by a data source component.

When you open a clientdataset that represents server data or that uses an external provider component, data packets are automatically assigned to Data.

When your clientdataset does not use a provider, you can copy the data from another clientdataset as follows:

ClientDataSet1.Data := ClientDataSet2.Data;

Note When you copy the Data property of another clientdataset, you copy the change log as well, but the copy does not reflect any filters or ranges that have been applied. To include filters or ranges, you must clone the source dataset’s cursor instead.

If you are copying from a dataset other than a clientdataset, you can create a dataset provider component, link it to the source dataset, and then copy its data:

TempProvider := TDataSetProvider.Create(Form1);
TempProvider.DataSet := SourceDataSet;
ClientDataSet1.Data := TempProvider.Data;
TempProvider.Free;

Note When you assign directly to the Data property, the new data packet is not merged into the existing data. Instead, all previous data is replaced.

If you want to merge changes from another dataset, rather than copying its data, you must use a provider component. Create a dataset provider as in the previous example, but attach it to the destination dataset and instead of copying the data property, use the ApplyUpdates method:

TempProvider := TDataSetProvider.Create(Form1);
TempProvider.DataSet := ClientDataSet1;
TempProvider.ApplyUpdates(SourceDataSet.Delta, -1, ErrCount);
TempProvider.Free;

 

Cloning a clientdataset cursor

Clientdatasets use the CloneCursor method to let you work with a second view of the data at runtime. CloneCursor lets a second clientdataset share the original clientdataset’s data. This is less expensive than copying all the original data, but, because the data is shared, the second clientdataset can’t modify the data without affecting the original clientdataset.

CloneCursor takes three parameters: Source specifies the clientdataset to clone. The last two parameters (Reset and KeepSettings) indicate whether to copy information other than the data. This information includes any filters, the current index, links to a master table (when the source dataset is a detail set), the ReadOnly property, and any links to a connection component or provider.

When Reset and KeepSettings are False, a cloned clientdataset is opened, and the settings of the source clientdataset are used to set the properties of the destination.

When Reset is True, the destination dataset’s properties are given the default values (no index or filters, no master table, ReadOnly is False, and no connection component or provider is specified). When KeepSettings is True, the destination dataset’s properties are not changed.
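
A brief sketch, assuming two clientdataset components on the same form:

// Share ClientDataSet1's data and take over its filters, index, and other settings
ClientDataSet2.CloneCursor(ClientDataSet1, False);

// Share the data but start from default settings (no filter, default index, and so on)
ClientDataSet2.CloneCursor(ClientDataSet1, True);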

 

Adding application-specific information to the data

Application developers can add custom information to the clientdataset’s Data property. Because this information is bundled with the data packet, it is included when you save the data to a file or stream. It is copied when you copy the data to another dataset. Optionally, it can be included with the Delta property so that a provider can read this information when it receives updates from the clientdataset.

To save application-specific information with the Data property, use the SetOptionalParam method. This method lets you store an OleVariant that contains the data under a specific name.

To retrieve this application-specific information, use the GetOptionalParam method, passing in the name that was used when the information was stored.
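
A short sketch; the parameter name 'AppVersion' and the string variable Version are illustrative:

// Store a custom value in the data packet; True also includes it in Delta
ClientDataSet1.SetOptionalParam('AppVersion', '2.1', True);

// Later (or on the provider side), read it back by name
Version := VarToStr(ClientDataSet1.GetOptionalParam('AppVersion'));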

 

Using a clientdataset to cache updates

By default, when you edit data in most datasets, every time you delete or post a record, the dataset generates a transaction, deletes or writes that record to the database server, and commits the transaction. If there is a problem writing changes to the database, your application is notified immediately: the dataset raises an exception when you post the record.

If your dataset uses a remote database server, this approach can degrade performance due to network traffic between your application and the server every time you move to a new record after editing the current record. To minimize the network traffic, you may want to cache updates locally. When you cache updates, your application retrieves data from the database, caches and edits it locally, and then applies the cached updates to the database in a single transaction. When you cache updates, changes to a dataset (such as posting changes or deleting records) are stored locally instead of being written directly to the dataset’s underlying table. When changes are complete, your application calls a method that writes the cached changes to the database and clears the cache.

Caching updates can minimize transaction times and reduce network traffic.

However, cached data is local to your application and is not under transaction control. This means that while you are working on your local, in-memory copy of the data, other applications can be changing the data in the underlying database table. They also can’t see any changes you make until you apply the cached updates. Because of this, cached updates may not be appropriate for applications that work with volatile data, as you may create or encounter too many conflicts when trying to merge your changes into the database.

Although the BDE and ADO provide alternate mechanisms for caching updates, using a clientdataset for caching updates has several advantages:

  • Applying updates when datasets are linked in master/detail relationships is handled for you. This ensures that updates to multiple linked datasets are applied in the correct order.
  • Clientdatasets give you the most control over the update process. You can set properties to influence the SQL that is generated for updating records, specify the table to use when updating records from a multi-table join, or even apply updates manually from a BeforeUpdateRecord event handler.
  • When errors occur applying cached updates to the database server, only clientdatasets (and dataset providers) provide you with information about the current record value on the database server in addition to the original (unedited) value from your dataset and the new (edited) value of the update that failed.
  • Clientdatasets let you specify the number of update errors you want to tolerate before the entire update is rolled back.

 

Overview of using cached updates

To use cached updates, the following order of processes must occur in an application:

1 Indicate the data you want to edit. How you do this depends on the type of clientdataset you are using:

  • If you are using TClientDataSet, specify the provider component that represents the data you want to edit. This is described in “Specifying a provider” on page 29-25.
  • If you are using a clientdataset associated with a particular data access mechanism, you must

– Identify the database server by setting the DBConnection property to an appropriate connection component.

– Indicate what data you want to see by specifying the CommandText and CommandType properties. CommandType indicates whether CommandText is an SQL statement to execute, the name of a stored procedure, or the name of a table. If CommandText is a query or stored procedure, use the Params property to provide any input parameters.

– Optionally, use the Options property to indicate whether nested detail sets and BLOB data should be included in data packets or fetched separately, whether specific types of edits (insertions, modifications, or deletions) should be disabled, whether a single update can affect multiple server records, and whether the clientdataset’s records are refreshed when it applies updates.
Options is identical to a provider’s Options property. As a result, it allows you to set options that are not relevant or appropriate. For example, there is no reason to include poIncFieldProps, because the clientdataset does not fetch its data from a dataset with persistent fields. Conversely, you do not want to exclude poAllowCommandText, which is included by default, because that would disable the CommandText property, which the clientdataset uses to specify what data it wants. For information on the provider’s Options property, see “Setting options that influence the data packets” on page 30-5.

2 Display and edit the data, permit insertion of new records, and support deletions of existing records. Both the original copy of each record and any edits to it are stored in memory. This process is described in “Editing data” on page 29-5.

3 Fetch additional records as necessary. By default, clientdatasets fetch all records and store them in memory. If a dataset contains many records or records with large BLOB fields, you may want to change this so that the clientdataset fetches only enough records for display and re-fetches as needed. For details on how to control the record-fetching process, see “Requesting data from the source dataset or document” on page 29-26.

4 Optionally, refresh the records. As time passes, other users may modify the data on the database server. This can cause the clientdataset’s data to deviate more and more from the data on the server, increasing the chance of errors when you apply updates. To mitigate this problem, you can refresh records that have not already been edited. See “Refreshing records” on page 29-31 for details.

5 Apply the locally cached records to the database or cancel the updates. For each record written to the database, a BeforeUpdateRecord event is triggered. If an error occurs when writing an individual record to the database, an OnUpdateError event enables the application to correct the error, if possible, and continue updating.

When updates are complete, all successfully applied updates are cleared from the local cache. For more information about applying updates to the database, see “Updating records” on page 29-20.

Instead of applying updates, an application can cancel the updates, emptying the change log without writing the changes to the database. You can cancel the updates by calling the CancelUpdates method. All deleted records in the cache are undeleted, modified records revert to their original values, and newly inserted records simply disappear.

 

Choosing the type of dataset for caching updates

Delphi includes some specialized clientdataset components for caching updates.

Each clientdataset is associated with a particular data access mechanism. These are listed in Table 29.3:

Table 29.3 Specialized clientdatasets for caching updates

Clientdataset                      Data access mechanism

TBDEClientDataSet                 Borland Database Engine

TSimpleDataSet                      dbExpress

TIBClientDataSet                    InterBase Express


In addition, you can cache updates using the generic clientdataset (TClientDataSet) with an external provider and source dataset. For information about using TClientDataSet with an external provider, see “Using a clientdataset with a provider” on page 29-24.

Note The specialized clientdatasets associated with each data access mechanism actually use a provider and source dataset as well. However, both the provider and the source dataset are internal to the clientdataset.

It is simplest to use one of the specialized clientdatasets to cache updates. However, there are times when it is preferable to use TClientDataSet with an external provider:

  • If you are using a data access mechanism that does not have a specialized clientdataset, you must use TClientDataSet with an external provider component. For example, if the data comes from an XML document or custom dataset.
  • If you are working with tables that are related in a master/detail relationship, you should use TClientDataSet and connect it, using a provider, to the master table of two source datasets linked in a master/detail relationship. The clientdataset sees the detail dataset as a nested dataset field. This approach is necessary so that updates to master and detail tables can be applied in the correct order.
  • If you want to code event handlers that respond to the communication between the clientdataset and the provider (for example, before and after the clientdataset fetches records from the provider), you must use TClientDataSet with an external provider component. The specialized clientdatasets publish the most important events for applying updates (OnReconcileError, BeforeUpdateRecord and OnGetTableName), but do not publish the events surrounding communication between the clientdataset and its provider, because they are intended primarily for multi-tiered applications.
  • When using the BDE, you may want to use an external provider and source dataset if you need to use an update object. Although it is possible to code an update object from the BeforeUpdateRecord event handler of TBDEClientDataSet, it can be simpler just to assign the UpdateObject property of the source dataset. For information about using update objects, see “Using update objects to update a dataset” on page 26-40.

 

Indicating what records are modified

While the user edits a clientdataset, you may find it useful to provide feedback about the edits that have been made. This is especially useful if you want to allow the user to undo specific edits, for example, by navigating to them and clicking an “Undo” button.

The UpdateStatus method and the StatusFilter property are useful when providing feedback on what updates have occurred:

  • UpdateStatus indicates what type of update, if any, has occurred for the current record. It can be any of the following values:
    – usUnmodified indicates that the current record is unchanged.
    – usModified indicates that the current record has been edited.
    – usInserted indicates a record that was inserted by the user.
    – usDeleted indicates a record that was deleted by the user.
  • StatusFilter controls what type of updates in the change log are visible. StatusFilter works on cached records in much the same way as filters work on regular data. StatusFilter is a set, so it can contain any combination of the following values:
    – usUnmodified indicates an unmodified record.
    – usModified indicates a modified record.
    – usInserted indicates an inserted record.
    – usDeleted indicates a deleted record.

By default, StatusFilter is the set [usModified, usInserted, usUnmodified]. You can add usDeleted to this set to provide feedback about deleted records as well.
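
For example, a one-line sketch that keeps deleted records visible as well (ClientDataSet1 is an assumed component):

ClientDataSet1.StatusFilter := [usModified, usInserted, usUnmodified, usDeleted];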

Note UpdateStatus and StatusFilter are also useful in BeforeUpdateRecord and OnReconcileError event handlers. For information about BeforeUpdateRecord, see “Intervening as updates are applied” on page 29-21. For information about OnReconcileError, see “Reconciling update errors” on page 29-23.

The following example shows how to provide feedback about the update status of records using the UpdateStatus method. It assumes that you have changed the StatusFilter property to include usDeleted, allowing deleted records to remain visible in the dataset. It further assumes that you have added a calculated field to the dataset called “Status.”

procedure TForm1.ClientDataSet1CalcFields(DataSet: TDataSet);
begin
  with ClientDataSet1 do
  begin
    case UpdateStatus of
      usUnmodified: FieldByName('Status').AsString := '';
      usModified: FieldByName('Status').AsString := 'M';
      usInserted: FieldByName('Status').AsString := 'I';
      usDeleted: FieldByName('Status').AsString := 'D';
    end;
  end;
end;

 

Updating records

The contents of the change log are stored as a data packet in the clientdataset’s Delta property. To make the changes in Delta permanent, the clientdataset must apply them to the database (or source dataset or XML document).

When a client applies updates to the server, the following steps occur:

1 The client application calls the ApplyUpdates method of a clientdataset object. This method passes the contents of the clientdataset’s Delta property to the (internal or external) provider. Delta is a data packet that contains a clientdataset’s updated, inserted, and deleted records.

2 The provider applies the updates, caching any problem records that it can’t resolve itself. See “Responding to client update requests” on page 30-8 for details on how the provider applies updates.

3 The provider returns all unresolved records to the clientdataset in a Result data packet. The Result data packet contains all records that were not updated. It also contains error information, such as error messages and error codes.

4 The clientdataset attempts to reconcile update errors returned in the Result data packet on a record-by-record basis.

 

Applying updates

Changes made to the clientdataset’s local copy of data are not sent to the database server (or XML document) until the client application calls the ApplyUpdates method.

ApplyUpdates takes the changes in the change log, and sends them as a data packet (called Delta) to the provider. (Note that, when using most clientdatasets, the provider is internal to the clientdataset.)

ApplyUpdates takes a single parameter, MaxErrors, which indicates the maximum number of errors that the provider should tolerate before aborting the update process. If MaxErrors is 0, then as soon as an update error occurs, the entire update process is terminated. No changes are written to the database, and the clientdataset’s change log remains intact. If MaxErrors is -1, any number of errors is tolerated, and the change log contains all records that could not be successfully applied. If MaxErrors is a positive value, and more errors occur than are permitted by MaxErrors, all updates are aborted. If fewer errors occur than specified by MaxErrors, all records successfully applied are automatically cleared from the clientdataset’s change log.

ApplyUpdates returns the number of actual errors encountered, which is always less than or equal to MaxErrors plus one. This return value indicates the number of records that could not be written to the database.
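
A minimal sketch of calling ApplyUpdates and reporting failures; the component and handler names are assumptions:

procedure TForm1.SaveButtonClick(Sender: TObject);
var
  ErrorCount: Integer;
begin
  // -1: tolerate any number of errors; failed records remain in the change log
  ErrorCount := ClientDataSet1.ApplyUpdates(-1);
  if ErrorCount > 0 then
    ShowMessage(Format('%d record(s) could not be applied', [ErrorCount]));
end;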

The clientdataset’s ApplyUpdates method does the following:

1 It indirectly calls the provider’s ApplyUpdates method. The provider’s ApplyUpdates method writes the updates to the database, source dataset, or XML document and attempts to correct any errors it encounters. Records that it cannot apply because of error conditions are sent back to the clientdataset.

2 The clientdataset’s ApplyUpdates method then attempts to reconcile these problem records by calling the Reconcile method. Reconcile is an error-handling routine that calls the OnReconcileError event handler. You must code the OnReconcileError event handler to correct errors. For details about using OnReconcileError, see “Reconciling update errors” on page 29-23.

3 Finally, Reconcile removes successfully applied changes from the change log and updates Data to reflect the newly updated records. When Reconcile completes, ApplyUpdates reports the number of errors that occurred.

Important In some cases, the provider can’t determine how to apply updates (for example, when applying updates from a stored procedure or multi-table join). Clientdatasets and provider components generate events that let you handle these situations. See “Intervening as updates are applied” below for details.

Tip If the provider is on a stateless application server, you may want to communicate with it about persistent state information before or after you apply updates.

TClientDataSet receives a BeforeApplyUpdates event before the updates are sent, which lets you send persistent state information to the server. After the updates are applied (but before the reconcile process), TClientDataSet receives an AfterApplyUpdates event where you can respond to any persistent state information returned by the application server.

 

Intervening as updates are applied

When a clientdataset applies its updates, the provider determines how to handle writing the insertions, deletions, and modifications to the database server or source dataset. When you use TClientDataSet with an external provider component, you can use the properties and events of that provider to influence the way updates are applied. These are described in “Responding to client update requests” on page 30-8.

When the provider is internal, however, as it is for any clientdataset associated with a data access mechanism, you can’t set its properties or provide event handlers. As a result, the clientdataset publishes one property and two events that let you influence how the internal provider applies updates.

  • UpdateMode controls what fields are used to locate records in the SQL statements the provider generates for applying updates. UpdateMode is identical to the provider’s UpdateMode property. For information on the provider’s UpdateMode property, see “Influencing how updates are applied” on page 30-10.
  • OnGetTableName lets you supply the provider with the name of the database table to which it should apply updates. This lets the provider generate the SQL statements for updates when it can’t identify the database table from the stored procedure or query specified by CommandText. For example, if the query executes a multi-table join that only requires updates to a single table, supplying an OnGetTableName event handler allows the internal provider to correctly apply updates.
    An OnGetTableName event handler has three parameters: the internal provider component, the internal dataset that fetched the data from the server, and a parameter to return the table name to use in the generated SQL.
  • BeforeUpdateRecord occurs for every record in the delta packet. This event lets you make any last-minute changes before the record is inserted, deleted, or modified. It also provides a way for you to execute your own SQL statements to apply the update in cases where the provider can’t generate correct SQL (for example, for multi-table joins where multiple tables must be updated).
    A BeforeUpdateRecord event handler has five parameters: the internal provider component, the internal dataset that fetched the data from the server, a delta packet that is positioned on the record that is about to be updated, an indication of whether the update is an insertion, deletion, or modification, and a parameter that returns whether the event handler performed the update. The use of these is illustrated in the following event handler. For simplicity, the example assumes the SQL statements are available as global variables that only need field values:

procedure TForm1.SimpleDataSet1BeforeUpdateRecord(Sender: TObject;
  SourceDS: TDataSet; DeltaDS: TCustomClientDataSet; UpdateKind: TUpdateKind;
  var Applied: Boolean);
var
  SQL: string;
  Connection: TSQLConnection;
begin
  Connection := (SourceDS as TSimpleDataSet).Connection;
  case UpdateKind of
    ukModify:
      begin
        { 1st dataset: update Fields[1], use Fields[0] in where clause }
        SQL := Format(UpdateStmt1, [DeltaDS.Fields[1].NewValue, DeltaDS.Fields[0].OldValue]);
        Connection.Execute(SQL, nil, nil);
        { 2nd dataset: update Fields[2], use Fields[3] in where clause }
        SQL := Format(UpdateStmt2, [DeltaDS.Fields[2].NewValue, DeltaDS.Fields[3].OldValue]);
        Connection.Execute(SQL, nil, nil);
      end;
    ukDelete:
      begin
        { 1st dataset: use Fields[0] in where clause }
        SQL := Format(DeleteStmt1, [DeltaDS.Fields[0].OldValue]);
        Connection.Execute(SQL, nil, nil);
        { 2nd dataset: use Fields[3] in where clause }
        SQL := Format(DeleteStmt2, [DeltaDS.Fields[3].OldValue]);
        Connection.Execute(SQL, nil, nil);
      end;
    ukInsert:
      begin
        { 1st dataset: values in Fields[0] and Fields[1] }
        SQL := Format(InsertStmt1, [DeltaDS.Fields[0].NewValue, DeltaDS.Fields[1].NewValue]);
        Connection.Execute(SQL, nil, nil);
        { 2nd dataset: values in Fields[2] and Fields[3] }
        SQL := Format(InsertStmt2, [DeltaDS.Fields[2].NewValue, DeltaDS.Fields[3].NewValue]);
        Connection.Execute(SQL, nil, nil);
      end;
  end;
  Applied := True;
end;

 

Reconciling update errors

There are two events that let you handle errors that occur during the update process:

  • During the update process, the internal provider generates an OnUpdateError event every time it encounters an update that it can’t handle. If you correct the problem in an OnUpdateError event handler, then the error does not count toward the maximum number of errors passed to the ApplyUpdates method. This event only occurs for clientdatasets that use an internal provider. If you are using TClientDataSet, you can use the provider component’s OnUpdateError event instead.
  • After the entire update operation is finished, the clientdataset generates an OnReconcileError event for every record that the provider could not apply to the database server.

You should always code an OnReconcileError or OnUpdateError event handler, even if only to discard the records returned that could not be applied. The event handlers for these two events work the same way. They include the following parameters:

  • DataSet: A clientdataset that contains the updated record which couldn’t be applied. You can use this dataset’s methods to get information about the problem record and to edit the record in order to correct any problems. In particular, you will want to use the CurValue, OldValue, and NewValue properties of the fields in the current record to determine the cause of the update problem. However, you must not call any clientdataset methods that change the current record in your event handler.
  • E: An object that represents the problem that occurred. You can use this exception to extract an error message or to determine the cause of the update error.
  • UpdateKind: The type of update that generated the error. UpdateKind can be ukModify (the problem occurred updating an existing record that was modified), ukInsert (the problem occurred inserting a new record), or ukDelete (the problem occurred deleting an existing record).
  • Action: A var parameter that indicates what action to take when the event handler exits. In your event handler, you set this parameter to one of the following:

– Skip this record, leaving it in the change log. (rrSkip or raSkip)

– Stop the entire reconcile operation. (rrAbort or raAbort)

– Merge the modification that failed into the corresponding record from the server. (rrMerge or raMerge) This only works if the server record does not include any changes to fields modified in the clientdataset’s record.

– Replace the current update in the change log with the value of the record in the event handler, which has presumably been corrected. (rrApply or raCorrect)

– Ignore the error completely. (rrIgnore) This possibility only exists in the OnUpdateError event handler, and is intended for the case where the event handler applies the update back to the database server. The updated record is removed from the change log and merged into Data, as if the provider had applied the update.

– Back out the changes for this record on the clientdataset, reverting to the originally provided values. (raCancel) This possibility only exists in the OnReconcileError event handler.

– Update the current record value to match the record on the server. (raRefresh) This possibility only exists in the OnReconcileError event handler.

The following code shows an OnReconcileError event handler that uses the reconcile error dialog from the RecError unit which ships in the objrepos directory. (To use this dialog, add RecError to your uses clause.)

procedure TForm1.ClientDataSetReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;

Using a clientdataset with a provider

A clientdataset uses a provider to supply it with data and apply updates when

  • It caches updates from a database server or another dataset.
  • It represents the data in an XML document.
  • It stores the data in the client portion of a multi-tiered application.

For any clientdataset other than TClientDataSet, this provider is internal, and so not directly accessible by the application. With TClientDataSet, the provider is an external component that links the clientdataset to an external source of data.

An external provider component can reside in the same application as the clientdataset, or it can be part of a separate application running on another system. For more information about provider components, see Chapter 30, “Using provider components.” For more information about applications where the provider is in a separate application on another system, see Chapter 31, “Creating multi-tiered applications.”

When using an (internal or external) provider, the clientdataset always caches any updates. For information on how this works, see “Using a clientdataset to cache updates” on page 29-16.

The following topics describe additional properties and methods of the clientdataset that enable it to work with a provider.

 

Specifying a provider

Unlike the clientdatasets that are associated with a data access mechanism, TClientDataSet has no internal provider component to package data or apply updates. If you want it to represent data from a source dataset or XML document, therefore, you must associate the clientdataset with an external provider component.

The way you associate TClientDataSet with a provider depends on whether the provider is in the same application as the clientdataset or on a remote application server running on another system.

  • If the provider is in the same application as the clientdataset, you can associate it with a provider by choosing a provider from the drop-down list for the ProviderName property in the Object Inspector. This works as long as the provider has the same Owner as the clientdataset. (The clientdataset and the provider have the same Owner if they are placed in the same form or data module.) To use a local provider that has a different Owner, you must form the association at runtime using the clientdataset’s SetProvider method.
    If you think you may eventually scale up to a remote provider, or if you want to make calls directly to the IAppServer interface, you can also set the RemoteServer property to a TLocalConnection component. If you use TLocalConnection, the TLocalConnection instance manages the list of all providers that are local to the application, and handles the clientdataset’s IAppServer calls. If you do not use TLocalConnection, the application creates a hidden object that handles the IAppServer calls from the clientdataset.
  • If the provider is on a remote application server, then, in addition to the ProviderName property, you need to specify a component that connects the clientdataset to the application server. There are two properties that can handle this task: RemoteServer, which specifies the name of a connection component from which to get a list of providers, or ConnectionBroker, which specifies a centralized broker that provides an additional level of indirection between the clientdataset and the connection component. The connection component and, if used, the connection broker, reside in the same data module as the clientdataset. The connection component establishes and maintains a connection to an application server, sometimes called a “data broker”. For more information, see “The structure of the client application” on page 31-4.
    At design time, after you specify RemoteServer or ConnectionBroker, you can select a provider from the drop-down list for the ProviderName property in the Object Inspector. This list includes both local providers (in the same form or data module) and remote providers that can be accessed through the connection component.

Note If the connection component is an instance of TDCOMConnection, the application server must be registered on the client machine.

At runtime, you can switch among available providers (both local and remote) by setting ProviderName in code.
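
As a rough sketch of these runtime alternatives (DataSetProvider1, DCOMConnection1, and the provider name 'ServerCustomers' are assumed names):

// Local provider with the same Owner: assign its name
ClientDataSet1.ProviderName := 'DataSetProvider1';

// Local provider with a different Owner: form the association in code
ClientDataSet1.SetProvider(DataSetProvider1);

// Remote provider: point at a connection component, then pick the provider by name
ClientDataSet1.RemoteServer := DCOMConnection1;
ClientDataSet1.ProviderName := 'ServerCustomers';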

 

Requesting data from the source dataset or document

Clientdatasets can control how they fetch their data packets from a provider. By default, they retrieve all records from the source dataset. This is true whether the source dataset and provider are internal components (as with TBDEClientDataSet, TSimpleDataSet, and TIBClientDataSet), or separate components that supply the data for TClientDataSet.

You can change how the clientdataset fetches records using the PacketRecords and FetchOnDemand properties.

 

Incremental fetching

By changing the PacketRecords property, you can specify that the clientdataset fetches data in smaller chunks. PacketRecords specifies either how many records to fetch at a time, or the type of records to return. By default, PacketRecords is set to -1, which means that all available records are fetched at once, either when the clientdataset is first opened, or when the application explicitly calls GetNextPacket. When PacketRecords is -1, then after the clientdataset first fetches data, it never needs to fetch more data because it already has all available records.

To fetch records in small batches, set PacketRecords to the number of records to fetch.

For example, the following statement sets the size of each data packet to ten records:

ClientDataSet1.PacketRecords := 10;

This process of fetching records in batches is called “incremental fetching”. Clientdatasets use incremental fetching when PacketRecords is greater than zero.

To fetch each batch of records, the clientdataset calls GetNextPacket. Newly fetched packets are appended to the end of the data already in the clientdataset.

GetNextPacket returns the number of records it fetches. If the return value is the same as PacketRecords, the end of available records was not encountered. If the return value is greater than 0 but less than PacketRecords, the last record was reached during the fetch operation. If GetNextPacket returns 0, then there are no more records to fetch.
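
A sketch of an explicit fetch loop under these rules (ClientDataSet1 is assumed):

ClientDataSet1.PacketRecords := 10;
ClientDataSet1.Open;  // fetches the first packet of ten records

// Keep fetching until a short (or empty) packet signals the end of the data
while ClientDataSet1.GetNextPacket = ClientDataSet1.PacketRecords do
  ;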

Warning Incremental fetching does not work if you are fetching data from a remote provider on a stateless application server. See “Supporting state information in remote data modules” on page 31-19 for information on how to use incremental fetching with stateless remote data modules.

Note You can also use PacketRecords to fetch metadata information about the source dataset. To retrieve metadata information, set PacketRecords to 0.

 

Fetch-on-demand

Automatic fetching of records is controlled by the FetchOnDemand property. When FetchOnDemand is True (the default), the clientdataset automatically fetches records as needed. To prevent automatic fetching of records, set FetchOnDemand to False.

When FetchOnDemand is False, the application must explicitly call GetNextPacket to fetch records.

For example, applications that need to represent extremely large read-only datasets can turn off FetchOnDemand to ensure that the clientdatasets do not try to load more data than can fit into memory. Between fetches, the clientdataset frees its cache using the EmptyDataSet method. This approach, however, does not work well when the client must post updates to the server.

The provider controls whether the records in data packets include BLOB data and nested detail datasets. If the provider excludes this information from records, the FetchOnDemand property causes the clientdataset to automatically fetch BLOB data and detail datasets on an as-needed basis. If FetchOnDemand is False, and the provider does not include BLOB data and detail datasets with records, you must explicitly call the FetchBlobs or FetchDetails method to retrieve this information.
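
For example, a hedged sketch for a clientdataset whose provider sends BLOB data and details separately:

ClientDataSet1.FetchOnDemand := False;
ClientDataSet1.Open;

// Later, when the current record's BLOB data or nested details are actually needed:
ClientDataSet1.FetchBlobs;
ClientDataSet1.FetchDetails;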

 

Getting parameters from the source dataset

There are two circumstances when the clientdataset needs to fetch parameter values:

  • The application needs the value of output parameters on a stored procedure.
  • The application wants to initialize the input parameters of a query or stored procedure to the current values on the source dataset.

Clientdatasets store parameter values in their Params property. These values are refreshed with any output parameters when the clientdataset fetches data from the source dataset. However, there may be times when a TClientDataSet component in a client application needs output parameters when it is not fetching data.

To fetch output parameters when not fetching records, or to initialize input parameters, the clientdataset can request parameter values from the source dataset by calling the FetchParams method. The parameters are returned in a data packet from the provider and assigned to the clientdataset’s Params property.

At design time, the Params property can be initialized by right-clicking the clientdataset and choosing Fetch Params.

Note There is never a need to call FetchParams when the clientdataset uses an internal provider and source dataset, because the Params property always reflects the parameters of the internal source dataset. With TClientDataSet, the FetchParams method (or the Fetch Params command) only works if the clientdataset is connected to a provider whose associated dataset can supply parameters. For example, if the source dataset is a table type dataset, there are no parameters to fetch.

If the provider is on a separate system as part of a stateless application server, you can’t use FetchParams to retrieve output parameters. In a stateless application server, other clients can change and rerun the query or stored procedure, changing output parameters before the call to FetchParams. To retrieve output parameters from a stateless application server, use the Execute method. If the provider is associated with a query or stored procedure, Execute tells the provider to execute the query or stored procedure and return any output parameters. These returned parameters are then used to automatically update the Params property.
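
A brief sketch of both cases; the parameter name 'TotalOut' and the Variant variable OutValue are illustrative:

// Stateful provider: pull current parameter values from the source dataset
ClientDataSet1.FetchParams;
OutValue := ClientDataSet1.Params.ParamByName('TotalOut').Value;

// Stateless application server: execute and let the returned parameters update Params
ClientDataSet1.Execute;
OutValue := ClientDataSet1.Params.ParamByName('TotalOut').Value;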

 

Passing parameters to the source dataset

Clientdatasets can pass parameters to the source dataset to specify what data they want included in the data packets the source dataset sends. These parameters can specify

  • Input parameter values for a query or stored procedure that is run on the application server
  • Field values that limit the records sent in data packets

You can specify parameter values that your clientdataset sends to the source dataset at design time or at runtime. At design time, select the clientdataset and double-click the Params property in the Object Inspector. This brings up the collection editor, where you can add, delete, or rearrange parameters. By selecting a parameter in the collection editor, you can use the Object Inspector to edit the properties of that parameter.

At runtime, use the CreateParam method of the Params property to add parameters to your clientdataset. CreateParam returns a parameter object, given a specified name, parameter type, and data type. You can then use the properties of that parameter object to assign a value to the parameter.

For example, the following code adds an input parameter named CustNo with a value of 605:

with ClientDataSet1.Params.CreateParam(ftInteger, 'CustNo', ptInput) do
  AsInteger := 605;

If the clientdataset is not active, you can send the parameters to the application server and retrieve a data packet that reflects those parameter values simply by setting the Active property to True.

 

Sending query or stored procedure parameters

When the clientdataset’s CommandType property is ctQuery or ctStoredProc, or, if the clientdataset is a TClientDataSet instance, when the associated provider represents the results of a query or stored procedure, you can use the Params property to specify parameter values. When the clientdataset requests data from the source dataset or uses its Execute method to run a query or stored procedure that does not return a dataset, it passes these parameter values along with the request for data or the execute command. When the provider receives these parameter values, it assigns them to its associated dataset. It then instructs the dataset to execute its query or stored procedure using these parameter values, and, if the clientdataset requested data, begins providing data, starting with the first record in the result set.

Note Parameter names should match the names of the corresponding parameters on the source dataset.

 

Limiting records with parameters

If the clientdataset is

  • a TClientDataSet instance whose associated provider represents a TTable or TSQLTable component
  • a TSimpleDataSet or a TBDEClientDataSet instance whose CommandType property is ctTable

then it can use the Params property to limit the records that it caches in memory. Each parameter represents a field value that must be matched before a record can be included in the clientdataset’s data. This works much like a filter, except that with a filter, the records are still cached in memory, but unavailable.

Each parameter name must match the name of a field. When using TClientDataSet, these are the names of fields in the TTable or TSQLTable component associated with the provider. When using TSimpleDataSet or TBDEClientDataSet, these are the names of fields in the table on the database server. The data in the clientdataset then includes only those records whose values on the corresponding fields match the values assigned to the parameters.

For example, consider an application that displays the orders for a single customer.

When the user identifies the customer, the clientdataset sets its Params property to include a single parameter named CustID (or whatever the corresponding field in the source table is called) whose value identifies the customer whose orders should be displayed. When the clientdataset requests data from the source dataset, it passes this parameter value. The provider then sends only the records for the identified customer. This is more efficient than letting the provider send all the order records to the client application and then filtering them in the clientdataset.
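A minimal sketch of this pattern, assuming a TSimpleDataSet whose CommandType is ctTable and a table with a CustID field (both assumptions):

SimpleDataSet1.Close;
SimpleDataSet1.Params.Clear;
with SimpleDataSet1.Params.CreateParam(ftInteger, 'CustID', ptInput) do
  AsInteger := SelectedCustomerID;   // assumed variable identifying the chosen customer
SimpleDataSet1.Open;                 // only records whose CustID matches are cached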

 

Handling constraints from the server

When a database server defines constraints on what data is valid, it is useful if the clientdataset knows about them. That way, the clientdataset can ensure that user edits never violate those server constraints. As a result, such violations are never passed to the database server where they would be rejected. This means fewer updates generate error conditions during the updating process.

Regardless of the source of data, you can duplicate such server constraints by explicitly adding them to the clientdataset. This process is described in “Specifying custom constraints” on page 29-7.

It is more convenient, however, if the server constraints are automatically included in data packets. Then you need not explicitly specify default expressions and constraints, and the clientdataset changes the values it enforces when the server constraints change. By default, this is exactly what happens: if the source dataset is aware of server constraints, the provider automatically includes them in data packets and the clientdataset enforces them when the user posts edits to the change log.

Note Only datasets that use the BDE can import constraints from the server. This means that server constraints are only included in data packets when using TBDEClientDataSet or TClientDataSet with a provider that represents a BDE-based dataset. For more information on how to import server constraints and how to prevent a provider from including them in data packets, see “Handling server constraints” on page 30-13.

Note For more information on working with the constraints once they have been imported, see “Using server constraints” on page 25-23.

While importing server constraints and expressions is an extremely valuable feature that helps an application preserve data integrity, there may be times when it needs to disable constraints on a temporary basis. For example, if a server constraint is based on the current maximum value of a field, but the clientdataset uses incremental fetching, the current maximum value for a field in the clientdataset may differ from the maximum value on the database server, and constraints may be invoked differently. In another case, if a clientdataset applies a filter to records when constraints are enabled, the filter may interfere in unintended ways with constraint conditions. In each of these cases, an application may disable constraint-checking.

To disable constraints temporarily, call the DisableConstraints method. Each time DisableConstraints is called, a reference count is incremented. While the reference count is greater than zero, constraints are not enforced on the clientdataset.

To reenable constraints for the clientdataset, call the dataset’s EnableConstraints method. Each call to EnableConstraints decrements the reference count. When the reference count is zero, constraints are enabled again.

Tip Always call DisableConstraints and EnableConstraints in paired blocks to ensure that constraints are enabled when you intend them to be.
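For example, a try/finally block guarantees that EnableConstraints is called even if an exception is raised (the component and field names are assumptions):

ClientDataSet1.DisableConstraints;
try
  ClientDataSet1.Edit;
  ClientDataSet1.FieldByName('Amount').AsCurrency := 0;   // assumed field; posted without constraint checks
  ClientDataSet1.Post;
finally
  ClientDataSet1.EnableConstraints;   // decrements the reference count; constraints resume at zero
end;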

 

Refreshing records

Clientdatasets work with an in-memory snapshot of the data from the source dataset. If the source dataset represents server data, then as time elapses other users may modify that data. The data in the clientdataset becomes a less accurate picture of the underlying data.

Like any other dataset, clientdatasets have a Refresh method that updates their records to match the current values on the server. However, calling Refresh only works if there are no edits in the change log. Calling Refresh when there are unapplied edits results in an exception.

Clientdatasets can also update the data while leaving the change log intact. To do this, call the RefreshRecord method. Unlike the Refresh method, RefreshRecord updates only the current record in the clientdataset. RefreshRecord changes the record value originally obtained from the provider but leaves any changes in the change log.

Warning It is not always appropriate to call RefreshRecord. If the user’s edits conflict with changes made to the underlying dataset by other users, calling RefreshRecord masks this conflict. When the clientdataset applies its updates, no reconcile error occurs and the application can’t resolve the conflict.

In order to avoid masking update errors, you may want to check that there are no pending updates before calling RefreshRecord. For example, the following AfterScroll event handler refreshes the current record every time the user moves to a new record (ensuring the most up-to-date value), but only when it is safe to do so:

procedure TForm1.ClientDataSet1AfterScroll(DataSet: TDataSet);
begin
  if ClientDataSet1.UpdateStatus = usUnmodified then
    ClientDataSet1.RefreshRecord;
end;

 

Communicating with providers using custom events

Clientdatasets communicate with a provider component through a special interface called IAppServer. If the provider is local, IAppServer is the interface to an automatically-generated object that handles all communication between the clientdataset and its provider. If the provider is remote, IAppServer is the interface to a remote data module on the application server, or (in the case of a SOAP server) an interface generated by the connection component.

TClientDataSet provides many opportunities for customizing the communication that uses the IAppServer interface. Before and after every IAppServer method call that is directed at the clientdataset’s provider, TClientDataSet receives special events that allow it to communicate arbitrary information with its provider. These events are matched with similar events on the provider. Thus for example, when the clientdataset calls its ApplyUpdates method, the following events occur:

1 The clientdataset receives a BeforeApplyUpdates event, where it specifies arbitrary custom information in an OleVariant called OwnerData.

2 The provider receives a BeforeApplyUpdates event, where it can respond to the OwnerData from the clientdataset and update the value of OwnerData to new information.

3 The provider goes through its normal process of assembling a data packet (including all the accompanying events).

4 The provider receives an AfterApplyUpdates event, where it can respond to the current value of OwnerData and update it to a value for the clientdataset.

5 The clientdataset receives an AfterApplyUpdates event, where it can respond to the returned value of OwnerData.

Every other IAppServer method call is accompanied by a similar set of BeforeXXX and AfterXXX events that let you customize the communication between clientdataset and provider.
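As a hedged sketch of the OwnerData round trip on the client side (the handler bodies and the version token are assumptions, not part of the documentation):

procedure TForm1.ClientDataSet1BeforeApplyUpdates(Sender: TObject; var OwnerData: OleVariant);
begin
  OwnerData := 'ClientVersion=1.2';   // assumed application-defined value sent to the provider
end;

procedure TForm1.ClientDataSet1AfterApplyUpdates(Sender: TObject; var OwnerData: OleVariant);
begin
  if not VarIsNull(OwnerData) then
    ShowMessage(OwnerData);           // assumed: display whatever the provider sent back
end;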

In addition, the clientdataset has a special method, DataRequest, whose only purpose is to allow application-specific communication with the provider. When the clientdataset calls DataRequest, it passes an OleVariant as a parameter that can contain any information you want. This, in turn, generates an OnDataRequest event on the provider, where you can respond in any application-defined way and return a value to the clientdataset.
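A hedged sketch of this round trip (the request string and handler logic are assumptions):

// Client side: ask the provider an application-specific question
procedure TForm1.ShowLastRefresh;
var
  Answer: OleVariant;
begin
  Answer := ClientDataSet1.DataRequest('LastRefreshTime');   // any OleVariant you like
  ShowMessage(VarToStr(Answer));
end;

// Provider side: OnDataRequest returns its answer as the function result
function TDataModule1.DataSetProvider1DataRequest(Sender: TObject; Input: OleVariant): OleVariant;
begin
  if Input = 'LastRefreshTime' then
    Result := DateTimeToStr(Now)
  else
    Result := Null;
end;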

 

Overriding the source dataset

The clientdatasets that are associated with a particular data access mechanism use the CommandText and CommandType properties to specify the data they represent.

When using TClientDataSet, however, the data is specified by the source dataset, not the clientdataset. Typically, this source dataset has a property that specifies an SQL statement to generate the data or the name of a database table or stored procedure.

If the provider allows, TClientDataSet can override the property on the source dataset that indicates what data it represents. That is, if the provider permits, the clientdataset’s CommandText property replaces the property on the provider’s dataset that specifies what data it represents. This allows TClientDataSet to specify dynamically what data it wants to see.

By default, external provider components do not let clientdatasets use the CommandText value in this way. To allow TClientDataSet to use its CommandText property, you must add poAllowCommandText to the Options property of the provider. Otherwise, the value of CommandText is ignored.

Note Never remove poAllowCommandText from the Options property of TBDEClientDataSet or TIBClientDataSet. The clientdataset’s Options property is forwarded to the internal provider, so removing poAllowCommandText prevents the clientdataset from specifying what data to access.

The clientdataset sends its CommandText string to the provider at two times:

  • When the clientdataset first opens. After it has retrieved the first data packet from the provider, the clientdataset does not send CommandText when fetching subsequent data packets.
  • When the clientdataset sends an Execute command to the provider.

To send an SQL command or to change a table or stored procedure name at any other time, you must explicitly use the IAppServer interface that is available as the AppServer property. This property represents the interface through which the clientdataset communicates with its provider.
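For example (component names and the SQL statement are assumptions), a sketch that lets a TClientDataSet override its provider’s query at runtime:

DataSetProvider1.Options := DataSetProvider1.Options + [poAllowCommandText];   // usually set at design time
ClientDataSet1.Close;
ClientDataSet1.CommandText := 'SELECT * FROM Customers WHERE Country = ''UK''';
ClientDataSet1.Open;   // the CommandText accompanies the first data request and overrides the provider's dataset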

 

Using a clientdataset with file-based data

Clientdatasets can work with dedicated files on disk as well as server data. This allows them to be used in file-based database applications and “briefcase model” applications. The special files that clientdatasets use for their data are called MyBase.

Tip All clientdatasets are appropriate for a briefcase model application, but for a pure MyBase application (one that does not use a provider), it is preferable to use TClientDataSet, because it involves less overhead.

In a pure MyBase application, the client application cannot get table definitions and data from the server, and there is no server to which it can apply updates. Instead, the clientdataset must independently

  • Define and create tables
  • Load saved data
  • Merge edits into its data
  • Save data

 

Creating a new dataset

There are three ways to define and create clientdatasets that do not represent server data:

  • You can define and create a new clientdataset using persistent fields or field and index definitions. This follows the same scheme as creating any table type dataset.
    See “Creating and deleting tables” on page 24-38 for details.
  • You can copy an existing dataset (at design or runtime). See “Copying data from another dataset” on page 29-14 for more information about copying existing datasets.
  • You can create a clientdataset from an arbitrary XML document. See “Converting XML documents into data packets” on page 32-6 for details.

Once the dataset is created, you can save it to a file. From then on, you do not need to recreate the table, only load it from the file you saved. When beginning a file-based database application, you may want to first create and save empty files for your datasets before writing the application itself. This way, you start with the metadata for your clientdataset already defined, making it easier to set up the user interface.
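As a minimal sketch (the field layout and file name are assumptions), defining, creating, and saving an empty MyBase table at runtime might look like this:

ClientDataSet1.FieldDefs.Clear;
ClientDataSet1.FieldDefs.Add('ID', ftInteger, 0, True);
ClientDataSet1.FieldDefs.Add('Name', ftString, 40);
ClientDataSet1.CreateDataSet;                   // builds the in-memory table from the definitions
ClientDataSet1.SaveToFile('customers.cds');     // save the empty table so it can simply be loaded later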

 

Loading data from a file or stream

To load data from a file, call a clientdataset’s LoadFromFile method. LoadFromFile takes one parameter, a string that specifies the file from which to read data. The file name can be a fully qualified path name, if appropriate. If you always load the clientdataset’s data from the same file, you can use the FileName property instead. If FileName names an existing file, the data is automatically loaded when the clientdataset is opened.

To load data from a stream, call the clientdataset’s LoadFromStream method.

LoadFromStream takes one parameter, a stream object that supplies the data.

The data loaded by LoadFromFile (LoadFromStream) must have previously been saved in a clientdataset’s data format by this or another clientdataset using the SaveToFile (SaveToStream) method, or generated from an XML document. For more information about saving data to a file or stream, see “Saving data to a file or stream” on page 29-35. For information about creating clientdataset data from an XML document, see Chapter 32, “Using XML in database applications.”

When you call LoadFromFile or LoadFromStream, all data in the file is read into the Data property. Any edits that were in the change log when the data was saved are read into the Delta property. However, the only indexes that are read from the file are those that were created with the dataset.
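For example (the file name is an assumption), the same data can be loaded from a file or from any stream:

procedure TForm1.LoadCustomerData;
var
  Stream: TFileStream;
begin
  // Load directly from a file...
  ClientDataSet1.LoadFromFile('customers.cds');
  // ...or, equivalently, from a TStream descendant:
  Stream := TFileStream.Create('customers.cds', fmOpenRead or fmShareDenyWrite);
  try
    ClientDataSet1.LoadFromStream(Stream);
  finally
    Stream.Free;
  end;
end;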

 

Merging changes into data

When you edit the data in a clientdataset, all edits to the data exist only in an in-memory change log. This log can be maintained separately from the data itself, although it is completely transparent to objects that use the clientdataset. That is, controls that navigate the clientdataset or display its data see a view of the data that includes the changes. If you do not want to back out of changes, however, you should merge the change log into the data of the clientdataset by calling the MergeChangeLog method. MergeChangeLog overwrites records in Data with any changed field values in the change log.

After MergeChangeLog executes, Data contains a mix of existing data and any changes that were in the change log. This mix becomes the new Data baseline against which further changes can be made. MergeChangeLog clears the change log of all records and resets the ChangeCount property to 0.

Warning Do not call MergeChangeLog for clientdatasets that use a provider. In this case, call ApplyUpdates to write changes to the database. For more information, see “Applying updates” on page 29-20.

Note It is also possible to merge changes into the data of a separate clientdataset if that dataset originally provided the data in the Data property. To do this, you must use a dataset provider. For an example of how to do this, see “Assigning data directly” on page 29-14.

If you do not want to use the extended undo capabilities of the change log, you can set the clientdataset’s LogChanges property to False. When LogChanges is False, edits are automatically merged when you post records and there is no need to call MergeChangeLog.
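A short sketch of the merge step in a file-based workflow (component and file names are assumptions):

if ClientDataSet1.ChangeCount > 0 then
  ClientDataSet1.MergeChangeLog;              // fold the change log into Data; ChangeCount becomes 0
ClientDataSet1.SaveToFile('customers.cds');   // persist the merged data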

 

Saving data to a file or stream

Even when you have merged changes into the data of a clientdataset, this data still exists only in memory. While it persists if you close the clientdataset and reopen it in your application, it will disappear when your application shuts down. To make the data permanent, it must be written to disk. Write changes to disk using the SaveToFile method.

SaveToFile takes one parameter, a string that specifies the file into which to write data.

The file name can be a fully qualified path name, if appropriate. If the file already exists, its current contents are completely overwritten.

Note SaveToFile does not preserve any indexes you added to the clientdataset at runtime, only indexes that were added when you created the clientdataset.

If you always save the data to the same file, you can use the FileName property instead. If FileName is set, the data is automatically saved to the named file when the clientdataset is closed.

You can also save data to a stream, using the SaveToStream method. SaveToStream takes one parameter, a stream object that receives the data.

Note If you save a clientdataset while there are still edits in the change log, these are not merged with the data. When you reload the data, using the LoadFromFile or LoadFromStream method, the change log will still contain the unmerged edits. This is important for applications that support the briefcase model, where those changes will eventually have to be applied to a provider component on the application server.
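A hedged sketch of that briefcase flow (the file name is an assumption; the clientdataset is assumed to be linked to a provider):

// On the road: unmerged edits are saved inside the file together with the data
ClientDataSet1.SaveToFile('orders.cds');

// Back online: reload the file and apply the preserved change log through the provider
ClientDataSet1.LoadFromFile('orders.cds');
ClientDataSet1.ApplyUpdates(0);   // 0 = do not tolerate any update errors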

 

Using a simple dataset

TSimpleDataSet is a special type of clientdataset designed for simple two-tiered applications. Like a unidirectional dataset, it can use an SQL connection component to connect to a database server and specify an SQL statement to execute on that server. Like other clientdatasets, it buffers data in memory to allow full navigation and editing support.

TSimpleDataSet works the same way as a generic clientdataset (TClientDataSet) that is linked to a unidirectional dataset by a dataset provider. In fact, TSimpleDataSet has its own, internal provider, which it uses to communicate with an internally created unidirectional dataset.

 

Using a simple dataset can simplify the process of two-tiered application development because you don’t need to work with as many components.

 

When to use TSimpleDataSet

TSimpleDataSet is intended for use in simple two-tiered database applications and briefcase model applications. It provides an easy-to-set-up component for linking to the database server, fetching data, caching updates, and applying them back to the server. It can be used in most two-tiered applications.

There are times, however, when it is more appropriate to use TClientDataSet:

  • If you are not using data from a database server (for example, if you are using a dedicated file on disk), then TClientDataSet has the advantage of less overhead.
  • Only TClientDataSet can be used in a multi-tiered database application. Thus, if you are writing a multi-tiered application, or if you intend to scale up to a multitiered application eventually, you should use TClientDataSet with an external provider and source dataset.
  • Because the source dataset is internal to the simple dataset component, you can’t link two source datasets in a master/detail relationship to obtain nested detail sets. (You can, however, link two simple datasets into a master/detail relationship.)
  • The simple dataset does not surface any of the events or properties that occur on its internal dataset provider. However, in most cases, these events are used in multi-tiered applications, and are not needed for two-tiered applications.

 

Setting up a simple dataset

Setting up a simple dataset requires two essential steps. Set up:

1 The connection information.

2 The dataset information.

The following steps describe setting up a simple dataset in more detail.

To use TSimpleDataSet:

1 Place the TSimpleDataSet component in a data module or on a form. Set its Name property to a unique value appropriate to your application.

2 Identify the database server that contains the data. There are two ways to do this:

  • If you have a named connection in the connections file, expand the Connection property and specify the ConnectionName value.
  • For greater control over connection properties, transaction support, login support, and the ability to use a single connection for more than one dataset, use a separate TSQLConnection component instead. Specify the TSQLConnection component as the value of the Connection property. For details on TSQLConnection, see Chapter 23, “Connecting to databases”.

 

3 To indicate what data you want to fetch from the server, expand the DataSet property and set the appropriate values. There are three ways to fetch data from the server:

  • Set CommandType to ctQuery and set CommandText to an SQL statement you want to execute on the server. This statement is typically a SELECT statement.
    Supply the values for any parameters using the Params property.
  • Set CommandType to ctStoredProc and set CommandText to the name of the stored procedure you want to execute. Supply the values for any input parameters using the Params property.
  • Set CommandType to ctTable and set CommandText to the name of the database table whose records you want to use.

4 If the data is to be used with visual data controls, add a data source component to the form or data module, and set its DataSet property to the TSimpleDataSet object.

The data source component forwards the data in the clientdataset’s in-memory cache to data-aware components for display. Connect data-aware components to the data source using their DataSource and DataField properties.

5 Activate the dataset by setting the Active property to True (or, at runtime, calling the Open method).

6 If you executed a stored procedure, use the Params property to retrieve any output parameters.

7 When the user has edited the data in the simple dataset, you can apply those edits back to the database server by calling the ApplyUpdates method. Resolve any update errors in an OnReconcileError event handler. For more information on applying updates, see “Updating records” on page 29-20.
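A hedged runtime equivalent of these steps (the connection, SQL statement, and parameter names are assumptions):

SimpleDataSet1.Connection := SQLConnection1;               // an already-configured TSQLConnection
SimpleDataSet1.DataSet.CommandType := ctQuery;
SimpleDataSet1.DataSet.CommandText := 'SELECT * FROM Orders WHERE CustNo = :CustNo';
SimpleDataSet1.Params.ParamByName('CustNo').AsInteger := 1510;
SimpleDataSet1.Open;
// ... the user edits the data through data-aware controls ...
if SimpleDataSet1.ApplyUpdates(0) > 0 then
  ShowMessage('Some updates could not be applied');        // resolve details in OnReconcileError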

 

ref:http://www.insoftweb.com/cds.htm

RO39 – Updating multiple ClientDataSets in a single transaction

When migrating from a two-tier to a three-tier architecture, all business logic should move to the server. One tricky part is removing transaction control from the client: the client should no longer start or commit transactions, and in fact it should not know anything about them.
This article shows how to create a method on a RemObjects DataSnap service that receives a collection of ClientDataSet Delta packets and applies them to the database in a single transaction, so that the whole update can be rolled back if an exception occurs.
This method only applies to non-nested ClientDataSets (no master/detail relationship), because DataSnap already applies nested (master/detail) datasets within their own single transaction by default.
You need a RemObjects DataSnap server; describing how to create one is beyond the scope of this article, so we will work with an existing service.
First, define the required data types in the server’s RODL. Use the RO Service Builder to create a DeltaToApply structure with two members: ProviderName (String) and Delta (Binary):

Then create an array type, DeltaArray, whose element type is DeltaToApply:
Next, create a method on the server that receives this array. I will add it to the same service that contains all the providers, because we want to use those providers to apply the updates:

The method is implemented as follows:

procedure TNewService.ApplyUpdates(var ADeltaArray: DeltaArray);
var
  I: Integer;
  Provider: TDataSetProvider;
  ErrorCount: Integer;
begin
  // Put your code to start the transaction here
  try
    for I := 0 to ADeltaArray.Count - 1 do
    begin
      Provider := FindProvider(ADeltaArray[I].ProviderName);
      if not Assigned(Provider) then
        raise Exception.Create('Provider not found: ' + ADeltaArray[I].ProviderName);

      Provider.ApplyUpdates(VariantFromBinary(ADeltaArray[I].Delta), 0, ErrorCount);
      if ErrorCount > 0 then
        // Put your code to handle errors here
        raise Exception.Create('Errors during ApplyUpdates: ' + Provider.Name);
    end;
    // Put your code to commit the transaction here
  except
    // Put your code to roll back the transaction here
    raise;
  end;
end;
I also created a helper function that finds a provider by name:
function TNewService.FindProvider(ProviderName: string): TDataSetProvider;
var
  Component: TObject;
begin
  Component := FindComponent(ProviderName);
  if Component is TDataSetProvider then
    Result := Component as TDataSetProvider
  else
    Result := nil;
end;
With the server side complete, on the client you need a routine that collects every ClientDataSet.Delta into the array and sends it to the server:
procedure TClientForm.ApplyUpdates(ClientDataSets: array of TClientDataSet);
var
  Deltas: DeltaArray;
  Delta: DeltaToApply;
  I: Integer;
begin
  Deltas := DeltaArray.Create;
  try
    for I := Low(ClientDataSets) to High(ClientDataSets) do
    begin
      if ClientDataSets[I].ChangeCount = 0 then
        Continue;

      Delta := Deltas.Add;
      Delta.ProviderName := ClientDataSets[I].ProviderName;
      Delta.Delta := BinaryFromVariant(ClientDataSets[I].Delta);
    end;
    CoNewService.Create(ROMessage, ROChannel).ApplyUpdates(Deltas);
  finally
    Deltas.Free;
  end;
end;
VariantFromBinary and BinaryFromVariant are declared in the uROBinaryHelpers unit.
Finally, on the client you simply call this method, passing all the ClientDataSets whose changes should be applied in the same transaction:
ApplyUpdates([ClientDataSet1, ClientDataSet2, ClientDataSet3]);
I hope this helps. Please let me know if you find a problem with the code or see room for improvement so that I can correct it.

ref: http://www.databasesql.info/article/5848310304/