Sixty Plus FireDAC Sample Apps With Source To Make Accessing Databases Easy


FireDAC is a unique set of Universal Data Access Components for developing multi-device database applications for Delphi and C++Builder. With its powerful common architecture, FireDAC enables native high-speed direct access from Delphi to InterBase, SQLite, MySQL, SQL Server, Oracle, PostgreSQL, IBM DB2, SQL Anywhere, Access, Firebird, Informix and more.

Based on 10 years of experience writing native drivers for database back-ends, FireDAC was built as a powerful access layer that supports all the features needed to build real-world high-load applications. FireDAC provides a common API for accessing different database back-ends, without giving up access to unique database-specific features, or compromising on performance.

It enables the developer to concentrate on developing the application, not on the complexities of database interaction. It provides a modern feature-rich set of components to address all major RDBMS environments. It uses one set of components to address all supported RDBMS types. It reduces the Total Cost of Ownership by providing:

  • Less initial cost, compared to separate expensive libraries.
  • Less time required to learn and become familiar with each vendor's unique library.
  • Straight-ahead application design and development when targeting different databases.

Here are the samples:

FireDAC.AutoincFields Sample

FireDAC.ConnectionDefs Sample

FireDAC.DAptLayerCommands Sample

FireDAC.DAptLayerGettingStarted Sample

FireDAC.DBX2FDMigration Demo Sample

FireDAC.DLLSharing Sample

FireDAC.Firebird Sample

FireDAC.GetFieldNames Sample

FireDAC.IFDPhysCommand.Async Sample

FireDAC.IFDPhysCommand.Batch Sample

FireDAC.IFDPhysCommand.EscapeFunctions Sample

FireDAC.IFDPhysCommand.FetchTables Sample

FireDAC.IFDPhysCommand.Macros Sample

FireDAC.IFDPhysConnection.CreateConnection Sample

FireDAC.IFDPhysConnection.Pooling Sample

FireDAC.IFDPhysConnection.Transactions Sample

FireDAC.InfoReport Sample

FireDAC.InterBase AdminUtility Sample

FireDAC.InterBase Arrays Sample

FireDAC.InterBase Sample

FireDAC.MappingColumns Sample

FireDAC.MongoDB Explore Sample

FireDAC.MongoDB.ListView Sample

FireDAC.MoniLayerClients Sample

FireDAC.MSAccess Sample

FireDAC.MSSQL Sample

FireDAC.MySQL Sample

FireDAC.Oracle CQN Sample

FireDAC.OraclStoredProc Sample

FireDAC.Pooling Sample

FireDAC.PostgreSQL Arrays Sample

FireDAC.PostgreSQL Ranges Sample

FireDAC.SchemaAdapterMemTable Sample

FireDAC.SQLite Desktop Sample

FireDAC.SQLite Encryption Sample

FireDAC.SQLite Sample

FireDAC.SQLiteIniFile Demo Sample

FireDAC.TFDBatchMove Sample

FireDAC.TFDEventAlerter Sample

FireDAC.TFDGUIxLoginDialog Sample

FireDAC.TFDLocalSQL InMemDB Sample

FireDAC.TFDLocalSQL MegaFMX Sample

FireDAC.TFDLocalSQL.xDBMS Sample

FireDAC.TFDMemTable.CachedUpdates Sample

FireDAC.TFDMemTable.CloneCursor Sample

FireDAC.TFDMemTable.IncFetchingMSSQL Sample

FireDAC.TFDMemTable.Main Sample

FireDAC.TFDMemTable.MasterDetail Sample

FireDAC.TFDMemTable.NestedDataSet Sample

FireDAC.TFDQuery.Aggregates Sample

FireDAC.TFDQuery.ArrayDML Sample

FireDAC.TFDQuery.Async Sample

FireDAC.TFDQuery.Batch Sample

FireDAC.TFDQuery.BatchErrorHandling Sample

FireDAC.TFDQuery.Blobs Sample

FireDAC.TFDQuery.BlobStreams Sample

FireDAC.TFDQuery.CentralizedCachedUpdates Sample

FireDAC.TFDQuery.Filter Sample

FireDAC.TFDQuery.Indices Sample

FireDAC.TFDQuery.Macros Sample

FireDAC.TFDQuery.MasterDetail Sample

FireDAC.TFDQuery.OnUpdateRecord Sample

FireDAC.TFDTableAdapter.Main Sample

FireDAC.Transactions Sample

FireDAC.VSE Demo Sample

ref: http://community.embarcadero.com/blogs?view=entry&id=8761

TDocVariant custom variant type

With revision 1.18 of the framework, we just introduced two new custom types of variants:

  • TDocVariant kind of variant;
  • TBSONVariant kind of variant.

The second custom type (which handles MongoDB-specific extensions – like ObjectID or other specific types like dates or binary) will be presented later, when dealing with MongoDB support in mORMot, together with the BSON kind of content. BSON / MongoDB support is implemented in the SynMongoDB.pas unit.

We will now focus on TDocVariant itself, which is a generic container of JSON-like objects or arrays.
This custom variant type is implemented in SynCommons.pas unit, so is ready to be used everywhere in your code, even without any link to the mORMot ORM kernel, or MongoDB.

TDocVariant documents

TDocVariant implements a custom variant type which can be used to store any JSON/BSON document-based content, i.e. either:

  • Name/value pairs, for object-oriented documents;
  • An array of values (including nested documents), for array-oriented documents;
  • Any combination of the two, by nesting TDocVariant instances.

Here are the main features of this custom variant type:

  • DOM approach of any object or array documents;
  • Perfect storage for dynamic value-objects content, with a schema-less approach (as you may be used to in scripting languages like Python or JavaScript);
  • Allow nested documents, with no depth limitation but the available memory;
  • Assignment can be either per-value (default, safest but slower when containing a lot of nested data), or per-reference (immediate reference-counted assignment);
  • Very fast JSON serialization / un-serialization with support of MongoDB-like extended syntax;
  • Access to properties in code, via late-binding (including almost no speed penalty due to our VCL hack as already detailed);
  • Direct access to the internal variant names and values arrays from code, by trans-typing into a TDocVariantData record;
  • Instance life-time is managed by the compiler (like any other variant type), without the need to use interfaces or explicit try..finally blocks;
  • Optimized to use as little memory and CPU resource as possible (in contrast to most other libraries, it does not allocate one class instance per node, but rely on pre-allocated arrays);
  • Opened to extension of any content storage – for instance, it will perfectly integrate with BSON serialization and custom MongoDB types (ObjectID, RegEx…), to be used in conjunction with MongoDB servers;
  • Perfectly integrated with our Dynamic array wrapper and its JSON serialization as with the record serialization;
  • Designed to work with our mORMot ORM: any TSQLRecord instance containing such variant custom types as published properties will be recognized by the ORM core, and work as expected with any database back-end (storing the content as JSON in a TEXT column);
  • Designed to work with our mORMot SOA: any interface-based service is able to consume or publish such kind of content, as variant kind of parameters;
  • Fully integrated with the Delphi IDE: any variant instance will be displayed as JSON in the IDE debugger, making it very convenient to work with.

To create instances of such variant, you can use some easy-to-remember functions:

  • _Obj() _ObjFast() global functions to create a variant object document;
  • _Arr() _ArrFast() global functions to create a variant array document;
  • _Json() _JsonFast() _JsonFmt() _JsonFastFmt() global functions to create any variant object or array document from JSON, supplied either with standard or MongoDB-extended syntax.

Variant object documents

With _Obj(), an object variant instance will be initialized with data supplied two by two, as Name,Value pairs, e.g.

var V1,V2: variant; // stored as any variant
 ...
  V1 := _Obj(['name','John','year',1972]);
  V2 := _Obj(['name','John','doc',_Obj(['one',1,'two',2.5])]); // with nested objects

Then you can convert those objects into JSON, by two means:

  • Using the VariantSaveJson() function, which directly returns the UTF-8 (RawUTF8) content;
  • Or by trans-typing the variant instance into a string (this will be slower, but is possible).
 writeln(VariantSaveJson(V1)); // explicit conversion into RawUTF8
 writeln(V1);                  // implicit conversion from variant into string
 // both commands will write '{"name":"John","year":1972}'
 writeln(VariantSaveJson(V2)); // explicit conversion into RawUTF8
 writeln(V2);                  // implicit conversion from variant into string
 // both commands will write '{"name":"John","doc":{"one":1,"two":2.5}}'

As a consequence, the Delphi IDE debugger is able to display such variant values as their JSON representation.
That is, V1 will be displayed as '{"name":"John","year":1972}' in the IDE debugger Watch List window, or in the Evaluate/Modify (F7) expression tool.
This is pretty convenient, and much more user friendly than any class-based solution (which requires the installation of a specific design-time package in the IDE).

You can access the object properties via late-binding, with any depth of nested objects, in your code:

 writeln('name=',V1.name,' year=',V1.year);
 // will write 'name=John year=1972'
 writeln('name=',V2.name,' doc.one=',V2.doc.one,' doc.two=',V2.doc.two);
 // will write 'name=John doc.one=1 doc.two=2.5'
 V1.name := 'Mark';       // overwrite a property value
 writeln(V1.name);        // will write 'Mark'
 V1.age := 12;            // add a property to the object
 writeln(V1.age);         // will write '12'

Note that the property names will be evaluated at runtime only, not at compile time.
For instance, if you write V1.nome instead of V1.name, there will be no error at compilation, but an EDocVariant exception will be raised at execution (unless you set the dvoReturnNullForUnknownProperty option to _Obj/_Arr/_Json/_JsonFmt, which will return a null variant for such undefined properties).
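As a minimal sketch (assuming the option is passed at creation time, as described above):

```pascal
var V3: variant;
 ...
  // create the object with the option tolerating unknown properties
  V3 := _Obj(['name','John'],[dvoReturnNullForUnknownProperty]);
  if VarIsNull(V3.nome) then  // misspelled on purpose: no EDocVariant raised
    writeln('nome returned null instead of raising an exception');
```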

In addition to the property names, some pseudo-methods are available for such object variant instances:

  writeln(V1._Count); // will write 3 i.e. the number of name/value pairs in the object document
  writeln(V1._Kind);  // will write 1 i.e. ord(sdkObject)
  for i := 0 to V2._Count-1 do
    writeln(V2.Name(i),'=',V2.Value(i));
  // will write in the console:
  //  name=John
  //  doc={"one":1,"two":2.5}
  if V1.Exists('year') then
    writeln(V1.year);

You may also trans-type your variant instance into a TDocVariantData record, and access directly to its internals.
For instance:

 TDocVariantData(V1).AddValue('comment','Nice guy');
 with TDocVariantData(V1) do            // direct transtyping
   if Kind=sdkObject then               // direct access to the TDocVariantData.Kind field
     for i := 0 to Count-1 do           // direct access to the Count: integer field
       writeln(Names[i],'=',Values[i]); // direct access to the internal storage arrays

By definition, trans-typing via a TDocVariantData record is slightly faster than using late-binding.
But you must ensure that the variant instance is really a TDocVariant kind of data before trans-typing, e.g. by calling DocVariantType.IsOfType(aVariant).
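A defensive sketch of such a check could look like this:

```pascal
  // only trans-type after checking the actual custom variant type
  if DocVariantType.IsOfType(V1) then
    with TDocVariantData(V1) do
      writeln('document with ',Count,' value(s)')
  else
    writeln('V1 is not a TDocVariant');
```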

Variant array documents

With _Arr(), an array variant instance will be initialized with data supplied as a list of Value1,Value2,…, e.g.

var V1,V2: variant; // stored as any variant
 ...
  V1 := _Arr(['John','Mark','Luke']);
  V2 := _Obj(['name','John','array',_Arr(['one','two',2.5])]); // as nested array

Then you can convert those objects into JSON, by two means:

  • Using the VariantSaveJson() function, which directly returns the UTF-8 (RawUTF8) content;
  • Or by trans-typing the variant instance into a string (this will be slower, but is possible).
 writeln(VariantSaveJson(V1));
 writeln(V1);  // implicit conversion from variant into string
 // both commands will write '["John","Mark","Luke"]'
 writeln(VariantSaveJson(V2));
 writeln(V2);  // implicit conversion from variant into string
 // both commands will write '{"name":"John","array":["one","two",2.5]}'

As with any object document, the Delphi IDE debugger is able to display such array variant values as their JSON representation.

Late-binding is also available, with a special set of pseudo-methods:

  writeln(V1._Count); // will write 3 i.e. the number of items in the array document
  writeln(V1._Kind);  // will write 2 i.e. ord(sdkArray)
  for i := 0 to V1._Count-1 do
    writeln(V1.Value(i),':',V1._(i));
  // will write in the console:
  //  John John
  //  Mark Mark
  //  Luke Luke
  if V1.Exists('John') then
    writeln('John found in array');

Of course, trans-typing into a TDocVariantData record is possible, and will be slightly faster than using late-binding.

Create variant object or array documents from JSON

With _Json() or _JsonFmt(), either an object or array variant instance will be initialized with data supplied as JSON, e.g.

var V1,V2,V3,V4: variant; // stored as any variant
 ...
  V1 := _Json('{"name":"john","year":1982}'); // strict JSON syntax
  V2 := _Json('{name:"john",year:1982}');     // with MongoDB extended syntax for names
  V3 := _Json('{"name":?,"year":?}',[],['john',1982]);
  V4 := _JsonFmt('{%:?,%:?}',['name','year'],['john',1982]);
  writeln(VariantSaveJSON(V1));
  writeln(VariantSaveJSON(V2));
  writeln(VariantSaveJSON(V3));
  writeln(VariantSaveJSON(V4));
  // all four commands will write '{"name":"john","year":1982}'

Of course, you can nest objects or arrays as parameters to the _JsonFmt() function.

The supplied JSON can be either in strict JSON syntax, or with the MongoDB extended syntax, i.e. with unquoted property names.
Forgetting about the quotes around the property names can be pretty convenient, and also less error-prone, when typing JSON directly in Delphi code.

Note that TDocVariant implements an open interface for adding any custom extensions to JSON: for instance, if the SynMongoDB.pas unit is defined in your application, you will be able to create any MongoDB specific types in your JSON, like ObjectID(), new Date() or even /regex/option.

As with any object or array document, the Delphi IDE debugger is able to display such variant values as their JSON representation.

Per-value or per-reference

By default, the variant instance created by _Obj() _Arr() _Json() _JsonFmt() will use a copy-by-value pattern.
It means that when an instance is assigned to another variable, a new variant document will be created, and all internal values will be copied. Just like a record type.

This will imply that if you modify any item of the copied variable, it won’t change the original variable:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972]);
 V2 := V1;                // create a new variant, and copy all values
 V2.name := 'James';      // modifies V2.name, but not V1.name
 writeln(V1.name,' and ',V2.name);
 // will write 'John and James'

As a result, your code will be perfectly safe to work with, since V1 and V2 will be uncoupled.

But one drawback is that passing such a value may be pretty slow, for instance, when you nest objects:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972]);
 V2 := _Arr(['John','Mark','Luke']);
 V1.names := V2; // here the whole V2 array will be re-allocated into V1.names

Such a behavior could be pretty time and resource consuming, in case of a huge document.

All _Obj() _Arr() _Json() _JsonFmt() functions have an optional TDocVariantOptions parameter, which allows you to change the behavior of the created TDocVariant instance, especially setting dvoValueCopiedByReference.

This particular option will set the copy-by-reference pattern:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972],[dvoValueCopiedByReference]);
 V2 := V1;             // creates a reference to the V1 instance
 V2.name := 'James';   // modifies V2.name, but also V1.name
 writeln(V1.name,' and ',V2.name);
 // will write 'James and James'

You may think this behavior is somewhat weird for a variant type. But if you forget about per-value objects and consider those TDocVariant types as a Delphi class instance (which is a per-reference type), without the need of having a fixed schema nor handling the memory manually, it will probably start to make sense.

Note that a set of global functions have been defined, which allow direct creation of documents with per-reference instance lifetime, named _ObjFast() _ArrFast() _JsonFast() _JsonFastFmt().
Those are just wrappers around the corresponding _Obj() _Arr() _Json() _JsonFmt() functions, with the following JSON_OPTIONS[true] constant passed as options parameter:

const
  /// some convenient TDocVariant options
  // - JSON_OPTIONS[false] is _Json() and _JsonFmt() functions default
  // - JSON_OPTIONS[true] are used by _JsonFast() and _JsonFastFmt() functions
  JSON_OPTIONS: array[Boolean] of TDocVariantOptions = (
    [dvoReturnNullForUnknownProperty],
    [dvoReturnNullForUnknownProperty,dvoValueCopiedByReference]);

When working with complex documents, e.g. with BSON / MongoDB documents, almost all content will be created in “fast” per-reference mode.
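For instance, the following two assignments should create equivalent by-reference documents (a sketch, assuming _Json() accepts a TDocVariantOptions value as its last parameter, matching the constant above):

```pascal
  V1 := _JsonFast('{name:"John"}');                  // the "fast" wrapper
  V2 := _Json('{name:"John"}',JSON_OPTIONS[true]);   // explicit options
```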

Advanced TDocVariant process

Object or array document creation options

As stated above, a TDocVariantOptions parameter enables you to define the behavior of a TDocVariant custom type for a given instance.
Please refer to the documentation of this set of options to find out the available settings. Some are related to the memory model, others to the case-sensitivity of the property names, others to the behavior expected in case of a non-existing property, and so on…

Note that this setting is local to the given variant instance.

In fact, TDocVariant does not force you to stick to one memory model nor a set of global options, but you can use the best pattern depending on your exact process.
You can even mix the options – i.e. include some objects as properties in an object created with other options – but in this case, the initial options of the nested object will remain. So you had better use this feature with caution.

You can use the _Unique() global function to force a variant instance to have an unique set of options, and all nested documents to become by-value, or _UniqueFast() for all nested documents to become by-reference.

// assuming V1='{"name":"James","year":1972}' created by-reference
  _Unique(V1);             // change options of V1 to be by-value
  V2 := V1;                // creates a full copy of the V1 instance
  V2.name := 'John';       // modifies V2.name, but not V1.name
  writeln(V1.name);        // write 'James'
  writeln(V2.name);        // write 'John'
  V1 := _Arr(['root',V2]); // created as by-value by default, as V2 was
  writeln(V1._Count);      // write 2
  _UniqueFast(V1);         // change options of V1 to be by-reference
  V2 := V1;
  V1._(1).name := 'Jim';
  writeln(V1);
  writeln(V2);
  // both commands will write '["root",{"name":"Jim","year":1972}]'

The easiest is to stick to one set of options in your code, i.e.:

  • Either using the _*() global functions if your business code does send some TDocVariant instances to any other part of your logic, for further storage: in this case, the by-value pattern does make sense;
  • Or using the _*Fast() global functions if the TDocVariant instances are local to a small part of your code, e.g. used as schema-less Data Transfer Objects (DTO).

In all cases, be aware that, as with any class type, the const, var and out specifiers of method parameters do not apply to the TDocVariant value, but to its reference.
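A short sketch of this pitfall, with a per-reference document passed as a plain by-value parameter:

```pascal
procedure ChangeName(aDoc: variant);
begin
  // only the variant reference was copied into aDoc, so the
  // shared underlying document itself is modified here
  aDoc.name := 'Jim';
end;
 ...
  V1 := _ObjFast(['name','John']);
  ChangeName(V1);
  writeln(V1.name); // will write 'Jim'
```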

Integration with other mORMot units

In fact, whenever a schema-less storage structure is needed, you may use a TDocVariant instance instead of a strongly-typed class or record:

  • Client-Server ORM will support TDocVariant in any of the TSQLRecord variant published properties;
  • Interface-based services will support TDocVariant as variant parameters of any method, which makes them perfect DTOs;
  • Since JSON support is implemented with any TDocVariant value from the ground up, it makes a perfect fit for working with AJAX clients, in a script-like approach;
  • If you use our SynMongoDB.pas unit to access a MongoDB server, TDocVariant will be the native storage to create or access BSON arrays or objects documents;
  • Cross-cutting features (like logging or record / dynamic array enhancements) will also benefit from this TDocVariant custom type.

We are pretty convinced that once you start playing with TDocVariant, you won't be able to live without it any more.
It introduces the full power of late-binding and schema-less patterns to your application, which can be pretty useful for prototyping or in Agile development.
If you need this feature, you do not have to embed a scripting engine like Python or JavaScript to get it.

ref: http://blog.synopse.info/post/2014/02/25/TDocVariant-custom-variant-type

Connecting to legacy databases and publishing a RESTful interface to it

Most systems, especially in the DDD area, need to integrate with a legacy system. In our case, we had to communicate with a Firebird 1.5 database.

The first step was to define our Data Transfer Objects:

type
  TLegacyID = type RAWUTF8;
  TLegacyAccount = class(TSynPersistent)
  private
    fLegacyID: TLegacyID;
    fDateModified: TDateTime;
    fUserCreated: RAWUTF8;
    fDateCreated: TDateTime;
    fName: RAWUTF8;
    fisActive: Boolean;
  public
    constructor Create; overload; override;
    constructor Create( aID : TLegacyID; aName : RAWUTF8; aIsActive : Boolean;
      aDateCreated, aDateModified : TDateTime; aUserCreated : RAWUTF8 ); overload;
  published
    property ID : TLegacyID read fLegacyID write fLegacyID;
    property Name : RAWUTF8 read fName write fName;
    property isActive : Boolean read fisActive write fisActive;
    property DateCreated : TDateTime read fDateCreated write fDateCreated;
    property DateModified : TDateTime read fDateModified write fDateModified;
    property UserCreated : RAWUTF8 read fUserCreated write fUserCreated;
  end;
  TLegacySupplier = class(TLegacyAccount)
  end;

Here we declare a unique type to identify our legacy IDs (strings). We also do a basic map of our data layout, with a customized constructor for ease of creation. One can add other methods later to handle copies and assignments.
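The customized constructor could be implemented as follows (a sketch, with the fields matching the declaration above):

```pascal
constructor TLegacyAccount.Create(aID: TLegacyID; aName: RawUTF8;
  aIsActive: Boolean; aDateCreated, aDateModified: TDateTime;
  aUserCreated: RawUTF8);
begin
  inherited Create;
  fLegacyID := aID;
  fName := aName;
  fisActive := aIsActive;
  fDateCreated := aDateCreated;
  fDateModified := aDateModified;
  fUserCreated := aUserCreated;
end;
```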

The next step was to define our service:

type
  ILegacyStockQuery = interface(IInvokable)
    ['{2BDC9F78-B9C2-4621-A557-F87F02AC0581}']
    function GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
  end;

We'll publish a service as LegacyStockQuery with a single method, GetSupplier. This method will return a JSON-encoded representation of our TLegacySupplier, ready to be consumed by a client.

To implement it:

type
  TLegacyStockQuery = class(TInterfacedObject, ILegacyStockQuery)
  private
    fDbConnection : TSQLDBConnectionProperties;
  public
    constructor Create( const aProps: TSQLDBConnectionProperties ); overload;
    function GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
    property DbConnection : TSQLDBConnectionProperties read fDbConnection write fDbConnection;
  end;

We keep a copy of our database connection properties local to our instance to ensure thread safety.

{ TLegacyStockQuery }
constructor TLegacyStockQuery.Create(const aProps: TSQLDBConnectionProperties);
begin
  fDbConnection := aProps;
  inherited Create;
end;
function TLegacyStockQuery.GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
var
  Res : ISQLDBRows;
begin
  Result := cqrsNotFound;
  Res := fDbConnection.Execute( 'select * from SUPPLIERS where SUPPLIER_ID=? ', [aID] );
  if Res.Step then begin
    Result := cqrsSuccess;
    Supplier.ID := Res['SUPPLIER_ID'];
    Supplier.Name := Res['SUPPLIER_NAME'];
    Supplier.isActive := Res['ACTIVE_FLAG'] = 'Y';
    Supplier.DateCreated := Res['DATE_CREATED'];
    Supplier.DateModified := Res['DATE_MODIFIED'];
    Supplier.UserCreated := Res['USER_CREATED'];
  end;
end;

Execute the query against the legacy database and populate the DTO. Using the ISQLDBRows interface means less object maintenance and cleaner code.

To kick the whole thing off we have:

procedure StartServer( aDbURI : RawUTF8 );
var
  aDbConnection : TSQLDBConnectionProperties;
  aStockServer  : TSQLRestServerFullMemory;
  aHTTPServer   : TSQLHttpServer;
begin
  aDbConnection := TSQLDBZEOSConnectionProperties.Create( aDbURI, '', '', '' );
  aStockServer  := TSQLRestServerFullMemory.Create([]);
  try
    aStockServer.ServiceDefine( TLegacyStockQuery.Create( aDbConnection ), [ILegacyStockQuery]);
    aHTTPServer := TSQLHttpServer.Create( DEFAULT_HTTP_PORT, [aStockServer] );
    try
      aHttpServer.AccessControlAllowOrigin := '*'; // allow cross-site AJAX queries
      writeln('Background server is running.'#10);
      writeln('Cross-Platform wrappers are available at ',
          DEFAULT_HTTP_PORT ,'/', DEFAULT_SERVER_ROOT );
      write('Press [Enter] to close the server.');
      readln;
    finally
      aHTTPServer.Free;
    end;
  finally
    aStockServer.Free;
  end;
end;

It would be better to use dependency injection here, but we'll get into that later.

When invoking it, we used a Zeos DB URI, like this one:

Tips!

Remember to register your Object array types with

TJSONSerializer.RegisterObjArrayForJSON(TypeInfo(<ArrayType>),<ObjType>);

e.g.

TJSONSerializer.RegisterObjArrayForJSON(TypeInfo(TLegacySupplierObjArray),TLegacySupplier);
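For the registration above to compile, the array type itself is simply a dynamic array of the class (a sketch; the TLegacySupplierObjArray name is taken from the registration call):

```pascal
type
  TLegacySupplierObjArray = array of TLegacySupplier;
```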

ref: https://tamingthemormot.wordpress.com/2015/07/14/connecting-to-legacy-databases/

External database speed improvements

Some major speed improvements have been made to our SynDB* units, and how they are used within the mORMot persistence layer.
It results in an amazing speed increase, in some cases.

Here are some of the optimizations that took place in the source code trunk:

Overall, I observed from x2 to x10 performance boost with simple Add() operations, using ODBC, OleDB and direct Oracle access, when compared to previous benchmarks (which were already impressive).
BATCH mode performance is less impacted, since it by-passed some of those limitations, but even in this operation mode, there are some benefits (especially with ODBC and OleDB).

Here are some results, directly generated by the supplied “15 – External DB performance” sample.

Insertion speed

Here we test insertion of some records, for most of our supplied engines.
We did the test with UNIK conditional undefined, i.e. with no index of the Name field.

A Core i7 notebook has been used, as hardware platform.
Oracle 11g database is remotely accessed over a corporate network, so latency and bandwidth is not optimal.
The hard drive is an SSD this time – so we will see how it affects the results.

             SQLite3     SQLite3    SQLite3  TObjectList TObjectList SQLite3         SQLite3        SQLite3    Oracle  ODBC    Jet
             (file full) (file off) (mem)    (static)    (virtual)   (ext file full) (ext file off) (ext mem)          Oracle
Direct           501         911     81870     281848      288234         548             952         72697      518     512    4159
Batch            523         891    102614     409836      417257         557             868         91617    77155     509    4441
Trans          90388       95884     96612     279579      286188       99681           70950        105674     1024    1432    4920
Batch Trans   110869      117376    125190     412813      398851      127424          126627        121368    62601    1019    4926

Performance gain is impressive, especially for “ODBC Oracle” and also “OleDB Jet”.
Since Jet/MSAccess is a local engine, it is faster than Oracle for one record retrieval – it does not suffer from the network latency. But it is faster than SQLite3 at insertion, due to a multi-threaded design – which is perhaps less ACID, nor as proven.
Note that this hardware configuration run on a SSD, so even “SQLite3 (file full)” configuration is very much boosted – about 3 times faster.
Our direct Oracle access classes achieve more than 77,000 inserts per second in BATCH mode (using the Array Binding feature).
Direct TObjectList in-memory engine reaches amazing speed, when used in BATCH mode – more than 400,000 inserts per second!

Read speed

             SQLite3     SQLite3    SQLite3  TObjectList TObjectList SQLite3         SQLite3        SQLite3    Oracle  ODBC    Jet
             (file full) (file off) (mem)    (static)    (virtual)   (ext file full) (ext file off) (ext mem)          Oracle
By one         26777       26933    122016     298400      301041      135413          133571        131877     1289    1156    2413
All Virtual   429331      427423    447227     715717      241289      232385          167420        202839    63473   35029  127772
All Direct    443773      433463    427094     711035      700574      432189          334179        340136    90184   39485  186164

Reading speed was also increased. ODBC results have the biggest improvement.
Server-side statement cache for Oracle makes individual reading of records 2 times faster. Wow.
The SQLite3 engine is still the more reactive SQL database here, when it comes to reading.
Of course, direct TObjectList engine is pretty fast – more than 700,000 records per second.

ref: http://blog.synopse.info/post/2013/01/28/External-database-speed-improvements

RESTful mORMot


Our Synopse mORMot Framework was designed in accordance with Fielding’s REST architectural style without using HTTP and without interacting with the World Wide Web.
Such Systems which follow REST principles are often referred to as “RESTful”.

Optionally, the Framework is able to serve standard HTTP/1.1 pages over the Internet (by using the mORMotHttpClient / mORMotHttpServer units and the TSQLHttpServer and TSQLHttpClient classes), in an embedded low resource and fast HTTP server.

The standard RESTful methods are implemented, i.e. GET/PUT/POST/DELETE.

The following methods were added to the standard REST definition, for locking individual records and for handling database transactions (which speed up database process):

  • LOCK to lock a member of the collection;
  • UNLOCK to unlock a member of the collection;
  • BEGIN to initiate a transaction;
  • END to commit a transaction;
  • ABORT to rollback a transaction.

The GET method has an optional pagination feature, compatible with the YUI DataSource Request Syntax for data pagination – see the TSQLRestServer.URI method and http://developer.yahoo.com/yui/datatable/#data . Of course, this breaks the “Every Resource is Identified by a Unique Identifier” RESTful principle – but it is much easier to work with, e.g. to implement paging or custom filtering.

From the Delphi code point of view, a RESTful Client-Server architecture is implemented by inheriting some common methods and properties from a main class.

Then a full set of classes inherit from this TSQLRest abstract parent, e.g. TSQLRestClient, TSQLRestClientURI and TSQLRestServer.
This TSQLRest class implements therefore a common ancestor for both Client and Server classes.

BLOB fields

BLOB fields are defined as TSQLRawBlob published properties in the classes definition – which is an alias to the RawByteString type (defined in SynCommons.pas for Delphi up to 2007, since it appeared only with Delphi 2009). But their content is not included in standard RESTful methods of the framework, to spare network bandwidth.

The RESTful protocol allows BLOB to be retrieved (GET) or saved (PUT) via a specific URL, like:

 ModelRoot/TableName/TableID/BlobFieldName

This is even better than the standard JSON encoding, which works well but converts BLOBs to/from hexadecimal values, therefore needing twice their normal size. By using such a dedicated URL, data can be transferred as full binary.

Some dedicated methods of the generic TSQLRest class handle BLOB fields: RetrieveBlob and UpdateBlob.
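A minimal usage sketch, assuming a TSQLSampleRecord class with a Data: TSQLRawBlob published property, and a connected Client: TSQLRestClientURI instance:

```pascal
var BlobData: TSQLRawBlob;
 ...
  // read the BLOB content of record #1, without fetching the other fields
  if Client.RetrieveBlob(TSQLSampleRecord,1,'Data',BlobData) then
    writeln('retrieved ',length(BlobData),' bytes');
  // then write the (possibly modified) content back
  Client.UpdateBlob(TSQLSampleRecord,1,'Data',BlobData);
```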

JSON representation

The “04 – HTTP Client-Server” sample application available in the framework source code tree can be used to show how the framework is AJAX-ready, and can be proudly compared to any other REST server (like CouchDB) also based on JSON.

First, deactivate the authentication by changing the parameter from true to false in Unit2.pas:

 DB := TSQLRestServerDB.Create(Model,ChangeFileExt(paramstr(0),'.db3'),
 false);

and by commenting the following line in Project04Client.dpr:

  Form1.Database := TSQLHttpClient.Create(Server,'8080',Form1.Model);
  // TSQLHttpClient(Form1.Database).SetUser('User','synopse');
  Application.Run;

Then you can use your browser to test the JSON content:

  • Start the Project04Server.exe program: the background HTTP server, together with its SQLite3 database engine;
  • Start any Project04Client.exe instances, and add/find any entry, to populate the database a little;
  • Close the Project04Client.exe programs, if you want;
  • Open your browser, and type into the address bar:
      http://localhost:8080/root
    
  • You’ll see an error message:
    TSQLHttpServer Server Error 400
    
  • Type into the address bar:
      http://localhost:8080/root/SampleRecord
    
  • You’ll see the result of all SampleRecord IDs, encoded as a JSON list, e.g.
     [{"ID":1},{"ID":2},{"ID":3},{"ID":4}]
    
  • Type into the address bar:
      http://localhost:8080/root/SampleRecord/1
    
  • You’ll see the content of the SampleRecord of ID=1, encoded as JSON, e.g.
    {"ID":1,"Time":"2010-02-08T11:07:09","Name":"AB","Question":"To be or not to be"}
    
  • Type into the address bar any other REST command, and the database will reply to your request…

You have got a full HTTP/SQLite3 RESTful JSON server in less than 400 KB. :)

Note that Internet Explorer or old versions of Firefox do not recognize the application/json; charset=UTF-8 content type to be viewed internally. This is a limitation of those programs, so the above requests will download the content as .json files, but this won’t prevent AJAX requests from working as expected.

Stateless ORM

Our framework is implementing REST as a stateless protocol, just as the HTTP/1.1 protocol it could use as its communication layer.

A stateless server is a server that treats each request as an independent transaction that is unrelated to any previous request.

At first, you could find it a bit disappointing from a classic Client-Server approach. In a stateless world, you are never sure that your Client data is up-to-date. The only place where the data is safe is the server. In the web world, it’s not confusing. But if you are coming from a rich Client background, this may concern you: you may be used to writing synchronization code to replicate all server changes to all its clients. This is not necessary in a stateless architecture any more.

The main rule of this architecture is to ensure that the Server is the only reference, and that the Client is able to retrieve any pending update from the Server side. That is, always modify a record’s content on the server side, then refresh the client to retrieve the modified value. Do not modify the client side directly, but always pass through the Server. The UI components of the framework follow these principles. Client-side modification could be performed, but must be made in a separate autonomous table/database. This will avoid any synchronization problem in case of concurrent client modification.

A stateless design is also pretty convenient when working with complex solutions.
Even Domain-Driven Design tends to restrain state to its smallest possible extent, since state introduces complexity.

ref: http://blog.synopse.info/post/2014/01/10/RESTful-mORMot