
Using external MinGW/VisualC++ sqlite3.dll - including benchmark


With the upcoming revision 1.18 of the framework, our SynSQLite3.pas unit is able to access the SQLite3 engine in two ways:

  • Either statically linked within the project executable;
  • Or from an external sqlite3.dll library file.

The SQLite3 APIs and constants are defined in SynSQLite3.pas, and are accessible via a TSQLite3Library class definition. It defines a global sqlite3 variable as such:

var
  sqlite3: TSQLite3Library;

To use the SQLite3 engine, an instance of TSQLite3Library class shall be assigned to this global variable. Then all mORMot's calls will be made through it, calling e.g. sqlite3.open() instead of sqlite3_open().

There are two implementation classes:

  Class                    Unit                   Purpose
  TSQLite3LibraryStatic    SynSQLite3Static.pas   Statically linked engine (.obj within the .exe)
  TSQLite3LibraryDynamic   SynSQLite3.pas         Instantiate an external sqlite3.dll instance

Referring to SynSQLite3Static.pas in the uses clause of your project is enough to link the .obj engine into your executable.

Warning - breaking change: before version 1.18 of the framework, linking of the static .obj was forced. Starting with 1.18, you must add a reference to SynSQLite3Static in your project's uses clause for it to work as expected.

In order to use an external sqlite3.dll library, you have to set the global sqlite3 variable as such:

 FreeAndNil(sqlite3); // release any previous instance (e.g. static)
 sqlite3 := TSQLite3LibraryDynamic.Create;

Of course, FreeAndNil(sqlite3) is not mandatory: it is only needed to avoid a memory leak if another SQLite3 engine instance was previously allocated (which may be the case if SynSQLite3Static is referred to somewhere in your project's units).
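
As an aside, the same late-binding idea can be sketched in a few lines of Python ctypes: resolve the entry points of whatever sqlite3 library the host system provides at runtime, which is conceptually what TSQLite3LibraryDynamic does with sqlite3.dll. The library lookup below is an assumption about the host system, hence the fallback branch.

```python
# Conceptual sketch: load an external sqlite3 library and bind one of its
# exported functions at runtime, instead of linking it statically.
import ctypes
import ctypes.util

libpath = ctypes.util.find_library('sqlite3')  # e.g. 'sqlite3.dll' on Windows
if libpath is not None:
    sqlite3lib = ctypes.CDLL(libpath)
    sqlite3lib.sqlite3_libversion.restype = ctypes.c_char_p
    print('loaded', libpath, '- version', sqlite3lib.sqlite3_libversion().decode())
else:
    print('no sqlite3 shared library found on this system')
```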

Here are some benchmarks, compiled with Delphi XE3, run in a 32 bit project, using either the static bcc-compiled engine, or an external sqlite3.dll, compiled via MinGW or Microsoft Visual C++.
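
To make the Direct / Trans / Batch Trans columns and the full/off/mem modes more concrete, here is a minimal Python sketch (using the standard sqlite3 module, not mORMot) of the two effects being measured: per-row commits versus a single wrapping transaction, and the synchronous pragma. The table name and row counts are illustrative only.

```python
# Illustrative only: shows why the "Trans" columns dwarf the "Direct" ones -
# each stand-alone INSERT pays a commit (and, on disk, an fsync) cost,
# while one surrounding transaction amortizes it over all rows.
import sqlite3
import time

def insert_rows(conn, n, one_transaction):
    cur = conn.cursor()
    cur.execute('CREATE TABLE IF NOT EXISTS baby (id INTEGER PRIMARY KEY, name TEXT)')
    start = time.perf_counter()
    if one_transaction:
        with conn:  # one BEGIN ... COMMIT around all the inserts ("Trans")
            cur.executemany('INSERT INTO baby (name) VALUES (?)',
                            [('n%d' % i,) for i in range(n)])
    else:
        for i in range(n):  # one commit per INSERT ("Direct")
            cur.execute('INSERT INTO baby (name) VALUES (?)', ('n%d' % i,))
            conn.commit()
    return n / (time.perf_counter() - start)

conn = sqlite3.connect(':memory:')         # "mem" mode: no file I/O at all
conn.execute('PRAGMA synchronous = OFF')   # "off" mode would skip fsync on disk
print('rows/s, one commit per row :', int(insert_rows(conn, 1000, False)))
print('rows/s, single transaction :', int(insert_rows(conn, 1000, True)))
```

On an on-disk database with synchronous = FULL, the per-row figure collapses to a few hundred rows per second, which is exactly the pattern of the Direct column in the tables.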

Static bcc-compiled .obj

First of all, let's benchmark the version statically linked via the SynSQLite3Static.pas unit.


Writing speed

                        Direct   Batch    Trans    Batch Trans
SQLite3 (file full)     477      389      97633    122865
SQLite3 (file off)      868      869      96827    125862
SQLite3 (mem)           84642    108624   104947   135105
TObjectList (static)    338478   575373   337336   572147
TObjectList (virtual)   338180   554446   331873   575837
SQLite3 (ext full)      486      496      10141    97011
SQLite3 (ext off)       799      303      105402   135109
SQLite3 (ext mem)       93893    129550   109027   152811


Reading speed

                        By one   All Virtual   All Direct
SQLite3 (file full)     26924    494559        500200
SQLite3 (file off)      27750    496919        502714
SQLite3 (mem)           124402   444404        495392
TObjectList (static)    332778   907605        910249
TObjectList (virtual)   331038   404891        905961
SQLite3 (ext full)      102707   261547        521322
SQLite3 (ext off)       131130   255806        513505
SQLite3 (ext mem)       135784   248780        502664

Good old Borland C++ Builder produces some efficient code here.
Those numbers are very good, when compared to the other two options.
Probably, using FastMM4 as memory manager, together with tuned compilation options, does make sense.

Official MinGW-compiled sqlite3.dll

Here we used the official sqlite3.dll library, as published on the http://sqlite.org web site, and compiled with the MinGW/GCC compiler.


Writing speed

                        Direct   Batch    Trans    Batch Trans
SQLite3 (file full)     418      503      86322    119420
SQLite3 (file off)      918      873      93196    127317
SQLite3 (mem)           83108    106951   99892    138003
TObjectList (static)    320204   573723   324696   547465
TObjectList (virtual)   323247   563697   324443   564716
SQLite3 (ext full)      501      410      100152   133679
SQLite3 (ext off)       913      438      102806   135545
SQLite3 (ext mem)       96028    122798   108363   150920


Reading speed

                        By one   All Virtual   All Direct
SQLite3 (file full)     26883    473529        438904
SQLite3 (file off)      27729    472188        451304
SQLite3 (mem)           116550   459432        457959
TObjectList (static)    318248   891265        905469
TObjectList (virtual)   327739   359040        892697
SQLite3 (ext full)      127346   180812        370288
SQLite3 (ext off)       127749   227759        438096
SQLite3 (ext mem)       129792   224386        436338


Visual C++ compiled sqlite3.dll

The Open Source wxsqlite project provides a sqlite3.dll library, compiled with Visual C++, and including RC4 and AES 128/256 encryption (stronger than the basic encryption implemented in SynSQLite3Static.pas), which is not available in the official library.

See http://sourceforge.net/projects/wxcode/files/Components/wxSQLite3 to download the corresponding source code, and compiled .dll.


Writing speed

                        Direct   Batch    Trans    Batch Trans
SQLite3 (file full)     470      498      93801    112170
SQLite3 (file off)      886      819      90298    132883
SQLite3 (mem)           86897    110287   105207   140896
TObjectList (static)    332005   596445   321357   570776
TObjectList (virtual)   327225   585000   329272   579240
SQLite3 (ext full)      459      503      91086    140599
SQLite3 (ext off)       501      519      110338   150394
SQLite3 (ext mem)       98112    133276   117346   158634


Reading speed

                        By one   All Virtual   All Direct
SQLite3 (file full)     28527    516689        521159
SQLite3 (file off)      28927    513769        519156
SQLite3 (mem)           127740   529100        523176
TObjectList (static)    335053   869262        879352
TObjectList (virtual)   334739   410374        885269
SQLite3 (ext full)      132594   258371        506277
SQLite3 (ext off)       138159   260892        507717
SQLite3 (ext mem)       139567   254919        516208

Under Windows, the Microsoft Visual C++ compiler gives very good results.
It is a bit faster than the other two, despite a somewhat less efficient virtual table process.

As a conclusion, our statically linked implementation sounds like the best overall approach: best speed for virtual tables (which are at the core of our ORM), and no dll hell.
No library to deploy and copy, everything is embedded in the project executable, ready to run as expected.

Using an external SQLite3 library is also the open door to easy cross-platform support of mORMot.
The first step will be to finish the 64 bit compatibility of the framework, using an external 64 bit sqlite3.dll, which is much easier to work with than linking to .obj in Delphi.

Feedback is welcome in our forum.


64 bit compatibility of mORMot units


I'm happy to announce that mORMot units are now compiling and working great in 64 bit mode, under Windows.
You need a Delphi XE2/XE3 compiler, of course!

ORM and services are now available in Win64, on both client and server sides.
Low-level x64 assembler stubs have been created, tested and optimized.
UI part is also available... that is grid display, reporting (with pdf export and display anti-aliasing), ribbon auto-generation, SynTaskDialog, i18n... the main SynFile demo just works great!

Overall impression is very positive, and speed is comparable to 32 bit version (only 10-15% slower).

The speed decrease seems to be mostly due to the doubled pointer size, and some less optimized parts of the official Delphi RTL.
But since the mORMot core uses its own set of functions (e.g. for JSON serialization, RTTI support, or interface calls and stubbing), we were able to release the whole 64 bit power of your hardware.

The Delphi 64 bit compiler sounds stable and efficient, even when working at low level with assembler stubs.
The generated code sounds more optimized than the one emitted by the Free Pascal Compiler - and the RTL is very close to the 32 bit mode.
Overall, the VCL conversion worked as easily as a simple re-build.
Embarcadero's people did a great job for VCL Win64 support, here!

SQlite3 works great in 64 bit mode.

You can find our own 3.7.16 version of the SQLite3 external library, to be used in 64 bit mode, in SQLite3-64.7z, since there is no official Win64 library released yet at http://sqlite.org
No problem so far, and pretty good performance.
Just a weird bug about the SQLITE_TRANSIENT constant, which should be pointer(integer(-1)) instead of pointer(-1) when working with virtual table columns - but nothing to care about in your user code, since the framework will handle it for you.
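
The bit patterns behind that quirk can be checked with a quick Python/ctypes sketch: -1 stored in a 32-bit value differs from -1 stored in a pointer-sized value on Win64, and SQLITE_TRANSIENT is precisely such a special -1 pointer constant.

```python
# -1 as a 32-bit value vs -1 as a 64-bit / pointer-sized value: two
# different bit patterns, which is the heart of the cast subtlety above.
import ctypes

print(hex(ctypes.c_uint32(-1).value))  # 0xffffffff
print(hex(ctypes.c_uint64(-1).value))  # 0xffffffffffffffff

# ctypes' pointer type behaves like the platform's pointer(-1):
transient = ctypes.c_void_p(-1)
print(transient.value == ctypes.c_uint64(-1).value or
      transient.value == ctypes.c_uint32(-1).value)  # True
```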

I suspect some parts of the official System.RTTI.pas unit as provided in XE2/XE3 are broken in Win64.
For instance, I think it does not handle a method returning a string.
Our mORMot.pas implementation has been tested with the same regression code as in 32 bit mode.

Your own tests and feedback are welcome!
Feedback and detailed test results are available in our forum!

x64 optimized asm of FillChar() and Move() for Win64


We have included x64 optimized asm versions of FillChar() and Move() for Win64 - for the corresponding compiler targets, i.e. Delphi XE2 and XE3.
They properly handle cache prefetch and use the appropriate SSE2 move instructions.

The System.pas unit of the Delphi RTL will be patched at startup, unless the NOX64PATCHRTL conditional is defined.
Therefore, the whole application may benefit from this optimized version.
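
The patching itself can be pictured as swapping a routine behind a single indirection at startup; the Python sketch below is only a loose analogy (the real patch rewrites the System.pas entry points in the loaded code), with hypothetical stand-in names.

```python
# Loose analogy of RTL patching: callers always invoke "Move", and startup
# code swaps in the faster implementation unless an opt-out flag is set
# (NOX64PATCHRTL plays that role in SynCommons.pas).
NOX64PATCHRTL = False

def move_rtl(src, dst, n):    # stand-in for the original pascal Move()
    for i in range(n):
        dst[i] = src[i]

def move_sse2(src, dst, n):   # stand-in for the optimized SSE2 version
    dst[:n] = src[:n]         # one bulk copy instead of a byte loop

Move = move_rtl
if not NOX64PATCHRTL:
    Move = move_sse2          # the "patch", applied once at startup

src, dst = bytearray(b'mORMot'), bytearray(6)
Move(src, dst, 6)
print(bytes(dst))             # b'mORMot'
```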

Performance improvement is noticeable, when compared with the original pascal-based version included in System.pas.

By the way, the Delphi x64 built-in assembler does not recognize the movnti opcode... so we had to inline it as plain db hexadecimal values.
A bit disappointing. Until now, we did not suffer from anything with regard to x64 compatibility at the Delphi level.

No stand-alone unit available yet, since it is included in our SynCommons.pas shared unit, starting with the 1.18 revision of mORMot.

Feedback is welcome, as usual!

Download latest version of sqlite3.dll for Windows 64 bit


Up to now, there is no official Win64 version of the SQLite3 library released at http://sqlite.org
It is in fact very difficult to find a ready-to-use and up-to-date sqlite3 .dll for Win64 on the Internet.

You can find our own 3.7.16 version of the SQLite3 external library, to be used in 64 bit mode, available for download as SQLite3-64.7z.

It includes FTS3/FTS4 virtual tables, and was compiled in release mode.

It was compiled with the latest version of RAD Studio XE3, whose C++ compiler uses an LLVM-based 64 bit back-end.
Thanks to Hans for compiling and sharing the binary!

This is the version we use when our mORMot framework targets Win64, using Delphi XE2/XE3.

SynProject tool 1.18


We have uploaded an updated compiled version of our Open Source SynProject tool as SynProject.zip.

Synopse SynProject is an open source application for source code versioning and automated documentation of software projects.
It is licensed under the GPL.

The main feature is a new (better-looking?) template for the generated files.
See our mORMot framework documentation for a good sample of rendering content.

The internal wiki pages related to this tool have also been refreshed.

Feedback is welcome on our forum!

Delphi is just a perfect fit for the average programmer


On the Embarcadero forums, a user had a perfectly sane reaction about a non-obvious integer type cast like Int64Var := Int32Var*Int32Var, which may overflow.

We've got to stop becoming, as one poster put it, "human pre-compilers" for Delphi.
The compiler ought to have the common sense to not need the programmer to cast the two integer values.

I respectfully think just the opposite.
;)

Such a type cast is part of the language grammar.
If you know the grammar, you will know how it will be compiled.
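
The cast under discussion is easy to reproduce: in fixed-size integer arithmetic the 32-bit multiplication is performed first, and only its truncated result is widened to 64 bits. Python integers are arbitrary-precision, so this sketch emulates the 32-bit truncation explicitly.

```python
# What Int64Var := Int32Var * Int32Var actually stores, emulated in Python:
# the product is computed as a 32-bit signed value first, THEN widened.
def int32_mul(a, b):
    r = (a * b) & 0xFFFFFFFF                            # keep the low 32 bits
    return r - 0x100000000 if r >= 0x80000000 else r    # reinterpret as signed

a = 100_000
print(a * a)            # mathematically: 10000000000
print(int32_mul(a, a))  # what the un-cast expression yields: 1410065408
```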

To be honest, you have the same in all languages, with more or less range checking, optimization, and implicit conversion.
This is why I like Delphi: it can be mastered by any programmer, whereas truly mastering Java or .Net needs a genius.


Delphi is just... human...

This is exactly what I like so much with Delphi, just as with C or C++.
I like to know which asm code will be generated.
I like to know how memory is allocated/released.
I do not like to play with a black box, but I enjoy relying on identified source code (e.g. the Delphi RTL).
I like to know how the hardware will react to my code.
I like to run a profiler and see what is going on here.
I like to know how my software will run on the customer network and database.

When you know how the language and the compiler work, and when you know about the various data structures used and generated by the compiler and the RTL, you are more efficient with them.
Compilers are just tools.
Computer languages are just abstraction of the hardware.

Even with more "high level" languages like JavaScript, you have to use some dedicated structures, and be aware of the potential of the JIT compiler, if you want the process to be as efficient as possible.

One of the problems of today's programmers is that most of them do not know any more what is "under the hood".
Coding is seen as a list of recipes.
It is pretty easy to (ab)use Intellisense/Resharper, Linq, and whatever the language/IDE offers you, and forget about what will actually be executed.
Everything is fine on your own computer, but on the customer's side, it does not work as expected.
It just reminds me of coding in ZX81 BASIC... already typing on a black box... good old days, doing plenty of things with 768 bytes of free RAM...

You can be a lazy programmer (which is something good, IMHO), and at the same time know what happens.
In fact, a true lazy programmer needs to know how it works.

Delphi is simple enough to be understood by an average human being like me (after years of daily effort), whereas I consider mastering the .Net or Java frameworks to be out of reach of my poor brain.
Object pascal is abstract enough to allow a high-level style of coding (e.g. as I've done with SynProject), and also low-level enough to release the whole hardware potential.

Introducing TSQLTable.Step() method


We have just added TSQLTable.Step(), FieldBuffer() and Field() methods, handling a cursor at TSQLTable level, with optional late-binding column access.

It allows retrieving results from a TSQLTable / TSQLTableJSON result set within a "cursor-like" orientation.
That is, there is no need to specify the row number: just write a simple while aList.Step do ... loop.

Of course, you had better use TSQLRecord.FillPrepare most of the time, and access the data from a TSQLRecord instance.
But it can be very useful, e.g. when working on a custom JOINed SQL statement.

TSQLTableJSON will expect some JSON content as input, will parse it in rows and columns, associate it with one or more optional TSQLRecord class types, then will let you access the data via its Get* methods.

You can use this TSQLTableJSON class as in the following example:

procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var aList: TSQLTableJSON;
    Row: integer;
begin
  aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
    'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
  if aList=nil then
    raise Exception.Create('Impossible to retrieve data from Server');
  try
    for Row := 1 to aList.RowCount do
      writeln('ID=',aList.GetAsInteger(Row,0),' BirthDate=',aList.Get(Row,1));
  finally
    aList.Free;
  end;
end;

For a record with a huge number of fields, specifying only the needed fields can save some bandwidth. In the above sample code, the ID column has a field index of 0 (so it is retrieved via aList.GetAsInteger(Row,0)) and the BirthDate column has a field index of 1 (so it is retrieved as a PUTF8Char via aList.Get(Row,1)). All data rows are processed via a loop using the RowCount property - the first data row is indexed as 1, since row 0 contains the column names.
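
The row/column layout just described can be pictured with a tiny Python analogy (purely illustrative data, not the mORMot classes):

```python
# "Row 0" stores the column names; data rows are numbered from 1 to RowCount.
table = [
    ['ID', 'BirthDate'],   # row 0: column names
    [1, '1995-05-01'],     # first data row, indexed as 1
    [2, '1996-01-12'],
]

row_count = len(table) - 1             # the RowCount equivalent
for row in range(1, row_count + 1):
    print('ID=%s BirthDate=%s' % (table[row][0], table[row][1]))
```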

The TSQLTable class has some methods dedicated to direct cursor handling, as such:

procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var aList: TSQLTableJSON;
begin
  aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
    'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
  try
    while aList.Step do
      writeln('ID=',aList.Field(0),' BirthDate=',aList.Field(1));
  finally
    aList.Free;
  end;
end;

By using the TSQLTable.Step method, you do not need to check that aList<>nil, since it will return false if aList is not assigned. And you do not need to access the RowCount property, nor specify the current row number.
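
The Step semantics can be sketched as a small iterator in Python (a hypothetical class, not the mORMot implementation): step() advances an internal row index, and simply reports False when there is no list or no row left, which is why neither a nil check nor RowCount is needed.

```python
# Minimal cursor sketch: row 0 holds the column names, step() walks data rows.
class Table:
    def __init__(self, rows):
        self.rows = rows          # row 0 holds the column names
        self.current = 0
    def step(self):
        if self.rows is None:     # mirrors Step returning false for nil lists
            return False
        self.current += 1
        return self.current < len(self.rows)
    def field(self, index):
        return self.rows[self.current][index]

t = Table([['ID', 'BirthDate'], [1, '1995-05-01'], [2, '1996-01-12']])
while t.step():
    print('ID=%s BirthDate=%s' % (t.field(0), t.field(1)))
```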

We could have used the field name instead of the field index within the loop:

      writeln('ID=',aList.Field('ID'),' BirthDate=',aList.Field('BirthDate'));

You can also access the field values using late-binding and a local variant, which gives some perfectly readable code:

procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var baby: variant;
begin
  with Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
      'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]) do
    try
      while Step(false,@baby) do
        writeln('ID=',baby.ID,' BirthDate=',baby.BirthDate);
    finally
      Free;
    end;
end;

In the above code, late-binding will search for the "ID" and "BirthDate" fields at runtime. But the ability to write baby.ID and baby.BirthDate is very readable. Using a with ... do statement makes the code shorter, but should be avoided if it leads to confusion, e.g. in case of more complex processing within the loop.
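
Late-binding of this kind can be sketched in Python with __getattr__, which resolves attribute names against the column names at runtime, much like the local variant does here (hypothetical illustration, not mORMot's actual mechanism):

```python
# Attribute names are looked up in the column list only when accessed.
class Row:
    def __init__(self, columns, values):
        self._lookup = dict(zip(columns, values))
    def __getattr__(self, name):      # called only for unknown attributes
        try:
            return self._lookup[name]
        except KeyError:
            raise AttributeError(name)

baby = Row(['ID', 'BirthDate'], [1, '1995-05-01'])
print('ID=%s BirthDate=%s' % (baby.ID, baby.BirthDate))
```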

See also the following methods of TSQLRest: OneFieldValue, OneFieldValues, MultiFieldValue, MultiFieldValues which are able to retrieve either a TSQLTableJSON, or a dynamic array of integer or RawUTF8. And also List and ListFmt methods of TSQLRestClient, if you want to make a JOIN against multiple tables at once.

Feedback is welcome in our forum, as usual.

Adding some generic-based methods to mORMot


Two videos about EXTjs client of mORMot server

TDataSet... now I'm confused


You perhaps know that I'm not a big fan of the TDataSet / RAD DB approach for end-user applications.
Such applications are easy to define, with almost no code to write, and you are able to publish a working solution very fast.

But it is a nightmare to debug and maintain. I prefer the new data binding feature, or... of course... ORM!
In mORMot, we have some auto-generated screens, and our roadmap plans some auto-binding features, using a KISS by-convention MVC pattern.

For some users, we made an ORM / TDataSet conversion unit.
And we discovered that TDataSet has a weird, and very misleading definition of its AsString property, for Unicode versions of Delphi.

In the Delphi 2009+ implementation, you have to use AsString property for AnsiString and AsWideString for string=UnicodeString.

In fact, the As*String properties are defined as such:

property AsString: string read GetAsString write SetAsString;
property AsWideString: UnicodeString read GetAsWideString write SetAsWideString;
property AsAnsiString: AnsiString read GetAsAnsiString write SetAsAnsiString;

How on earth may we be able to find out that AsString: string returns in fact an AnsiString?
It just does not make sense at all, when compared to the rest of the VCL/RTL.

The implementation, which uses the TStringField class for AnsiString and TWideStringField for string=UnicodeString, just appears to be broken.

Furthermore, the documentation is also broken:

Data.DB.TField.AsString
Represents the field's value as a string (Delphi) or an AnsiString (C++).

This does not represent a string in Delphi, but an AnsiString!
The fact that the property uses a plain string=UnicodeString type is perfectly misleading.

From the database point of view, it is up to the DB driver to handle Unicode or work with a specific charset.
But from the VCL point of view, in Delphi 2009+ you should only know about one string type, and be confident that using AsString: String will be Unicode-ready.

If you use our mORMotVCL.pas unit, it will behave as expected.
Thank you, sjerinic, for your input and patience about this issue!
Feedback is welcome!

mORMots know how to swim like fishes

Delphi XE4 NextGen compiler: using byte instead of ansichar?


When I first read the technical white paper covering all of the language changes in XE4 for mobile development (tied to the new ARM LLVM-based Delphi compiler), I have to confess I was pretty confused.

Two great mORMot users just asked for XE4/iOS support of mORMot.

Win32/Win64 support for XE4 will be done as soon as we get a copy of it.
I suspect the code already works, since it was working as expected with XE3, and we rely on our own set of low-level functions for most of the internal work.

But iOS targeting is more complex, mainly due to the NextGen compiler.

FireMonkey

It is the first time we have been explicitly asked for non-Windows support of mORMot.
This is the reason why we did not put the cross-platform item of the roadmap in first place.

We did not do any support for this yet, because:

  • No one did ask for cross-platform use of mORMot;
  • FireMonkey was broken several times, has some parts that are very poorly written, and does not support R2L languages - is it mature enough?
  • iOS support was broken once - and I prefer FPC to this NextGen compiler (see below);
  • We do not use FireMonkey in any of our applications;
  • SmartMobileStudio is an innovative, fast growing, cheap, and stable alternative (with lack of documentation and 3rd party components, I admit);
  • We also considered WxForms (which seems not to be supported any more, but did work well);
  • Linux support is a goal for mORMot, on the server side.

Immutable non-AnsiString

Immutable strings are something I do not understand well, in the context of Delphi.  

I still do not understand any benefit, in comparison to the copy-on-write (COW) paradigm implemented in Delphi since the beginning for reference-counted value types.
With COW, you have the advantages of immutable strings, plus private copies and in-place modification if needed, e.g. for fast parsing.
COW allows your text buffer access to be safe and fast at the same time.
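
For readers unfamiliar with the mechanism, here is a toy copy-on-write string in Python - purely a sketch, since Delphi's COW lives in the RTL with atomic reference counting: assignment only shares the buffer, and a write triggers a private copy when (and only when) the buffer is shared.

```python
# Toy COW string: share the buffer on assignment, copy lazily on first write.
class CowStr:
    def __init__(self, data):
        self._buf = bytearray(data)
        self._refs = [1]               # shared reference-count cell
    def assign(self):
        other = CowStr.__new__(CowStr)
        other._buf, other._refs = self._buf, self._refs
        self._refs[0] += 1             # no copy here, just another reference
        return other
    def set_char(self, i, ch):
        if self._refs[0] > 1:          # shared: make a private copy first
            self._refs[0] -= 1
            self._buf = bytearray(self._buf)
            self._refs = [1]
        self._buf[i] = ch              # in-place write on the private buffer
    def value(self):
        return bytes(self._buf)

a = CowStr(b'hello')
b = a.assign()                # no copy yet: both share one buffer
b.set_char(0, ord('H'))       # first write: b gets its own copy
print(a.value(), b.value())   # b'hello' b'Hello'
```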

Using "array of byte" as a workaround for AnsiString/RawByteString is possible, but will be slower and less convenient.
It implements COW (if const is used as expected in method parameters), but it fills the content with zeros, slowing down the process.
And it won't be displayed as text in the debugger, nor allow direct conversion to string.

Honestly, changing everything from AnsiChar to Byte is just an awful workaround and a breaking change.
It is like a regression from the modern/Turbo Pascal paradigm to a low-level C data type.

The switch introduced by NextGen/ARM/LLVM is IMHO much bigger than the one introduced with Delphi 2009.
For instance, for third party libraries (like our Open Source mORMot), you can maintain an existing code base for all versions of Delphi (e.g. Delphi 6 up to XE4), but you will have to maintain two versions of the code (or nest it with IFDEFs) if you want to support the NextGen syntax.

I understand that conversion to NextGen compiler can be easy. 

See for instance how TMS reported it to be not difficult for Aurelius.
But... do not forget that it may come at the expense of performance.
Using pointers is not evil, if done with knowledge of them.
See this user feedback about FireBird ODBC access using Aurelius or our Open Source mORMot (which allows remote access, by the way, in addition to plain ORM).

IMHO this is one of the great features of compiled object pascal, in comparison to managed code, or the "NextGen" model.
My point is that pointers are not evil, especially for performance.
Of course, I'm speaking about typed pointers, not blank untyped pointers.

Huge code modification... for nothing?

We could switch the mORMot code to be NextGen-compatible, but since we use UTF-8 at the lowest level, it would need a lot of IFDEFs.
Using "array of byte" instead of "AnsiString(CP_UTF8)", and "byte" instead of "AnsiChar", is just an awful regression and compatibility break.

We would have to use a lot of function wrappers, or perhaps re-create at hand a UTF-8 compatibility layer.
The whole mORMot core is depending on UTF-8, and IMHO this was *not* a wrong choice, on the contrary.

But why go in this direction?

I'm confused with the Embarcadero NextGen compiler. 
Performance is not a goal: the RTL just gets worse with every Delphi version.
... and compilation time is just dead slow, in comparison to the "PreviousGen" compiler - more than 20 times slower.
Is it worth it?

Deprecation of AnsiString was never prepared by Embarcadero.
We knew about it for shortstring - OK.
We were told that the with keyword is the root of all evil, and should be avoided - OK.
But deprecation of AnsiString in the NextGen compiler sounds like a showstopper to me.

And don't tell me it is required by the LLVM compiler to have immutable strings and UTF-16 encoding.
This is a pure Embarcadero choice.

And don't tell me it is for performance optimization.
The Delphi RTL can be dead slow and not scalable.
The current string processing was not the main speed bottleneck.
And there are easy alternatives to circumvent those bottlenecks and unleash your CPU power.

And do not tell me that TStringBuilder is the answer.
In-place parsing of buffers is the fastest means, e.g. for JSON or XML performance.
TStringBuilder just replaces classic string concatenation of mutable strings associated with a modern memory manager.
It is a workaround to circumvent a performance problem. There is no benefit in using it.

I was impressed by the Win64 support of the latest versions of Delphi.
Very small breaking changes when adapting mORMot to this platform.
The IDE is stable enough.
The resulting executables are fast enough (at least when they rely on our SynCommons unit, and with some tuned asm code).
Even nice features were re-introduced into the compiler, after complaints in the newsgroups, like the ability to compile x64 assembler functions/methods.
But here, I do not understand the direction.

Future Object Pascal

I do not want Delphi to be another managed-but-compiled language.
I like the object pascal language because:

  • It has the benefits of high-level languages like C# or Java, with readability, strong typing, classes, generics, interfaces, dedicated (ansi/wide)char type, string and dynamic array reference counted types;
  • It has some nice features I miss in C# for instance, like sets, enumerates or array of enumerations;
  • The unit layout, with a clear distinction between interface and implementation sections, is very powerful, with respect to the C# or Java all-in-one syntax;
  • It has the low-level power of C, if needed, especially for the library cores;
  • It has a truly working class system (you can use TMyClassClass = class of TMyClass with success);
  • It has a strong typing paradigm, which mitigates e.g. the use of pointer;
  • Memory can be managed just as we need, with no Garbage Collector glitches;
  • We have the whole RTL source code at hand, still readable (the C# RTL is much bigger and complex);
  • It compiles very quickly, even for huge projects;
  • It generates stand-alone applications, with no dll hell;
  • It has a strong backward compatible history, with huge code available on the Internet.
If I want a C# or Java syntax, I would switch to those.
Trying to make Delphi more C#-like is a mistake.
Just think how the attributes syntax in modern Delphi is not Pascal-oriented: it should be defined after the type definition, as in the Free Pascal syntax, not before it, as in C#/Java.

Please fix NextGen roadmap!

Using byte instead of AnsiChar is IMHO not a feature.
This is not "next generation".
This is a regression.
This is a breaking show-stopper.

I suspect (hope?) Embarcadero will be clever enough to re-introduce AnsiString support in their compiler.
Forget about zero-based strings.
Allow mutable strings and pointer access to their buffer.

Otherwise, I honestly do not have much hope of continuing in this direction.
Not worth it.

The "next gen" object pascal is not C#.
It may be the Oxygene compiler, even if I do not find a cross-compiler language with no cross-platform library so appealing.
It may be FreePascal and its FCL/RTL, when targeting native compilation.
Or DelphiWebScript / SmartMobileStudio, when targeting interpreted / JIT / JavaScript platforms.

Your feedback is needed!

I understand that I may be one of the last ones using Delphi as such.
That is, using Delphi not as a RAD tool, but as a strong platform to build huge applications.
With some unique benefits, in regard to alternatives (mainly Java or C#).
The huge list of mORMot users, and the continuous feedback on the forum, tend to prove that I'm not the only one!

Remember that x64 assembler was re-introduced into the Win64 compiler.
And that Client-Server licensing was re-allowed after some newsgroup protests.

I do not want to troll, just to let Delphi have a long life!
I like Delphi, I do not want to stay with Delphi 7 for ever, I want the platform to be maintained and evolving!
But this NextGen roadmap may kill my hope, and make me switch to FPC and SMS.

Feedback is welcome on our forum!

Or even better please react on the Embarcadero forum directly!

Update: a lot of "reaction" did occur on the Embarcadero forum.
But no Embarcadero official has reacted by now.
Even Team-B members agreed about the need for a petition to integrate the feedback of a lot of existing Delphi users.
Stay tuned!

"The shorter code, the better?"


One quick Sunday post, from a comment I wrote in a blog article.

I'm always wondering why a lot of programmers tend to implicitly assume that "the shorter source code, the better".

It is true when it means that, with proper code refactoring, making small objects with dedicated methods, the code of your methods will be smaller.
It is true when you refuse to be a "Copy & Paste" coder, and search to put common code in shared places.

But is it true at the language level?
I mean, is it always better just because your ARC or GC model allows you not to manage the object memory?

Just some ideas...

1. Delphi interface types already allow writing such short code

First of all, it's worth saying that you can use interfaces to write perfectly safe and short code, without ARC, just with the current version of Delphi.
See for instance how this GDI+ library is implemented.

Implicit try..finally free blocks are already written by the compiler.

See also our reference article about interfaces.

2. Delphi Owner property already allows you to manage object life time in the VCL

The whole Delphi component life time is based on ownership.
A form owns its sub components.
A module owns its sub components.
And so on...

It is when the owner is freed that all its components are also released.

Nice and easy.
Safe and efficient.

3. Code length does not make it less readable

We all know that we read much more code than we write.

So first priority is to let the code be as readable as possible.
IMHO try…finally Free patterns do not pollute the readability of the code.
On the contrary, to my own taste, they show the exact scope of a class instance's life time.

4. Managing life time objects can help you write better code

When you manage the object life time, you are not tempted to re-create the same objects again and again. You factorize and optimize your code. And it results… in faster code.

I have seen plenty of Java and C# programmers who do not have a clue about memory allocation and the internal process: they write code which works, without asking themselves what will actually be executed.
Then, especially on the server side, performance scaling is a nightmare.
It works on the developer's PC, but won't survive the first performance test pass.

Having to manage memory life time by hand does not bother me.
On the contrary, it made me a better (less bad) lazy programmer.
And it helped me write faster code, knowing what it is doing when executed.

Of course, it is not automatic.
You can just write try…finally blocks and weep, without searching for code refactoring.
I have seen Delphi code written like that, especially from programmers with a GC model background.

So do not be afraid to learn how to manage your memory!

5. Managing life time objects is worth learning

I was a bit afraid of managing memory, when I came from old BASIC and ASM programming on 8 bit systems (good old days!).
A time when there was no heap, only static allocation, with less than 64 KB of RAM.
It was working well. And such programs can run for years without any memory leak!

But managing life time is a good way of knowing how your objects are implemented.
When using an object method, you are not just getting the right result: you are perhaps triggering a lot of processing.
It is worth looking at the internals.

In practice, when writing efficient code in a GC world, you will have to learn a lot of unofficial information from the runtime designers, to know how the GC is implemented.
As such, performance may vary from one revision of the runtime engine to another.
If you manage your object life time by hand, you know what you are doing.

The ARC model is in the middle.
But it introduces some issues of its own, like the need for weak references and zeroing weak pointers.
AFAIK the RTL implements weak references with a global lock using the very slow TMonitor, which will slow down the whole process a lot, especially in multi-threaded code (whereas the weak pointer implementation in the mORMot core is much more scalable, by the way).
BTW, in October last year I was already speaking about this global lock implementation issue, when I discovered a pre-version of it in the XE3 RTL source code. And the version shipped with XE4 did not improve anything.
And can they really say that performance is a concern for them? Forcing immutable strings for performance's sake is just a joke, when you look at the current RTL.

6. Source code size has nothing to do with execution speed

In practice, the more the compiler magic or the runtime will execute under the hood, the slower it will be.
So shorter code is most of the time slower code.

Of course, I know that using some high-level structures (like a hashed dictionary or an optimized sort) can be much faster than using a manual search (with a for ... loop), or writing a naive bubble sort.
This does not mean that more verbose code is automatically faster.
But my point is that if you rely on some hidden low-level mechanisms, like memory handling, auto-generated structures (like closures), or some RTTI-based features, you will probably write less code, but it will be slower, or less stable.

If you do not handle memory, you are not able to tune the execution process, when needed.
It is not for nothing that the most speed-effective Java projects just use POJOs and statically allocated instances, to by-pass the GC.

Worth some minutes thinking about...

Performance issue in NextGen ARC model


Apart from being very slow during compilation, the Delphi NextGen compiler introduced a new memory model, named ARC.

We already spoke about ARC years ago, so please refer to our corresponding blog article for further information, especially about how Apple did introduce ARC to iOS instead of the Garbage Collector model.

About how ARC is to be used in the NextGen compiler, take a look at Marco's blog article, and its linked resources.

But the ARC model, as implemented by Embarcadero, has at least one huge performance issue, in the way weak references, and zeroing weak pointers have been implemented.
I do not speak about the general slow down introduced during every class/record initialization/finalization, which is noticeable, but not a big concern.

If you look at XE4 internals, you will discover a disappointing global lock introduced in the RTL.

The main issue is that XE4 RTL implements weak references with a global lock, which will slow down the whole process a lot, especially in multi-thread.

procedure TInstHashMap.RegisterWeakRef(Address: Pointer; Instance: Pointer);
var
  H: Integer;
  Item: PInstItem;
begin
  Lock;
  try
    H := Hash(Instance);
    Item := FindInstItem(Instance, H);
    if Item = nil then
      Item := AddInstItem(Instance, H);
    Item.RegisterWeakRef(Address);
  finally
    Unlock;
  end;
end;

The Lock/Unlock methods are implemented via TMonitor.
This synchronization class has the benefit to be cross-platform, but the drawback of being slower than other approaches.
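For comparison, here is a minimal sketch of the lighter locking pattern usually preferred in such hot paths: a dedicated TCriticalSection from the standard SyncObjs unit. This is not the RTL's actual code, just an illustration of the alternative approach.

```pascal
uses
  SyncObjs;

var
  Lock: TCriticalSection; // created once, e.g. in a unit initialization

procedure RegisterSomething;
begin
  Lock.Acquire; // lightweight OS-level lock, no TMonitor overhead
  try
    // ... protected registration work here ...
  finally
    Lock.Release;
  end;
end;
```

A critical section maps to a cheap OS primitive, whereas TMonitor adds an indirection and bookkeeping per lock operation; and using one lock per data structure, instead of one global lock, reduces contention further.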

Such a global lock will just kill the performance in multi-thread process.

Our weak pointer implementation for interfaces in mORMot uses a different approach, with one list per class type and small critical sections, so it will be much more multi-thread friendly than XE4's implementation.

We can use ARC when targeting mobile platforms.
But in its current implementation, ARC would be a performance disaster for a server application.

I'm waiting for XE5!

Please do not kill Delphi desktop/server application performance!

In October last year we were already speaking about this global lock implementation issue, when we discovered a pre-version of it in the XE3 RTL source code.
And the version shipped with XE4 did not improve anything.
How could EMB say that performance is a concern for them?
The more I see it, the more I think that enforcing strings to be immutable for performance reasons is just a joke, when you look at the current RTL state.

REGEXP operator for SQLite3


Our SQLite3 engine can now use regular expressions within its SQL queries, by enabling the REGEXP operator in addition to the standard SQL operators (= == != <> IS IN LIKE GLOB MATCH). It will use the Open Source PCRE library (bundled since Delphi XE, or available as a separate download) to perform the queries.

It will enable advanced searches within the text columns of our objects, when used in a WHERE clause of mORMot's ORM.

In order to enable the operator, you should include the SynSQLite3RegEx.pas unit in your uses clause, and register the RegExp() SQL function to a given SQLite3 database instance, as such:

uses SynCommons, mORMot, mORMotSQLite3,
  SynSQLite3RegEx;
 ...
Server := TSQLRestServerDB.Create(Model,'test.db3');
try
  CreateRegExFunction(Server.DB.DB);
  with TSQLRecordPeople.CreateAndFillPrepare(Client,
      'FirstName REGEXP ?',['\bFinley\b']) do
  try
    while FillOne do begin
      Check(LastName='Morse');
      Check(IdemPChar(pointer(FirstName),'SAMUEL FINLEY '));
    end;
  finally
    Free;
  end;
finally
  Server.Free;
end;

The above code will execute the following SQL statement (with a prepared parameter for the regular expression itself):

 SELECT * from People WHERE Firstname REGEXP '\bFinley\b';

That is, it will find all objects where TSQLRecordPeople.FirstName will contain the 'Finley' word - in a regular expression, \b defines a word boundary search.

In fact, the REGEXP operator is a special syntax for the regexp() user function. No regexp() user function is defined by default and so use of the REGEXP operator will normally result in an error message. Calling CreateRegExFunction() for a given connection will add a SQL function named "regexp()" at run-time, which will be called in order to implement the REGEXP operator.

It will use the statically linked PCRE library as available since Delphi XE, or will rely on the PCRE.pas wrapper unit as published at http://www.regular-expressions.info/download/TPerlRegEx.zip for older versions of Delphi.

This unit will call directly the UTF-8 API of the PCRE library, and maintain a per-connection cache of compiled regular expressions to ensure the best performance possible.

Feedback about this feature request implementation is welcome on our forum, as usual.


Authentication and Authorization


Our mORMot framework tries to implement security via:
- Process safety;
- Authentication;
- Authorization.

Process safety is implemented at every n-Tier level:
- Atomicity of the SQLite3 database core;
- RESTful architecture to avoid most synchronization issues;
- ORM associated to the Object pascal strong type syntax;
- Extended test coverage of the framework core.

Authentication allows user identification:
- Built-in optional authentication mechanism, implementing both per-user sessions and individual REST Query Authentication;
- Authentication groups are used for proper authorization;
- Several authentication schemes, from a very secure SHA-256 based challenge to weak but simple authentication;
- Class-based architecture, allowing custom extension.

Authorization of a given process is based on the group policy, after proper authentication:
- Per-table access right functionalities built-in at the lowest level of the framework;
- Per-method execution policy for interface-based services;
- General high-level security attributes, for SQL or Service remote execution.

We will now give general information about both authentication and authorization in the framework.

In particular, authentication is now implemented via a set of classes.

Authentication

Extracted from Wikipedia:

Authentication (from Greek: "real" or "genuine", from "author") is the act of confirming the truth of an attribute of a datum or entity. This might involve confirming the identity of a person or software program, tracing the origins of an artifact, or ensuring that a product is what its packaging and labeling claims to be. Authentication often involves verifying the validity of at least one form of identification.

Principles

How to handle authentication in a RESTful Client-Server architecture is a matter of debate.

Commonly, it can be achieved, in the SOA over HTTP world via:
- HTTP basic auth over HTTPS;
- Cookies and session management;
- Query Authentication with additional signature parameters.

We'll have to adapt, or even better mix those techniques, to match our framework architecture at best.

Each authentication scheme has its own PROs and CONs, depending on the purpose of your security policy and software architecture:

Criteria                            HTTPS basic auth   Cookies+Session   Query Auth.
Browser integration                 Native             Native            Via JavaScript
User Interaction                    Rude               Custom            Custom
Web Service use (rough estimation)  95%                4%                1%
Session handling                    Yes                Yes               No
Session managed by                  Client             Server            N/A
Password on Server                  Yes                Yes/No            N/A
Truly Stateless                     Yes                No                Yes
Truly RESTful                       No                 No                Yes
HTTP-free                           No                 No                Yes

HTTP basic auth over HTTPS

This first solution, based on the standard HTTPS protocol, is used by most web services. It's easy to implement, available by default on all browsers, but has some known drawbacks: the awful authentication window displayed by the Browser, which will persist (there is no LogOut-like feature here); some additional server-side CPU consumption; and the fact that the user name and password are transmitted (over HTTPS) to the Server (it would be more secure to let the password stay only on the client side, during keyboard entry, and be stored as a secure hash on the Server).

The supplied TSQLHttpClientWinHTTP and TSQLHttpClientWinINet clients classes are able to connect using HTTPS, and the THttpApiServer server class can send compatible content.

Session via Cookies

To be honest, a session managed on the Server is not truly Stateless. One possibility could be to maintain all data within the cookie content. And, by design, the cookie is handled on the Server side (the Client in fact doesn't even try to interpret this cookie data: it just hands it back to the server on each successive request). But this cookie data is application state data, so the client should manage it, not the server, in a pure Stateless world.

The cookie technique itself is HTTP-linked, so it's not truly RESTful, which should be protocol-independent. Since our framework does not provide only HTTP protocol, but offers other ways of transmission, Cookies were left at the baker's home.

Query Authentication

Query Authentication consists in signing each RESTful request via some additional parameters on the URI. See http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html about this technique. It was defined as such in this article:

All REST queries must be authenticated by signing the query parameters sorted in lower-case, alphabetical order using the private credential as the signing token. Signing should occur before URI encoding the query string.

For instance, here is a generic URI sample from the link above:

 GET /object?apiKey=Qwerty2010

should be transmitted as such:

 GET /object?timestamp=1261496500&apiKey=Qwerty2010&signature=abcdef0123456789

The string being signed is "/object?apikey=Qwerty2010&timestamp=1261496500" and the signature is the SHA256 hash of that string using the private component of the API key.
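Such a signature could be sketched as follows, using the SHA256() function available from SynCommons. This is a hedged illustration of the article's scheme, not the framework's own signing code; a production implementation would rather use a proper HMAC construction, and privateKey is a placeholder for the private component of the API key.

```pascal
uses
  SynCommons;

// sign the sorted, not-yet-URI-encoded query string with the private credential
function ComputeSignature(const signedString, privateKey: RawUTF8): RawUTF8;
begin
  result := SHA256(signedString+privateKey); // hexadecimal SHA-256 digest
end;
```

The resulting hexadecimal digest is then appended as the signature=... parameter of the final URI.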

This technique is perhaps the most compatible with a Stateless architecture, and can also be implemented with a light session management.

Server-side data caching is always available. In our framework, we cache the responses at the SQL level, not at the URI level (thanks to our optimized implementation of GetJSONObjectAsSQL, the URI to SQL conversion is very fast). So adding this extra parameter doesn't break the cache mechanism.

Framework authentication

Even if, theoretically speaking, Query Authentication sounds like the best choice for implementing a truly RESTful architecture, our framework tries to implement a Client-Server design.

In practice, we may consider two ways of using it:
- With no authentication nor user right management (e.g. for local access of data, or framework use over a secured network);
- With per-user authentication and right management via defined security groups, and a per-query authentication.

According to the RESTful principle, handling per-session data is not to be implemented in such an architecture. A minimal "session-like" feature was introduced only to handle user authentication with very low overhead on both Client and Server side. The main technique used for our security is therefore Query Authentication, i.e. a per-URI signature.

If both AuthGroup and AuthUser are not available on the Server TSQLModel (i.e. if the aHandleUserAuthentication parameter was set to false for the TSQLRestServer.Create constructor), no authentication is performed. All tables will be accessible by any client, as stated in 19. For security reasons, the ability to execute INSERT / UPDATE / DELETE SQL statements via a RESTful POST command is never allowed by default with remote connections: only SELECT can be executed via this POST verb.

On the Server side, a dedicated service, accessible via the ModelRoot/Auth URI is to be called to register an User, and create a session.

If authentication is enabled for the Client-Server process (i.e. if both AuthGroup and AuthUser are available in the Server TSQLModel, and the aHandleUserAuthentication parameter was set to true at the TSQLRestServer instance construction), the following security features will be added:
- Client should open a session to access the Server, and provide a valid UserName / Password pair (see next paragraph);
- Each CRUD statement is checked against the authenticated User security group, via the AccessRights column and its GET / POST / PUT / DELETE per-table bit sets;
- Thanks to per-user authentication, any SQL statement may be available via the RESTful POST verb for a user whose AccessRights group field contains AllowRemoteExecute=true;
- Each REST request will expect an additional parameter, named session_signature, to every URL. Using the URI instead of cookies allows the signature process to work with all communication protocols, not only HTTP.

Per-User authentication

On the Server side, two tables, defined by the TSQLAuthGroup and TSQLAuthUser classes will handle respectively per-group access rights, and user authentication.

The corresponding AccessRights column is a textual CSV serialization of the TSQLAccessRights record content, as expected by the TSQLRestServer.URI method. Using a CSV serialization, instead of a binary serialization, will allow the change of the MAX_SQLTABLES constant value.

The AuthUser table is defined by the TSQLAuthUser class type.

Each user has therefore its own associated AuthGroup table, a name to be entered at login, a name to be displayed on screen or reports, and a SHA-256 hash of its registered password. A custom Data BLOB field is specified for your own application use, but not accessed by the framework.

By default, the following security groups are created on a void database:

AuthGroup    POST SQL   Auth Read   Auth Write   Tables R   Tables W
Admin        Yes        Yes         Yes          Yes        Yes
Supervisor   No         Yes         No           Yes        Yes
User         No         No          No           Yes        Yes
Guest        No         No          No           Yes        No

Then the corresponding 'Admin', 'Supervisor' and 'User' AuthUser accounts are created, with the default 'synopse' password.

You MUST override those default 'synopse' passwords for each AuthUser record to a custom genuine value.
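For instance, here is a hedged sketch of how such a password may be changed on the server side. It assumes the TSQLAuthUser.PasswordPlain property setter (which stores the SHA-256 hash of the supplied value) is available, as in recent framework revisions; check your framework version before relying on it.

```pascal
var U: TSQLAuthUser;
begin
  // retrieve the 'Admin' account from the server-side ORM
  U := TSQLAuthUser.Create(Server,'LogonName=?',['Admin']);
  try
    if U.ID<>0 then begin
      U.PasswordPlain := 'NewGenuinePassword'; // stored as SHA-256 hash
      Server.Update(U);
    end;
  finally
    U.Free;
  end;
end;
```

The same pattern applies to the 'Supervisor' and 'User' accounts.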

'Admin' will be the only group able to execute remote non-SELECT SQL statements via POST commands (i.e. to have TSQLAccessRights.AllowRemoteExecute = true) and to modify the Auth* tables (i.e. AuthUser and AuthGroup) content.

A typical JSON representation of the default security user/group definitions is the following:

[{"AuthUser":[
{"RowID":1,"LogonName":"Admin","DisplayName":"Admin","PasswordHashHexa":"67aeea294e1cb515236fd7829c55ec820ef888e8e221814d24d83b3dc4d825dd","GroupRights":1,"Data":null},
{"RowID":2,"LogonName":"Supervisor","DisplayName":"Supervisor","PasswordHashHexa":"67aeea294e1cb515236fd7829c55ec820ef888e8e221814d24d83b3dc4d825dd","GroupRights":2,"Data":null},
{"RowID":3,"LogonName":"User","DisplayName":"User","PasswordHashHexa":"67aeea294e1cb515236fd7829c55ec820ef888e8e221814d24d83b3dc4d825dd","GroupRights":3,"Data":null}]},
{"AuthGroup":[
{"RowID":1,"Ident":"Admin","SessionTimeout":10,"AccessRights":"11,1-256,0,1-256,0,1-256,0,1-256,0"},
{"RowID":2,"Ident":"Supervisor","SessionTimeout":60,"AccessRights":"10,1-256,0,3-256,0,3-256,0,3-256,0"},
{"RowID":3,"Ident":"User","SessionTimeout":60,"AccessRights":"10,3-256,0,3-256,0,3-256,0,3-256,0"},
{"RowID":4,"Ident":"Guest","SessionTimeout":60,"AccessRights":"0,3-256,0,0,0,0"}]}]

Of course, you can change AuthUser and AuthGroup table content, to match your security requirements, and application specifications. You can specify a per-table CRUD access, via the AccessRights column, as we stated above, speaking about the TSQLAccessRights record layout.

This will implement both Query Authentication together with a group-defined per-user right management.

Session handling

A dedicated RESTful service, available from the ModelRoot/Auth URI, is to be used for user authentication, handling so-called sessions.

In mORMot, a very light in-memory set of sessions is implemented:
- The unique ModelRoot/Auth URI end-point will create a session after proper authentication;
- In-memory session allows very fast process and reactivity, on Server side;
- An integer session identifier is used for all authorization processes, independently from the underlying authentication scheme (i.e. mORMot is not tied to cookies, and its session process is much more generic).

Those sessions are in-memory TAuthSession class instances. Note that this class does not inherit from a TSQLRecord table so won't be remotely accessible, for performance and security reasons.

The server methods should not have to access those instances directly, but rely on the SessionID identifier. The only access available is via the function TSQLRestServer.SessionGetUser(aSessionID: Cardinal): TSQLAuthUser method.

When the Client is about to close (typically in TSQLRestClientURI.Destroy), a GET ModelRoot/auth?UserName=...&Session=... request is sent to the remote server, in order to explicitly close the corresponding session in the server memory (avoiding most re-play attacks).

Note that each opened session has an internal TimeOut parameter (retrieved from the associated TSQLAuthGroup table content): after some time of inactivity, sessions are closed on the Server Side.

In addition, sessions are used to manage safe cross-client transactions:
- When a transaction is initiated by a client, it will store the corresponding client Session ID, and use it to allow client-safe writing;
- Any further write to the DB (Add/Update/Delete) will be accessible only from this Session ID, until the transaction is released (via commit or rollback);
- If a transaction has begun and another client session tries to write to the DB, it will wait until the current transaction is released - a timeout may occur if the server is not able to acquire the write status within some time;
- This global write locking is implemented in the TSQLRest.AcquireWrite / ReleaseWrite protected methods, and used on the Server-Side by TSQLRestServer.URI;
- If the server does not handle Session/Authentication, transactions can be unsafe, in a multi-client concurrent architecture.

Therefore, for performance reasons in a multi-client environment, it's mandatory to release a transaction (via commit or rollback) as soon as possible.
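The expected client-side pattern could be sketched as follows (a hedged illustration, using the TransactionBegin / Commit / RollBack methods of the framework's REST client):

```pascal
// open a write transaction bound to this client session
if Client.TransactionBegin(TSQLRecordPeople) then
try
  // ... several Add/Update/Delete calls here ...
  Client.Commit;   // release the global write lock as soon as possible
except
  Client.RollBack; // on error, release the lock as well
end;
```

Keeping the transaction body short ensures other client sessions are not blocked waiting for the write status.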

Authentication schemes

Class-driven authentication

Authentication is implemented in mORMot via the following classes:

Class                                 Scheme
TSQLRestServerAuthenticationDefault   mORMot secure authentication scheme, based on a proprietary dual-pass challenge and SHA-256 hashing
TSQLRestServerAuthenticationSSPI      Windows authentication, via the logged user
TSQLRestServerAuthenticationNone      Weak but simple authentication, based on user name

All those classes will identify a TSQLAuthUser record from a user name. The associated TSQLAuthGroup is then used later for authorization.

You can add your own custom authentication scheme by defining your own class, inheriting from TSQLRestServerAuthentication.

By default, no authentication is performed.

If you set the aHandleUserAuthentication parameter to true when calling the constructor TSQLRestServer.Create(), both default secure mORMot authentication and Windows authentication are available. In fact, the constructor executes the following:

constructor TSQLRestServer.Create(aModel: TSQLModel; aHandleUserAuthentication: boolean);
  (...)
  if aHandleUserAuthentication then
    // default mORMot authentication schemes
    AuthenticationRegister([TSQLRestServerAuthenticationDefault,
      TSQLRestServerAuthenticationSSPI]);
  (...)

In order to define one or several authentication schemes, you can call the AuthenticationRegister() and AuthenticationUnregister() methods of TSQLRestServer.

mORMot secure RESTful authentication

The TSQLRestServerAuthenticationDefault class implements a proprietary but secure RESTful authentication scheme.

Here are the typical steps to be followed in order to create a new user session via mORMot authentication scheme:
- Client sends a GET ModelRoot/auth?UserName=... request to the remote server;
- Server answers with an hexadecimal nonce contents (valid for about 5 minutes), encoded as JSON result object;
- Client sends a GET ModelRoot/auth?UserName=...&PassWord=...&ClientNonce=... request to the remote server, in which ClientNonce is a random value used as Client nonce, and PassWord is computed from the log-on and password entered by the User, using both Server and Client nonce as salt;
- Server checks that the transmitted password is valid, i.e. that it matches the hashed password stored in its database and a time-valid Server nonce - if the value is not correct, authentication fails;
- On success, Server will create a new in-memory session and return the session number and a private key to be used during the session (encoded as JSON result object);
- On any further access to the Server, a &session_signature= parameter is added to the URL, and will be checked against the valid sessions in order to validate the request.

Query Authentication is handled at the Client side in TSQLRestClientURI.SessionSign method, by computing the session_signature parameter for a given URL, according to the TSQLRestServerAuthentication class used.

In order to enhance security, the session_signature parameter will contain, encoded as 3 hexadecimal 32 bit cardinals:
- The Session ID (to retrieve the private key used for the signature);
- A Client Time Stamp (in 256 ms resolution) which must be greater than or equal to the previous time stamp received;
- The URI signature, using the session private key, the user hashed password, and the supplied Client Time Stamp as source for its crc32 hashing algorithm.

Such a classical 3-point signature will avoid most man-in-the-middle (MITM) or replay attacks.

Here is a typical signature to access the root URL:

 root?session_signature=0000004C000F6BE365D8D454

In this case, 0000004C is the Session ID, 000F6BE3 is the client time stamp (aka nonce), and 65D8D454 is the signature, checked by the following Delphi expression:

(crc32(crc32(fPrivateSaltHash,PTimeStamp,8),pointer(aURL),aURLlength)=aSignature);
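The client-side composition of the parameter could be sketched as follows. This is a hedged illustration derived from the server-side check above, using the crc32() and CardinalToHex() functions from SynCommons; fPrivateSaltHash is assumed to have been computed once at session opening, from the session private key and the user hashed password.

```pascal
uses
  SynCommons;

function SessionSign(const aURL: RawUTF8;
  fSessionID, fPrivateSaltHash, aTimeStamp: cardinal): RawUTF8;
var hexTime: RawUTF8;
begin
  hexTime := CardinalToHex(aTimeStamp); // 8 hexa chars, 256 ms resolution
  // Session ID + Client Time Stamp + signature, as 3 hexa 32-bit cardinals
  result := aURL+'?session_signature='+CardinalToHex(fSessionID)+hexTime+
    CardinalToHex(crc32(crc32(fPrivateSaltHash,pointer(hexTime),8),
      pointer(aURL),length(aURL)));
end;
```

The actual framework code lives in TSQLRestClientURI.SessionSign; this sketch only shows how the three hexadecimal parts line up with the server-side crc32 check quoted above.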

For instance, a RESTful GET of the TSQLRecordPeople table with RowID=6 will have the following URI:

 root/People/6?session_signature=0000004C000F6DD02E24541C

For better Server-side performance, the URI signature will use fast crc32 hashing method, and not the more secure (but much slower) SHA-256. Since our security model is not officially validated as a standard method (there is no standard for per URI authentication of RESTful applications), the better security will be handled by encrypting the whole transmission channel, using standard HTTPS with certificates signed by a trusted CA, validated for both client and server side. The security involved by using crc32 will be enough for most common use. Note that the password hashing and the session opening will use SHA-256, to enhance security with no performance penalty.

In our implementation, for better Server-side reaction, the session_signature parameter is appended at the end of the URI, and the URI parameters are not sorted alphabetically, as suggested by the reference article quoted above. This should not be a problem, either from a Delphi Client or from an AJAX / JavaScript client.

In practice, this scheme is secure and very fast, perfect for a Delphi client.

Authentication using Windows credentials

By default, the hash of the user password is stored safely on the server side. This may be an issue for corporate applications, since a new user name / password pair is to be defined by each client, which may be annoying.

Since revision 1.18 of the framework, mORMot is able to use Windows Authentication to identify any user. That is, the user does not need to enter any name nor password, but her/his Windows credentials, as entered at Windows session startup, will be used.

If the SSPIAUTH conditional is defined (which is the default), any call to TSQLRestClientURI.SetUser() method with a void aUserName parameter will try to use current logged name and password to perform a secure Client-Server authentication. It will in fact call the class function TSQLRestServerAuthenticationSSPI.ClientSetUser() method.

In this case, the aPassword parameter will just be ignored. This will be transparent to the framework, and a regular session will be created on success.

The only prerequisite is that the TSQLAuthUser table shall contain a corresponding entry, with its LogonName column equal to the 'DomainName\UserName' value. This data row won't be created automatically, since it is up to the application to allow or disallow access from an authenticated user: you can be a member of the domain, but not eligible to the application.

Weak authentication

The TSQLRestServerAuthenticationNone class can be used if you trust your client (e.g. via a https connection). It will implement a weak but simple authentication scheme.

Here are the typical steps to be followed in order to create a new user session via this authentication scheme:
- Client sends a GET ModelRoot/auth?UserName=... request to the remote server;
- Server checks that the transmitted user name is valid, i.e. that it is available in the TSQLAuthGroup table - if the value is not correct, authentication fails
- On success, Server will create a new in-memory session and returns the associated session number (encoded as hexadecimal in the JSON result object);
- On any further access to the Server, a &session_signature= parameter is to be added to the URL with the correct session ID, and will be checked against the valid sessions in order to validate the request.

For instance, a RESTful GET of the TSQLRecordPeople table with RowID=6 will have the following URI:

 root/People/6?session_signature=0000004C

Here is some sample code about how to define this authentication scheme:

// on the Server side:
  Server.AuthenticationRegister(TSQLRestServerAuthenticationNone);
  ...
  // on the Client side:
  TSQLRestServerAuthenticationNone.ClientSetUser(Client,'User');

The performance benefit is very small in comparison to TSQLRestServerAuthenticationDefault, so it should not be used for Delphi clients.

Clients authentication

Client interactivity

Note that with this design, it's up to the Client to react to an authentication error during any request, and ask again for the user name and password at any time to create a new session. For multiple reasons (server restart, session timeout...), the session can be closed by the Server without previous notice.

In fact, the Client should just create one instance of the TSQLRestClientURI classes as presented in 6, then call the SetUser method as such:

      Check(Client.SetUser('User','synopse')); // use default user

Then an event handler can be associated with the TSQLRestClientURI.OnAuthentificationFailed property, in order to ask the user to enter their login name and password:

  TOnAuthentificationFailed = function(Retry: integer;
    var aUserName, aPassword: string): boolean;

Of course, if Windows Authentication is defined (see above), this event handler shall be adapted as expected. For instance, you may add a custom notification to register the corresponding user to the TSQLAuthUser table.
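A hedged sketch of such a handler, matching the callback type declared above (the AskUserCredentials dialog function is hypothetical, to be replaced by your own UI code):

```pascal
function OnAuthFailed(Retry: integer;
  var aUserName, aPassword: string): boolean;
begin
  // give up after a few attempts; otherwise ask the user again
  result := (Retry<=3) and
    AskUserCredentials(aUserName,aPassword); // hypothetical dialog function
end;

...
Client.OnAuthentificationFailed := OnAuthFailed;
```

Returning false aborts the retry loop; returning true makes the client attempt to open a new session with the supplied credentials.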

Authentication using AJAX

Some working JavaScript code has been published in our forum by a framework user (thanks, "RangerX"), which implements the authentication scheme as detailed above. It uses jQuery, and HTML 5 LocalStorage, not cookies, for storing session information on the Client side.

See http://synopse.info/forum/viewtopic.php?pid=2995#p2995

The current revision of the framework contains the code as expected by this JavaScript code - especially the results encoded as 2 objects.

In the future, some "official" code will be available for such AJAX clients. It will probably rely on pure-pascal implementation using such an Object-Pascal-to-JavaScript compiler - it does definitively make sense to have Delphi-like code on the client side, not to break the ORM design. For instance, the Open Source DWS (DelphiWebScript) compiler matches our needs - see http://delphitools.info/tag/javascript

Authorization

Per-table access rights

Even if authentication is disabled, a pointer to a TSQLAccessRights record, and its GET / POST / PUT / DELETE fields, is sent as a member of the parameter to the unique access point of the server class:

procedure TSQLRestServer.URI(var Call: TSQLRestServerURIParams);

This will allow checking of access right for all CRUD operations, according to the table invoked. For instance, if the table TSQLRecordPeople has 2 as index in TSQLModel.Tables[], any incoming POST command for TSQLRecordPeople will be allowed only if the 2nd bit in RestAccessRights^.POST field is set, as such:

case URI.Method of
  mPOST: begin // POST=ADD=INSERT
    if URI.Table=nil then begin
      (...)
    end else
    // here, Table<>nil and TableIndex in [0..MAX_SQLTABLES-1]
    if not (URI.TableIndex in Call.RestAccessRights^.POST) then // check User
      Call.OutStatus := HTML_FORBIDDEN else
      (...)

Making access rights a parameter allows this method to be handled as pure stateless, thread-safe and session-free, from the bottom-most level of the framework.

On the other hand, the security policy defined by this global parameter does not allow tuned per-user authorization. In the current implementation, the SUPERVISOR_ACCESS_RIGHTS constant is transmitted for all handled communication protocols (direct access, GDI messages, named pipe or HTTP). Only direct access via TSQLRestClientDB will use FULL_ACCESS_RIGHTS, i.e. will have AllowRemoteExecute parameter set to true.

The light session process, as implemented by 18, is used to override the access rights with the ones defined in the TSQLAuthGroup.AccessRights field.

Be aware that these per-table access rights depend on the table order as defined in the associated TSQLModel. So if you add some tables to your database model, please take care to add the new tables after the existing ones. If you insert the new tables within the current ones, you will need to update the access rights values.

Additional safety

An AllowRemoteExecute: TSQLAllowRemoteExecute field has been made available in the TSQLAccessRights record to tune remote execution, depending on the authenticated user.

It adds some options to tune the security policy.

SQL remote execution

In our RESTful implementation, the POST command with no table associated in the URI allows executing any SQL statement directly.

This special command should be carefully tested before execution, since SQL misuse could lead to major security issues. Such execution on any remote connection is unsafe if the SQL statement is anything but a SELECT, since it may affect the data content.

By default, for security reasons, the AllowRemoteExecute field value in the SUPERVISOR_ACCESS_RIGHTS constant does not include reSQL. It means that no remote SQL execution will be allowed, except for safe read-only SELECT statements.

Another possibility of SQL remote execution is to add a sql=.... inline parameter to a GET request (with optional paging). The reUrlEncodedSQL option is used to enable or disable this feature.

Last but not least, a WhereClause=... inline parameter can be added to a DELETE request. The reUrlEncodedDelete option is used to enable or disable this feature.
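For illustration only, assuming a model root named root and a People table (both names are hypothetical, and the URIs are shown URL-decoded for readability), such requests could look like:

```
GET root?sql=select * from People where YearOfBirth>1900
DELETE root/People?WhereClause=YearOfBirth<1600
```

In practice, all inline parameter values are expected to be URL-encoded by the client before transmission.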

You can change this default safe policy by including reSQL, reUrlEncodedSQL or reUrlEncodedDelete in the TSQLAuthGroup.AccessRights field of an authenticated user session. But since remote execution of any SQL statement can be unsafe, we recommend writing a dedicated server-side service (method-based or interface-based) to execute such statements.

Service execution

The reService option can be used to enable or disable the interface-based services feature of mORMot.

In addition to this global parameter, you can set per-service and per-method security via dedicated methods.

For method-based services, if authentication is enabled, any method execution will be processed only for signed URI.

You can use TSQLRestServer.ServiceMethodByPassAuthentication() to disable the need of a signature for a given service method - e.g. it is the case for Auth and TimeStamp standard method services.
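For instance, assuming a method-based service named Sum has been published on the server (the service name is an assumption for illustration), a one-liner is enough to disable its signature check:

```pascal
Server.ServiceMethodByPassAuthentication('Sum');
```

This is exactly how the Auth and TimeStamp standard method services remain reachable before any session has been initiated.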

Feedback is welcome on our forum, as usual.

SQLite3 performance in Exclusive file locking mode


As stated in previous blog articles, the default SQLite3 write speed is quite slow when running on a normal hard drive. By default, the engine will pause after issuing an OS-level write command. This guarantees that the data is written to the disk, and preserves the ACID properties of the database engine.

ACID is an acronym for "Atomicity Consistency Isolation Durability" properties, which guarantee that database transactions are processed reliably: for instance, in case of a power loss or hardware failure, the data will be saved on disk in a consistent way, with no potential loss of data.

In SQLite3, ACID is implemented by two means at file level:
- Synchronous writing: it means that the engine will wait for any written content to be flushed to disk before processing the next request;
- File locking: it means that the database file is locked for exclusive use during writing, so that several processes can access the same database file concurrently without corrupting it.

Changing these default settings can ensure much better writing performance.

We just added direct file-locking tuning.
It appears that defining exclusive access mode increases performance a lot, in both reading and writing speed.

Here are some new benchmarks and data, extracted from the updated SAD documentation.

SQLite3 performance benchmark

Here we insert 5,000 rows of data, with diverse scenarios:

  • 'Direct' stands for an individual Client.Add() insertion;
  • 'Batch' stands for insertion in Batch mode, regrouping data rows;
  • 'Trans' indicates that all insertions are nested within a transaction - which makes a great difference, e.g. with a SQLite3 database.

Benchmark was run on a Core i7 notebook, with standard SSD.
So it was a development environment, very similar to a low-cost production site, not dedicated to give best performance.
During the process, the CPU was noticeably used only for SQLite3 in-memory and TObjectList - most of the time, the bottleneck is not the CPU, but the storage or network.
As a result, rates and timing may vary depending on network and server load, but you get results similar to what could be expected on the customer side, with an average hardware configuration.

                         Direct    Batch    Trans  Batch Trans
SQLite3 (file full)         503      399    96391       123064
SQLite3 (file off)          853      930    99534       130907
SQLite3 (file off exc)    31829    35798   101874       132752
SQLite3 (mem)             85803   109641   103976       135332
TObjectList (static)     321089   548365   312031       547105
TObjectList (virtual)    314366   513136   316676       571232
SQLite3 (ext full)          451      511    12092       137249
SQLite3 (ext off)           971      909   108133       144475
SQLite3 (ext off exc)     42805    51256   113155       150829
SQLite3 (ext mem)         97344   121400   113229       153256

Due to its ACID implementation, SQLite3 process on file waits for the hard disk to have finished flushing its data, which is why it is slower than other engines at individual row insertion (less than 10 objects per second with a mechanical hard drive instead of an SSD) outside the scope of a transaction.

So if you want to reach the best writing performance in your application with the default engine, you had better use transactions and regroup all writing into services or a BATCH process.
Another possibility could be to execute DB.Synchronous := smOff and/or DB.LockingMode := lmExclusive at SQLite3 engine level before the process: in case of power loss at the wrong time it may corrupt the database file, but it will increase the rate by a factor of 50 (with a hard drive), as stated by the "off" and "off exc" rows of the table - see below.

Now the same data is retrieved via the ORM layer:

  • 'By one' states that one object is read per call (ORM generates a SELECT * FROM table WHERE ID=? for Client.Retrieve() method);
  • 'All *' is when all 5000 objects are read in a single call (i.e. running SELECT * FROM table from a FillPrepare() method call), either forced to use the virtual table layer, or with direct static call.

Here are some reading speed values, in objects/second:

                         By one  All Virtual  All Direct
SQLite3 (file full)       26936       514456      531858
SQLite3 (file off)        27116       538735      428302
SQLite3 (file off exc)   122417       541125      541653
SQLite3 (mem)            119314       539781      545494
TObjectList (static)     303398       529661      799232
TObjectList (virtual)    308109       403323      871080
SQLite3 (ext full)       137525       264690      546806
SQLite3 (ext off)        134807       262123      531011
SQLite3 (ext off exc)    133936       261574      536941
SQLite3 (ext mem)        136915       258732      544069

The SQLite3 layer gives amazing reading results, which makes it a perfect fit for most typical ORM use. When running with DB.LockingMode := lmExclusive defined (i.e. "off exc" rows), reading speed is very high, and benefits from exclusive access to the database file - see below.
External database access is only required when data is expected to be shared with other processes.

For both writing and reading, TObjectList / TSQLRestServerStaticInMemory engine gives impressive results, but has the weakness of being in-memory, so it is not ACID by design, and the data has to fit in memory. Note that indexes are available for IDs and stored AS_UNIQUE properties.
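As a reminder of how such a unique index can be declared at ORM level, here is a hedged sketch using the "stored AS_UNIQUE" idiom (the TSQLRecordUser class and its Logon field are hypothetical):

```pascal
type
  TSQLRecordUser = class(TSQLRecord)
  private
    fLogon: RawUTF8;
  published
    // 'stored AS_UNIQUE' (i.e. stored false) marks the property as unique,
    // so the in-memory engine will maintain a hashed index for it
    property Logon: RawUTF8 read fLogon write fLogon stored AS_UNIQUE;
  end;
```

Retrieving a record via such an indexed field then avoids any linear scan of the in-memory list.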

Synchronous writing

You can overwrite the first default ACID behavior by setting the TSQLDataBase.Synchronous property to smOff instead of the default smFull setting.
When Synchronous is set to smOff, SQLite continues without syncing as soon as it has handed data off to the operating system. If the application running SQLite crashes, the data will be safe, but the database might become corrupted if the operating system crashes or the computer loses power before that data has been written to the disk surface. On the other hand, some operations are as much as 50 or more times faster with this setting.

In the tests performed during benchmarking with Synchronous := smOff, the "Write one" speed is enhanced from 8-9 rows per second to about 400 rows per second, on a physical hard drive (SSD or NAS drives may not suffer from this delay).

So depending on your application requirements, you may switch Synchronous setting to off.

To change the main SQLite3 engine synchronous parameter, you may code for instance:

Client := TSQLRestClientDB.Create(Model,nil,MainDBFileName,TSQLRestServerDB,false,'');
Client.Server.DB.Synchronous := smOff;

Note that this setting is common to a whole TSQLDatabase instance, so will affect all tables handled by the TSQLRestServerDB instance.

But if you defined some SQLite3 external tables, you can define the setting for a particular external connection, for instance:

Props := TSQLDBSQLite3ConnectionProperties.Create(DBFileName,'','','');
VirtualTableExternalRegister(Model,TSQLRecordSample,Props,'SampleRecord');
Client := TSQLRestClientDB.Create(Model,nil,MainDBFileName,TSQLRestServerDB,false,'');
TSQLDBSQLite3Connection(Props.MainConnection).Synchronous := smOff;

File locking

You can overwrite the second default ACID behavior by setting the TSQLDataBase.LockingMode property to lmExclusive instead of the default lmNormal setting.
When LockingMode is set to lmExclusive, SQLite will lock the database file for exclusive use during the whole session. It will prevent other processes (e.g. database viewer tools) from accessing the file at the same time, but small write transactions will be much faster, by a factor usually greater than 40. Bigger transactions involving several hundreds/thousands of INSERTs won't be accelerated - but individual insertions will have a major speed up.

To change the main SQLite3 engine locking mode parameter, you may code for instance:

Client := TSQLRestClientDB.Create(Model,nil,MainDBFileName,TSQLRestServerDB,false,'');
Client.Server.DB.LockingMode := lmExclusive;

Note that this setting is common to a whole TSQLDatabase instance, so will affect all tables handled by the TSQLRestServerDB instance.

But if you defined some SQLite3 external tables, you can define the setting for a particular external connection, for instance:

Props := TSQLDBSQLite3ConnectionProperties.Create(DBFileName,'','','');
VirtualTableExternalRegister(Model,TSQLRecordSample,Props,'SampleRecord');
Client := TSQLRestClientDB.Create(Model,nil,MainDBFileName,TSQLRestServerDB,false,'');
TSQLDBSQLite3Connection(Props.MainConnection).LockingMode := lmExclusive;

In fact, exclusive file locking improves the reading speed by a factor of 4 (in case of individual row retrieval).
As such, defining LockingMode := lmExclusive without Synchronous := smOff could be of great benefit for a server whose purpose is mainly to serve ORM content to clients.

Performance tuning

By default, the slow but truly ACID setting will be used with mORMot, just as with SQLite3.
We do not change this policy (as the FireDAC library does, for instance), since it ensures the best safety, at the expense of slow writing outside a transaction.

The best performance will be achieved by combining the two previous options, as such:

Client := TSQLRestClientDB.Create(Model,nil,MainDBFileName,TSQLRestServerDB,false,'');
Client.Server.DB.LockingMode := lmExclusive;
Client.Server.DB.Synchronous := smOff;

Or, for external tables:

Props := TSQLDBSQLite3ConnectionProperties.Create(DBFileName,'','','');
VirtualTableExternalRegister(Model,TSQLRecordSample,Props,'SampleRecord');
Client := TSQLRestClientDB.Create(Model,nil,MainDBFileName,TSQLRestServerDB,false,'');
TSQLDBSQLite3Connection(Props.MainConnection).Synchronous := smOff;
TSQLDBSQLite3Connection(Props.MainConnection).LockingMode := lmExclusive;

If you can afford losing some data in very rare corner cases, or if you are sure your hardware configuration is safe (e.g. if the server is connected to a power inverter and has RAID disks) and that you have backups at hand, setting Synchronous := smOff would help your application scale for writing. Setting LockingMode := lmExclusive will benefit both writing and reading speed.
Consider using an external and dedicated database (like Oracle or MS SQL) if your security expectations are very high, and if the default safe but slow setting is not enough for you.

In all cases, do not forget to perform backups as often as possible (at least several times a day).
You may use TSQLRestServerDB.Backup or TSQLRestServerDB.BackupGZ methods for a fast backup of a running database. Adding a backup feature on the server side is as simple as running:

Client.Server.BackupGZ(MainDBFileName+'.gz');

The server will stop working during this phase, so a lower-level backup mechanism could be used instead, if you need 100% service availability. Using an external database would perhaps keep your main mORMot database small in size, so that its backup time will remain unnoticeable on the client side.

Note that with the current implementation, low-level backup is not working as expected on the Win64 platform. The error seems to be at the SQlite3 64 bit library level, since it is not able to release all internal instance statements before backup. We were not able to fix this issue yet.

Feedback is welcome on our forum, as usual.

FireDAC / AnyDAC support for mORMot


Our SynDB classes now feature FireDAC / AnyDAC access, at full speed!

Up to now, only UniDAC, BDE or ZEOS components were available as source, but we just added FireDAC / AnyDAC.

FireDAC is a unique set of Universal Data Access Components for developing cross-platform database applications in Delphi. It was in fact a third-party component set, bought by Embarcadero from DA-SOFT Technologies (formerly known as AnyDAC), and included with several editions of Delphi XE3 and up. This is the new official platform for high-speed database development in Delphi, in favor of the now deprecated DBExpress.

Our integration within SynDB.pas units and the mORMot persistence layer has been tuned. For instance, you can have direct access to high-speed FireDAC Array DML feature, via the ORM batch process, via so-called array binding.
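Here is a hedged sketch of the ORM BATCH pattern which maps to such array binding when the table is externally registered through FireDAC (the TSQLRecordSample class, its Name field, and a connected Client instance are assumptions for illustration):

```pascal
var
  Rec: TSQLRecordSample;
  Results: TIntegerDynArray;
  i: integer;
begin
  Client.BatchStart(TSQLRecordSample);
  Rec := TSQLRecordSample.Create;
  try
    for i := 1 to 5000 do begin
      Rec.Name := FormatUTF8('Sample %',[i]);
      Client.BatchAdd(Rec,true);     // true = send all fields for INSERT
    end;
  finally
    Rec.Free;
  end;
  Client.BatchSend(Results);         // one round-trip: array-bound INSERTs
end;
```

From the application point of view, the very same code runs against any other back-end; the array binding optimization is applied transparently by the persistence layer when available.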

Since revision 1.18 of the framework, a new SynDBDataset.pas unit has been introduced, able to interface any DB.pas based library with our SynDB classes, using a TDataset to retrieve the results. Due to the TDataset design, performance is somewhat degraded with respect to direct SynDB connection (e.g. results for SQLite3 or Oracle), but it also widens the range of potential database back-ends.

Some dedicated providers have been published in the SynDBDataset sub-folder of the mORMot source code repository. Up to now, FireDAC (formerly AnyDAC), UniDAC and BDE libraries are interfaced, and a direct connection to the NexusDB engine is available.

Since there are a lot of potential combinations here, feedback is welcome. Due to our Agile process, we will first stick to the providers we need and use. It is up to mORMot users to ask for additional features, and provide wrappers, if possible, or at least testing abilities. Of course, DBExpress would benefit from being integrated, even if Embarcadero just acquired AnyDAC and revamped/renamed it as FireDAC - to make it the new official platform.

Data access benchmark

On a recent notebook computer (Core i7 and SSD drive), depending on the back-end database interfaced, mORMot excels in speed:

  • You can persist up to 570,000 objects per second, or retrieve 870,000 objects per second (for our pure Delphi in-memory engine); 
  • When data is retrieved from server or client internal cache, you can read more than 900,000 objects per second, whatever the database back-end is; 
  • With a high-performance database like Oracle and our direct access classes, you can write 62,000 (via array binding) and read 92,000 objects per second, over a 100 MB network; 
  • When using alternate database access libraries (e.g. Zeos, or DB.pas based classes), speed is lower, but still enough for most work.

Difficult to find a faster ORM, I suspect.

The following tables try to sum up all available possibilities, and give some benchmark (average objects/second for writing or read).

In these tables:

  • 'SQLite3 (file full/off/exc)' indicates use of the internal SQLite3 engine, with or without Synchronous := smOff and/or DB.LockingMode := lmExclusive - see the previous article; 
  • 'SQLite3 (mem)' stands for the internal SQLite3 engine running in memory; 
  • 'SQLite3 (ext ...)' is about access to a SQLite3 engine as external database, either as file or memory; 
  • 'TObjectList' indicates a TSQLRestServerStaticInMemory instance - either static (with no SQL support) or virtual (i.e. SQL featured via SQLite3 virtual table mechanism) which may persist the data on disk as JSON or compressed binary; 
  • 'Oracle' shows the results of our direct OCI access layer (SynDBOracle.pas); 
  • 'Jet' stands for a MSAccess database engine, accessed via OleDB; 
  • 'NexusDB' is the free embedded edition, available from official site; 
  • 'ZEOS *' indicates that the database was accessed directly via the ZDBC layer; 
  • 'FireDAC *' stands for FireDAC library; 
  • 'UniDAC *' stands for UniDAC library; 
  • 'BDE *' when using a BDE connection; 
  • 'ODBC *' for a direct access to ODBC.

This list of database providers is to be extended in the future. Any feedback is welcome!

Numbers are expressed in rows/second (or objects/second). This benchmark was compiled with Delphi 7, so newer compilers may give even better results, with in-lining and advanced optimizations.

Note that these tests are not about the relative speed of each database engine, but reflect the current status of the integration of several DB libraries within the mORMot database access.

Purpose here is not to say that one library is better or faster than another, but publish a snapshot of mORMot persistence layer abilities.

In this timing, we do not benchmark only the "pure" SQL/DB layer access (SynDB units), but the whole Client-Server ORM of our framework: process below includes read and write RTTI access of a TSQLRecord, JSON marshaling, CRUD/REST routing, virtual cross-database layer, SQL on-the-fly translation. We just bypass the communication layer, since TSQLRestClient and TSQLRestServer are run in-process, in the same thread - as a TSQLRestServerDB instance. So you have here some raw performance testimony of our framework's ORM and RESTful core.

You can compile the "15 - External DB performance" supplied sample code, and run the very same benchmark on your own configuration.

Insertion speed

Here we insert 5,000 rows of data, with diverse scenarios:

  • 'Direct' stands for an individual Client.Add() insertion; 
  • 'Batch' mode has already been described above;
  • 'Trans' indicates that all insertions are nested within a transaction - which makes a great difference, e.g. with a SQLite3 database.

Benchmark was run on a Core i7 notebook, with standard SSD, including anti-virus and background applications, over a 100 Mb corporate network, linked to a shared Oracle 11g database. So it was a development environment, very similar to a low-cost production site, not dedicated to give best performance. During the process, the CPU was noticeably used only for SQLite3 in-memory and TObjectList - most of the time, the bottleneck is not the CPU, but the storage or network. As a result, rates and timing may vary depending on network and server load, but you get results similar to what could be expected on the customer side, with an average hardware configuration.

                         Direct    Batch    Trans  Batch Trans
SQLite3 (file full)         503      399    96391       123064
SQLite3 (file off)          923      930    99534       130907
SQLite3 (file off exc)    31829    35798   101874       132752
SQLite3 (mem)             85803   109641   103976       135332
TObjectList (static)     321089   548365   312031       547105
TObjectList (virtual)    314366   513136   316676       571232
SQLite3 (ext full)          451      511    12092       137249
SQLite3 (ext off)           971      909   108133       144475
SQLite3 (ext off exc)     42805    51256   113155       150829
SQLite3 (ext mem)         97344   121400   113229       153256
ZEOS SQlite3                487      455    16826        19680
FireDAC SQlite3           25182    49795    41962       114241
UniDAC SQlite3              473      412    27370        37962
ZEOS Firebird              1835     2142    18734        22540
UniDAC Firebird            7065     7637     9157        10399
Jet                        4197     4318     4789         4947
Oracle                      511    59455      948        59762
ODBC Oracle                 550      536     1024         1043
ZEOS Oracle                 343      362     1086         1087
FireDAC Oracle              512    32328      980        34668
UniDAC Oracle               465      496      915          879
BDE Oracle                  418      410      661          755
NexusDB                    6278     6749     7901         8801

Due to its ACID implementation, SQLite3 process on file waits for the hard disk to have finished flushing its data, which is why it is slower than other engines at individual row insertion (less than 10 objects per second with a mechanical hard drive instead of an SSD) outside the scope of a transaction.

So if you want to reach the best writing performance in your application with the default engine, you had better use transactions and regroup all writing into services or a BATCH process. Another possibility could be to execute DB.Synchronous := smOff and/or DB.LockingMode := lmExclusive at SQLite3 engine level before the process: in case of power loss at the wrong time it may corrupt the database file, but it will increase the rate by a factor of 50 (with a hard drive), as stated by the "off" and "off exc" rows of the table. Note that by default, the FireDAC library sets both options, so its results above are to be compared with the "SQLite3 (file off exc)" row.

For both our direct Oracle access library (SynDBOracle.pas) and FireDAC, the Batch process benefits a lot from the array binding feature (known as Array DML in FireDAC/AnyDAC).

Reading speed

Now the same data is retrieved via the ORM layer:

  • 'By one' states that one object is read per call (ORM generates a SELECT * FROM table WHERE ID=? for Client.Retrieve() method); 
  • 'All *' is when all 5000 objects are read in a single call (i.e. running SELECT * FROM table from a FillPrepare() method call), either forced to use the virtual table layer, or with direct static call.

Here are some reading speed values, in objects/second:

                         By one  All Virtual  All Direct
SQLite3 (file full)       26936       514456      531858
SQLite3 (file off)        27116       538735      428302
SQLite3 (file off exc)   122417       541125      541653
SQLite3 (mem)            119314       539781      545494
TObjectList (static)     303398       529661      799232
TObjectList (virtual)    308109       403323      871080
SQLite3 (ext full)       137525       264690      546806
SQLite3 (ext off)        134807       262123      531011
SQLite3 (ext off exc)    133936       261574      536941
SQLite3 (ext mem)        136915       258732      544069
ZEOS SQlite3               3232        83243       95934
FireDAC SQlite3            7639        80261      108117
UniDAC SQlite3             1586        73142       96989
ZEOS Firebird              3882        69974       85416
UniDAC Firebird            2177        71858       89856
Jet                        2619       144801      222736
Oracle                      593        74312       66131
ODBC Oracle                1134        33267       33049
ZEOS Oracle                 863        44207       53868
FireDAC Oracle              896        33171       37912
UniDAC Oracle               500        21918       23688
BDE Oracle                  689         3343        3426
NexusDB                    1419       121294      195687

The SQLite3 layer gives amazing reading results, which makes it a perfect fit for most typical ORM use. When running with DB.LockingMode := lmExclusive defined (i.e. "off exc" rows), reading speed is very high, and benefits from exclusive access to the database file. External database access is only required when data is expected to be shared with other processes.

In the above table, it appears that all libraries based on DB.pas are slower than the others for reading speed. In fact, TDataSet seems to be a real bottleneck. Even FireDAC, which is known to be very optimized for speed, is limited by the TDataSet structure. Our direct classes, and even ZEOS/ZDBC, perform better.

For both writing and reading, TObjectList / TSQLRestServerStaticInMemory engine gives impressive results, but has the weakness of being in-memory, so it is not ACID by design, and the data has to fit in memory. Note that indexes are available for IDs and stored AS_UNIQUE properties.

Analysis and use case proposal

When declared as virtual table (via a VirtualTableRegister call), you have the full power of SQL (including JOINs) at hand, with incredibly fast CRUD operations: 100,000 requests per second for objects read and write, including serialization and Client-Server communication!

In the above list, MS SQL Server is not integrated, but may be used instead of Oracle (minus the fact that BULK insert is not implemented for it yet, whereas array binding boosts Oracle writing BATCH process performance by a factor of 100). Any other OleDB or ODBC providers may also be used, with direct access. Or any DB.pas provider (e.g. DBExpress / BDE), but with the additional layer introduced by using a TDataSet instance.

Note that all those tests were performed locally and in-process, via a TSQLRestClientDB instance. For both insertion and reading, a Client-Server architecture (e.g. using HTTP/1.1 for mORMot clients) will give even better results for BATCH and retrieve all modes. During the tests, internal caching was disabled, so you may expect speed enhancements for real applications, when data is more read than written: for instance, when an object is retrieved from the cache, you achieve more than 700,000 read requests per second, whatever database is used.

Therefore, the typical use may be the following:

  • int. SQLite3 file (created by default): general safe data handling;
  • int. SQLite3 mem (created via :memory:): fast data handling with no persistence (e.g. for testing);
  • TObjectList static (created via StaticDataCreate): best possible performance for small amounts of data, without ACID nor SQL;
  • TObjectList virtual (created via VirtualTableRegister): best possible performance for small amounts of data, if ACID is not required nor complex SQL;
  • ext. SQLite3 file (created via VirtualTableExternalRegister): external back-end, e.g. for disk spanning;
  • ext. SQLite3 mem (created via VirtualTableExternalRegister): fast external back-end (e.g. for testing);
  • ext. Oracle / MS SQL / Firebird (created via VirtualTableExternalRegister): fast, secure and industry standard; can be shared outside mORMot;
  • ext. NexusDB (created via VirtualTableExternalRegister): the free embedded version lets the whole engine be included within your executable, and insertion speed is higher than SQLite3, so it may be a good alternative if your project mostly inserts individual objects - using a batch within a transaction lets SQLite3 be the faster engine;
  • ext. Jet (created via VirtualTableExternalRegister): could be used as a data exchange format (e.g. with Office applications);
  • ext. Zeos/FireDAC/UniDAC (created via VirtualTableExternalRegister): allows access to several external engines, with some advantages for Zeos, since direct ZDBC access will by-pass the DB.pas unit and its TDataSet bottleneck - and we will also prefer an active Open Source project!

Whatever database back-end is used, don't forget that mORMot design will allow you to switch from one library to another, just by changing a TSQLDBConnectionProperties class type. And note that you can mix external engines, on purpose: you are not tied to one single engine, but the database access can be tuned for each ORM table, according to your project needs.
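For instance, a hedged sketch of such a switch (the connection parameters, the TSQLRecordSample class, and the Oracle class name from SynDBOracle.pas are assumptions to be checked against your installed SynDB* units):

```pascal
// pick one TSQLDBConnectionProperties class - the ORM code stays the same
Props := TSQLDBSQLite3ConnectionProperties.Create('data.db3','','','');
// Props := TSQLDBOracleConnectionProperties.Create('TnsName','','user','pass');
VirtualTableExternalRegister(Model,TSQLRecordSample,Props,'SampleRecord');
```

Only the properties class and its connection parameters change; the model, records and client code are left untouched.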

Feedback is welcome on our forum, as usual!

SynPDF now implements 40 bit and 128 bit security


The trunk version of our Open Source SynPdf library now features encryption using 40 bit or 128 bit key size.

This is a long-awaited feature, and it seems to work just fine in my tests.
Speed has been optimized (as usual with our libraries), so encrypting the content will make it only slightly slower.

In fact, TPdfEncryption.New() will create the expected TPdfEncryption instance, depending on the supplied encryption Level:

class function TPdfEncryption.New(aLevel: TPdfEncryptionLevel;
      const aUserPassword, aOwnerPassword: string;
      aPermissions: TPdfEncryptionPermissions): TPdfEncryption;

Here are some comments about this new method:

  • to be called as parameter of TPdfDocument/TPdfDocumentGDI.Create()
  • currently, only elRC4_40 and elRC4_128 levels are implemented
  • both passwords are expected to be ASCII-7 characters only
  • aUserPassword will be asked at file opening: to be set to '' for not blocking display, but optional permission
  • aOwnerPassword shall not be '', and will be used internally to cypher the pdf file content
  • aPermissions can be either one of the PDF_PERMISSION_ALL / PDF_PERMISSION_NOMODIF / PDF_PERMISSION_NOPRINT / PDF_PERMISSION_NOCOPY / PDF_PERMISSION_NOCOPYNORPRINT set of options

In practice, typical use may be:

 Doc := TPdfDocument.Create(false,0,false,
   TPdfEncryption.New(elRC4_40,'','toto',PDF_PERMISSION_NOMODIF));
 Doc := TPdfDocument.Create(false,0,false,
   TPdfEncryption.New(elRC4_128,'','toto',PDF_PERMISSION_NOCOPYNORPRINT));

Follow this link to get the latest trunk (unstable) version.

Feedback is welcome on our forum, as usual!

SynPDF now generates (much) smaller PDF file size

$
0
0

We have reduced the generated PDF file size for all versions of the PDF format, by trimming any unneeded space in the generated content.

We introduced a new optional PDFGeneratePDF15File property for even better compression, using advanced features (Object Streams and Cross-Reference Streams) of the PDF 1.5 format.
Files in this PDF 1.5 format need Acrobat 6.0 or later to be opened.
This should be the case on your computer.

The result can be up to 70% smaller for a PDF with a lot of pages of simple textual content.

I suspect our library is able to generate the smallest file size possible, even in comparison with other alternative libraries.
Open Source can be great, can't it?

We still have an issue which prevents the full benefit of PDFGeneratePDF15File=true when encryption is set.
But even if the files may be a bit bigger than when encryption is not set, the generated content is still perfectly valid... and encrypted!
If you have an idea about the problem cause, your feedback is welcome!

We can discuss on our forum.
