
Think free as in free speech, not free beer


After more than 5 years of opening up a huge part of my Delphi code base, here are my two cents.

Free software means free as a bird.

In practice, most Open Source "consumers" focus on free as in free beer...
This is a reality, especially for "niche" projects like developing libraries for Delphi.

Here are some thoughts from my little experiment with mORMot.
If you wonder what Open Source means for libraries, it may help you!

I mixed both PROs and CONs, since it is difficult to make a hierarchy of thoughts when you are as involved as I am.

1. Open Source is a great adventure, in which you encounter some very nice people, and learn from others;

2. Open Source does not give a lot of benefit, neither financial, nor for fame (do not expect much reward);

3. Select a permissive license (like MPL), and/or GPL/LGPL if your project needs such a viral license - but I do not know many Delphi programs released as GPL/LGPL;

4. Release soon: do not wait to post the code - when it works (it compiles and passes the regression tests), submit it, even if it is not perfect;

5. Release often: use a source code repository, and post every modification in it;

6. Do not keep to yourself the features you need for a particular client - needing to do so smells like wrongly designed code in your libraries, which may not be open enough for extension;

7. If you publish a library, try to document your code, and follow coding/naming conventions - if you do not have time for documenting, try at least some wiki;

8. Have a forum for support, but do not expect other users to help - usually, only the main contributors will post on the forum - others may feel too shy;

9. Be gentle and patient with every user (unless he/she is a troll or openly incompetent), and be enthusiastic about any code contribution (if you can, integrate it ASAP in the trunk);

10. Most of your users will ask for free support, or even debugging of their own code - learn to say NO;

11. Feedback is welcome, even from people who do not like what you did;

12. If several co-workers use your libraries, let everyone be involved in support;

13. Have a bug tracking web site, and distinguish bugs from feature requests;

14. Set priorities for tickets, especially feature requests: implement first those YOU need, then those you may be paid for, then those you may have fun working on, and let people contribute on their side for the rest;

15. Use your public web site to track and discuss any bug or feature request you encounter on your side about the libraries (it will benefit all);

16. A set of regression tests with good coverage is mandatory;

17. You can offer paid support for your libraries, but you will only be asked on a few occasions;

18. Some companies or individuals will ignore your libraries (why was our blog never accepted on DelphiFeeds?), perhaps because they do not understand Open Source, or see it as some "unfair" competition;

19. Do not hide anything - neither restrictions nor known issues;

20. Try to make your site appealing, but do not abuse marketing - good marketing with bad code killed the component market - good code is the priority;

21. Do your best to support several versions of the Delphi compiler - a lot of users, especially in Open Source, are still using old (pre-Unicode!) versions;

22. The design and architecture level of some Delphi users is somewhat low - most have used the tool in pure RAD mode, and are afraid of, or ignorant about, modern programming practices (like SOLID, DDD, stubbing, unit testing...);

23. If you incorporate some code from another Open Source project, clearly state it (good), or rewrite it from scratch (even better - with the corresponding unit tests);

24. Delphi is a very small market, not trendy, especially for young developers;

25. Open Sourcing your libraries can be time consuming (e.g. if you are as perfectionist as I am) - the main point is to balance your investment with the benefit of sharing;

26. Thanks to a larger base of users, you will find bugs you would never have discovered otherwise before production (e.g. with Asian versions of Windows, or under heavy load);

27. Participate in the Delphi community outside of your own project(s), e.g. on StackOverflow - it will also help your web site's ranking;

28. Your code will remain forever in the Internet archives, and you will never be forgotten;

29. Even your managers can easily be convinced of the benefits of Open Sourcing some part of their code, and also that you may spend some (identified) part of your time maintaining a community;

30. Sharing is everything.

Feedback is welcome in our forum!


Are NoSQL databases ACID?


One of the main features you may miss when discovering NoSQL ("Not-Only SQL"?) databases, coming from an RDBMS background, is ACID.

ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. (Wikipedia)

But are there any ACID NoSQL databases?

Please ensure you read the Martin Fowler introduction about NoSQL databases.
And the corresponding video.

First of all, we can distinguish two types of NoSQL databases:

  1. Aggregate-oriented databases;
  2. Graph-oriented databases (e.g. Neo4J).

By design, most Graph-oriented databases are ACID!
This is a first good point.

Then, what about the other type?
In Aggregate-oriented databases, we can identify three sub-types:

  1. Document-oriented databases;
  2. Key/Value databases;
  3. Column-oriented databases.

Whether they are document-, key/value- or column-oriented, they all rely on some kind of document storage.
It may be schema-less, blob-stored, column-driven, but it is always a set of values bound together to be persisted.
This set of values defines a particular state of one entity, in a given model.
We may call it an Aggregate.

What we call an Aggregate here is what Eric Evans defined in his Domain-Driven Design as a self-sufficient cluster of Entities and Value-Objects in a given Bounded Context.

As a consequence, an aggregate is a collection of data that we interact with as a unit.
Aggregates form the boundaries for ACID operations with the database.
(Martin Fowler)

So, at Aggregate level, we can say that most NoSQL databases can be as safe as ACID RDBMS, with the proper settings.
Of course, if you tune your server for the best speed, you may end up with something non-ACID. But replication will help.

My main point is that you have to use NoSQL databases as they are, not as a (cheap) alternative to RDBMS.
I have seen too many projects abusing relations between documents. This cannot be ACID.
If you stay at document level, i.e. at Aggregate boundaries, you do not need any transaction.
And your data will be as safe as with an ACID database, even if it is not truly ACID, since you do not need those transactions!

If you need transactions and update several "documents" at once, you are not in the NoSQL world any more - so use a RDBMS engine instead!
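
To make this more concrete, here is a minimal sketch using the TDocVariant custom type from SynCommons (field names are of course hypothetical): the whole Aggregate, including its detail lines, fits in a single document, so no transaction spanning several documents is ever needed.

uses SynCommons;

procedure AggregateAsOneDocument;
var order: variant;
begin
  // one self-contained document = one Aggregate, with its nested detail lines
  order := _ObjFast(['customer','John Doe',
    'lines',_Arr([
      _ObjFast(['product','book','quantity',2]),
      _ObjFast(['product','pen','quantity',10])])]);
  // saving or loading "order" is atomic at document level:
  // no transaction spanning several documents is required
  writeln(VariantSaveJSON(order));
end;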

We are currently cooking native direct MongoDB access in our labs.
See our SynMongoDB.pas unit...
There is still work to do, but I suspect this will be another unique feature of mORMot, when the corresponding mORMotMongoDB.pas unit provides a highly-optimized bridge between our RESTful ORM and MongoDB.
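
If you are curious, here is a purely speculative sketch of what such direct access could look like - the unit is still cooking, so class names and method signatures may change:

uses SynCommons, SynMongoDB;

procedure MongoDBTeaser;
var Client: TMongoClient;
    Coll: TMongoCollection;
begin
  Client := TMongoClient.Create('localhost', 27017);
  try
    // retrieve - or create - a collection in the "test" database
    Coll := Client.Database['test'].CollectionOrCreate['people'];
    // insert one document, using JSON-like extended syntax with parameters
    Coll.Insert('{name:?,year:?}', ['John', 1972]);
  finally
    Client.Free;
  end;
end;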

Stay tuned!

ORM enhanced for BATCH insert


We just committed some nice features to the ORM kernel, and SynDB* classes of our mORMot framework.

During BATCH insertion, the ORM is able to generate optimized SQL statements, depending on the target database, to send several rows of data at once.
This induces a noticeable speed increase when saving several objects into an external database.

This feature is available for SQlite3 (3.7.11 and later), MySQL, PostgreSQL, MS SQL Server (2008 and up), Oracle, Firebird and NexusDB.
Since it works at SQL level, it is available for all supported access libraries, e.g. ODBC, OleDB, Zeos/ZDBC, UniDAC, FireDAC.
It means that even providers not implementing array binding (like OleDB, Zeos or UniDAC) get a huge boost at data insertion, ready to compete with the (until now) more optimized libraries.

To be more specific:

SQLite3, MySQL, PostgreSQL, MSSQL 2008 and NexusDB handle INSERT statements with multiple VALUES, in the following SQL-92 standard syntax, using parameters:

INSERT INTO TABLE (column-a, [column-b, ...])
VALUES ('value-1a', ['value-1b', ...]),
('value-2a', ['value-2b', ...]),
...

Oracle implements the weird but similar syntax (note the mandatory SELECT at the end):

INSERT ALL
INTO phone_book VALUES ('John Doe', '555-1212')
INTO phone_book VALUES ('Peter Doe', '555-2323')
SELECT * FROM DUAL;

Firebird implements its own syntax:

execute block
as
begin
INSERT INTO phone_book VALUES ('John Doe', '555-1212');
INSERT INTO phone_book VALUES ('Peter Doe', '555-2323');
end

As a result, some engines show a nice speed boost when used in BATCH.
Even SQLite3 is faster when used as external engine, with respect to direct execution of individual prepared statements in a loop!
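
From the Delphi side, nothing changes in your code: the multi-value statements are generated automatically during any BATCH process. As a minimal sketch (the TSQLRecordPeople class and variable names are illustrative, and exact method signatures may vary between framework revisions):

procedure InsertManyPeople(Client: TSQLRestClientURI);
var rec: TSQLRecordPeople;
    IDs: TIDDynArray;
    i: integer;
begin
  rec := TSQLRecordPeople.Create;
  try
    Client.BatchStart(TSQLRecordPeople); // start collecting rows
    for i := 1 to 1000 do begin
      rec.FirstName := FormatUTF8('Name #%',[i]);
      Client.BatchAdd(rec,true); // SendData=true: all published fields are sent
    end;
    Client.BatchSend(IDs); // one round trip, as multi-value INSERTs when possible
  finally
    rec.Free;
  end;
end;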

Here are some insertion results, to compare with the previous benchmark, which did not include these enhancements:

                        |  Direct |  Batch |  Trans | Batch Trans
SQLite3 (file full)     |     488 |    463 |  97498 |      126256
SQLite3 (file off)      |     789 |    815 | 101010 |      130561
SQLite3 (file off exc)  |   31376 |  35785 | 104410 |      136328
SQLite3 (mem)           |   88070 | 106981 | 106215 |      144270
TObjectList (static)    |  308584 | 545732 | 311837 |      535733
TObjectList (virtual)   |  308413 | 539548 | 316997 |      527537
SQLite3 (ext full)      |     308 |  12151 | 107469 |      170636
SQLite3 (ext off)       |     776 |  22404 | 111819 |      188316
SQLite3 (ext off exc)   |   42213 | 182561 | 111642 |      197464
SQLite3 (ext mem)       |   98531 | 228634 | 112004 |      227489
ZEOS SQlite3            |     497 |  12071 |  56489 |       72720
ODBC SQlite3            |     509 |  12480 |  38996 |       82581
FireDAC SQlite3         |   24992 |  50065 |  21985 |      156887
UniDAC SQlite3          |     469 |   8981 |  27667 |       39239
NexusDB                 |    5996 |  15494 |   7687 |       18619
ZEOS Firebird           |   12732 |  13848 |  27456 |       30724
ODBC Firebird           |    1745 |  18366 |  14419 |       18993
FireDAC Firebird        |   24000 |  50329 |  24050 |       51423
UniDAC Firebird         |    6373 |  14801 |   6474 |       14675
Jet                     |    4252 |   4561 |   5016 |        5208
Oracle                  |     310 |  42327 |   1046 |       61661
ODBC Oracle             |     337 |   3962 |   1356 |        5197
FireDAC Oracle          |     458 |  35160 |   1451 |       37204
UniDAC Oracle           |     289 |   3065 |   1140 |        5747
BDE Oracle              |     489 |    927 |    839 |        1022
MSSQL local             |    5266 |  54417 |  13659 |       62706
ODBC MSSQL              |    5050 |  18739 |  11804 |       20796
FireDAC MSSQL           |    4989 |   7315 |  11267 |       50520
UniDAC MSSQL            |    4404 |  30845 |   8879 |       34933

This feature is implemented at ORM level, so it benefits any external database library.
Of course, if a given library has a better option (e.g. our direct Oracle or FireDAC array binding), it is used instead.

You can note that we included access to embedded Firebird via ODBC, using the official driver.
And also SQLite3 access via ODBC, using this nice full-featured BSD licensed driver.
It sounds like a not-so-optimized solution, e.g. with respect to a direct ZDBC/ZEOS connection, but it is a nice showcase of ODBC connectivity with mORMot.

Reading speed is not affected by this modification, so we won't publish new data here.
Note that now our native access to external databases outperforms any third-party drivers, with the only exception of Firebird, which is still most efficiently accessed via FireDAC.
The SAD 1.18 pdf includes the latest benchmark.

If you want to use a map/reduce algorithm in your application, or DDD's Unit Of Work pattern, in addition to ORM data access, all those enhancements may speed up your process a lot. Reading and writing huge amounts of data has never been so fast and easy: you may even be tempted to replace stored-procedure logic by high-level code implemented in your Domain services. N-tier separation would benefit from it.

Feedback is welcome on our forum, as usual.

Support of MySQL, DB2 and PostgreSQL


We just tested, benchmarked and validated Oracle MySQL, IBM DB2 and PostgreSQL support for our SynDB database classes and the mORMot ORM core.
This article will also show all updated results, including our newly introduced multi-value INSERT statement generation, which speeds up BATCH insertion a lot.

Stay tuned!

The purpose here is not to say that one library or database is better or faster than another, but to publish a snapshot of the mORMot persistence layer abilities, depending on each access library.

In this timing, we do not benchmark only the "pure" SQL/DB layer access (SynDB units), but the whole Client-Server ORM of our framework.

The process below includes all aspects of our ORM:

  • Access via high level CRUD methods (Add/Update/Delete/Retrieve, either per-object or in BATCH mode);
  • Read and write access of TSQLRecord instances, via optimized RTTI;
  • JSON marshaling of all values (ready to be transmitted over a network);
  • REST routing, with security, logging and statistics;
  • Virtual cross-database layer using its SQLite3 kernel;
  • SQL on-the-fly generation and translation (in virtual mode);
  • Access to the database engines via several libraries or providers.

In those tests, we just bypassed the communication layer, since TSQLRestClient and TSQLRestServer are run in-process, in the same thread - as a TSQLRestServerDB instance. So you have here some raw performance testimony of our framework's ORM and RESTful core, and may expect good scaling abilities when running on high-end hardware, over a network.

On a recent notebook computer (Core i7 and SSD drive), depending on the back-end database interfaced, mORMot excels in speed, as the following benchmark shows:

  • You can persist up to 570,000 objects per second, or retrieve 870,000 objects per second (for our pure Delphi in-memory engine);
  • When data is retrieved from the server or client cache, you can read more than 900,000 objects per second, whatever the database back-end is;
  • With a high-performance database like Oracle, and our direct access classes, you can write 70,000 (via array binding) and read 160,000 objects per second, over a 100 Mbit network;
  • When using alternate database access libraries (e.g. Zeos, or DB.pas based classes), speed is lower (even if comparable for DB2, MS SQL, PostgreSQL, MySQL) but still enough for most work, thanks to some optimizations in the mORMot code (e.g. caching of prepared statements, SQL multi-values insertion, direct export to/from JSON, the SQLite3 virtual mode design, avoiding most temporary memory allocations...).

It would be difficult to find a faster ORM, I suspect.

Software and hardware configuration

The following tables try to sum up all available possibilities, and give some benchmarks (average objects/second for writing or reading).

In these tables:
- 'SQLite3 (file full/off/exc)' indicates use of the internal SQLite3 engine, with or without Synchronous := smOff and/or DB.LockingMode := lmExclusive;
- 'SQLite3 (mem)' stands for the internal SQLite3 engine running in memory;
- 'SQLite3 (ext ...)' is about access to a SQLite3 engine as an external database, either as file or in memory;
- 'TObjectList' indicates a TSQLRestServerStaticInMemory instance, either static (with no SQL support) or virtual (i.e. SQL featured via the SQLite3 virtual table mechanism), which may persist the data on disk as JSON or compressed binary;
- 'NexusDB' is the free embedded edition, available from the official site;
- 'Jet' stands for a MSAccess database engine, accessed via OleDB.
- 'Oracle' shows the results of our direct OCI access layer (SynDBOracle.pas);
- 'Zeos *' indicates that the database was accessed directly via the ZDBC layer;
- 'FireDAC *' stands for FireDAC library;
- 'UniDAC *' stands for UniDAC library;
- 'BDE *' when using a BDE connection;
- 'ODBC *' for a direct access to ODBC.

This list of database providers is to be extended in the future. Any feedback is welcome!

Numbers are expressed in rows/second (or objects/second). This benchmark was compiled with Delphi XE4, since newer compilers tend to give better results, mainly thanks to function in-lining (which did not exist e.g. in Delphi 6-7).

Note that these tests are not about the relative speed of each database engine, but reflect the current status of the integration of several DB libraries within the mORMot database access.

Benchmark was run on a Core i7 notebook, running Windows 7, with a standard SSD, including anti-virus and background applications:
- Linked to a shared Oracle 11.2.0.1 database over 100 Mb Ethernet;
- MS SQL Express 2008 R2 running locally in 64 bit mode;
- IBM DB2 Express-C edition 10.5 running locally in 64 bit mode;
- PostgreSQL 9.2.7 running locally in 64 bit mode;
- MySQL 5.6.16 running locally in 64 bit mode;
- Firebird embedded in revision 2.5.2;
- NexusDB 3.11 in Free Embedded Version.

So it was a development environment, very similar to a low-cost production site, not dedicated to giving the best performance. During the process, the CPU was noticeably used only for SQLite3 in-memory and TObjectList - most of the time, the bottleneck is not the CPU, but the storage or the network. As a result, rates and timing may vary depending on network and server load, but you get results similar to what could be expected on the customer side, with an average hardware configuration. When using high-end servers and storage, running on a tuned Linux configuration, you can expect even better numbers.

Tests were compiled for the Delphi XE4 32 bit target platform. Most of the tests do pass when compiled as a 64 bit executable, with the exception of some providers (like Jet), which are not available on this platform. Speed results are almost the same, only slightly slower, so we won't show them here.

You can compile the "15 - External DB performance" supplied sample code, and run the very same benchmark on your own configuration.
Feedback is welcome!

From our tests, the UniDAC version we were using had huge stability issues when used with DB2: the tests did not pass, and the DB2 server just hung processing the queries, whereas there was no problem with the other libraries. It may have been fixed since, but you won't find any "UniDAC DB2" results in the benchmark below in the meantime.

Insertion speed

Here we insert 5,000 rows of data, with diverse scenarios:
- 'Direct' stands for an individual Client.Add() insertion;
- 'Batch' mode is the BatchAdd() process described below;
- 'Trans' indicates that all insertions are nested within a transaction - which makes a great difference, e.g. with a SQLite3 database.
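
For reference, here is a minimal sketch of the 'Trans' scenario (class and variable names are illustrative):

// nest all individual Client.Add() calls within a single transaction
if Client.TransactionBegin(TSQLRecordPeople) then
  try
    for i := 1 to 5000 do
      Client.Add(rec,true); // still one INSERT per call...
    Client.Commit;          // ...but one single final commit
  except
    Client.RollBack;        // on error, abort the whole insertion
  end;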

                        |  Direct |  Batch |  Trans | Batch Trans
SQLite3 (file full)     |     462 |    356 |  95377 |      130086
SQLite3 (file off)      |     844 |    821 | 100389 |      136675
SQLite3 (file off exc)  |   28847 |  35316 | 102599 |      144258
SQLite3 (mem)           |   89456 | 120513 | 104249 |      146933
TObjectList (static)    |  314465 | 543892 | 326370 |      542652
TObjectList (virtual)   |  325393 | 545672 | 298846 |      545018
SQLite3 (ext full)      |     424 |  11297 | 102049 |      164636
SQLite3 (ext off)       |     830 |  21406 | 109706 |      189250
SQLite3 (ext off exc)   |   41589 | 180759 | 108481 |      192071
SQLite3 (ext mem)       |  101440 | 234576 | 113530 |      190142
ODBC SQLite3            |     492 |  11746 |  35367 |       82425
ZEOS SQlite3            |     494 |  11851 |  56206 |       85705
FireDAC SQlite3         |   26369 |  50306 |  49755 |      155115
UniDAC SQlite3          |     477 |   8725 |  26552 |       38756
ODBC Firebird           |    1495 |  18056 |  13485 |       17731
ZEOS Firebird           |    9733 |  13429 |  26348 |       30616
FireDAC Firebird        |   24233 |  52021 |  24791 |       52111
UniDAC Firebird         |    5986 |  14809 |   6522 |       14948
Jet                     |    4235 |   4424 |   4954 |        5094
NexusDB                 |    5998 |  15494 |   7687 |       18619
Oracle                  |     226 |  56112 |   1133 |       52367
ODBC Oracle             |     236 |   1664 |   1515 |        7709
FireDAC Oracle          |     118 |  48575 |   1519 |       12566
UniDAC Oracle           |     164 |   5701 |   1215 |        2884
BDE Oracle              |     489 |    927 |    839 |        1022
MSSQL local             |    5246 |  54360 |  12988 |       62453
ODBC MSSQL              |    4911 |  18652 |  11541 |       20976
FireDAC MSSQL           |    5016 |   7341 |  11686 |       51242
UniDAC MSSQL            |    4392 |  29768 |   8649 |       33464
ODBC DB2                |    4792 |  48387 |  14085 |       70104
FireDAC DB2             |    4452 |  48635 |  11014 |       52781
ZEOS PostgreSQL         |    4196 |  26663 |   9689 |       38735
ODBC PostgreSQL         |    4068 |  19515 |   5130 |       27843
FireDAC PostgreSQL      |    4181 |  37000 |  10111 |       36483
UniDAC PostgreSQL       |    2705 |  18563 |   4442 |       22317
ODBC MySQL              |    3160 |  38309 |  10856 |       47630
ZEOS MySQL              |    3426 |  34037 |  12217 |       40186
FireDAC MySQL           |    3078 |  43053 |  10955 |       45781
UniDAC MySQL            |    3119 |  27772 |  11246 |       33288

Due to its ACID implementation, the SQLite3 process on file waits for the hard disk to finish flushing its data, which is why it is slower than other engines at individual row insertion (less than 10 objects per second with a mechanical hard drive instead of an SSD) outside the scope of a transaction.

So if you want to reach the best writing performance in your application with the default engine, you should better use transactions and regroup all writing into services or a BATCH process. Another possibility could be to execute DB.Synchronous := smOff and/or DB.LockingMode := lmExclusive at SQLite3 engine level before the process: in case of power loss at the wrong time it may corrupt the database file, but it will increase the rate by a factor of 50 (with a hard drive), as stated by the "off" and "off exc" rows of the table. Note that by default, the FireDAC library sets both options, so its results above are to be compared with the "SQLite3 off exc" rows.
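
In practice, with a TSQLRestServerDB instance - here named aServer, as an assumption - those two settings are one line each:

aServer.DB.Synchronous := smOff;       // do not wait for the disk to flush
aServer.DB.LockingMode := lmExclusive; // keep the file locked by this process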

For both our direct Oracle access in the SynDBOracle.pas unit and the FireDAC library, the BATCH process benefits a lot from the array binding feature (known as Array DML in FireDAC/AnyDAC).

For most engines, our ORM kernel is able to generate the appropriate SQL statement for speeding up bulk insertion. For instance:
- SQLite3, MySQL, PostgreSQL, MSSQL 2008, DB2 and NexusDB handle INSERT statements with multiple INSERT INTO .. VALUES (..),(..),(..)..;
- Oracle handles INSERT INTO .. INTO .. SELECT 1 FROM DUAL (weird syntax, isn't it?);
- Firebird implements EXECUTE BLOCK.

As a result, some engines show a nice speed boost when BatchAdd() is used. Even SQLite3 is faster when used as external engine, with respect to direct execution! This feature is implemented at ORM/SQL level, so it benefits any external database library. Of course, if a given library has a better implementation pattern (e.g. our direct Oracle or FireDAC with native array binding), it is used instead.

Reading speed

Now the same data is retrieved via the ORM layer:
- 'By one' states that one object is read per call (the ORM generates a SELECT * FROM table WHERE ID=? for each Client.Retrieve() method call);
- 'All *' is when all 5000 objects are read in a single call (i.e. running SELECT * FROM table via a FillPrepare() method call), either forced to use the virtual table layer, or with a direct static call.

Here are some reading speed values, in objects/second:

                        |  By one | All Virtual | All Direct
SQLite3 (file full)     |   27284 |      558721 |     550842
SQLite3 (file off)      |   26896 |      549450 |     526149
SQLite3 (file off exc)  |  128077 |      557537 |     535905
SQLite3 (mem)           |  127106 |      557537 |     563316
TObjectList (static)    |  300012 |      912408 |     913742
TObjectList (virtual)   |  303287 |      402706 |     866551
SQLite3 (ext full)      |  135380 |      267436 |     553158
SQLite3 (ext off)       |  133696 |      262977 |     543065
SQLite3 (ext off exc)   |  134698 |      264186 |     558596
SQLite3 (ext mem)       |  137487 |      259713 |     557475
ODBC SQLite3            |   19461 |      136600 |     201280
ZEOS SQlite3            |   33541 |      200835 |     306955
FireDAC SQlite3         |    7683 |       83532 |     112470
UniDAC SQlite3          |    2522 |       74030 |      96420
ODBC Firebird           |    3446 |       69607 |      97585
ZEOS Firebird           |   20296 |       91974 |     107229
FireDAC Firebird        |    2376 |       46276 |      56269
UniDAC Firebird         |    2189 |       66886 |      88102
Jet                     |    2640 |      166112 |     258277
NexusDB                 |    1413 |      120845 |     208246
Oracle                  |    1558 |      120977 |     159861
ODBC Oracle             |    1620 |       43441 |      45764
FireDAC Oracle          |    1231 |       42149 |      54795
UniDAC Oracle           |     688 |       27083 |      30093
BDE Oracle              |     860 |        3870 |       4036
MSSQL local             |   10135 |      210837 |     437905
ODBC MSSQL              |   12458 |      147544 |     256502
FireDAC MSSQL           |    3776 |       72123 |      94091
UniDAC MSSQL            |    2505 |       93231 |     135932
ODBC DB2                |    7649 |       84880 |     124486
FireDAC DB2             |    3155 |       71456 |      88264
ZEOS PostgreSQL         |    8833 |      158760 |     223583
ODBC PostgreSQL         |   10361 |       85680 |     120913
FireDAC PostgreSQL      |    2261 |       58252 |      79002
UniDAC PostgreSQL       |     864 |       86900 |     122856
ODBC MySQL              |   10143 |       65538 |      82447
ZEOS MySQL              |   20521 |       71803 |     245772
FireDAC MySQL           |    3636 |       75081 |     105028
UniDAC MySQL            |    4798 |       99940 |     146968

The SQLite3 layer gives amazing reading results, which makes it a perfect fit for most typical ORM use. When running with DB.LockingMode := lmExclusive defined (i.e. "off exc" rows), reading speed is very high, and benefits from exclusive access to the database file. External database access is only required when data is expected to be shared with other processes.

In the above table, it appears that all libraries based on DB.pas are slower than the others for reading speed. In fact, TDataSet appears to be a real bottleneck, due to its internal data marshalling. Even FireDAC, which is known to be very optimized for speed, is limited by the TDataSet structure. Our direct classes, and even ZEOS/ZDBC, perform better, since they are able to output JSON content with no additional marshalling.

For both writing and reading, TObjectList / TSQLRestServerStaticInMemory engine gives impressive results, but has the weakness of being in-memory, so it is not ACID by design, and the data has to fit in memory. Note that indexes are available for IDs and stored AS_UNIQUE properties.

As a consequence, searching non-unique values may be slow: the engine has to loop through all rows of data. But for unique values (defined as stored AS_UNIQUE), both insertion and search speed is awesome, thanks to an optimized O(1) hash algorithm - see the following benchmark, especially the "By name" row for the "TObjectList" columns, which corresponds to the search of a unique RawUTF8 property value via this hashing method.

           | SQLite3     | SQLite3    | SQLite3 | TObjectList | TObjectList | SQLite3         | SQLite3        | SQLite3   | Oracle | Jet
           | (file full) | (file off) | (mem)   | (static)    | (virt.)     | (ext file full) | (ext file off) | (ext mem) |        |
By one     |       10461 |      10549 |   44737 |      103577 |      103553 |           43367 |          44099 |     45220 |    901 |  1074
By name    |        9694 |       9651 |   32350 |       70534 |       60153 |           22785 |          22240 |     23055 |    889 |  1071
All Virt.  |      167095 |     162956 |  168651 |      253292 |      118203 |           97083 |          90592 |     94688 |  56639 | 52764
All Direct |      167123 |     144250 |  168577 |      254284 |      256383 |          170794 |         165601 |    168856 |  88342 | 75999

Above table results were run on a Core 2 duo laptop, so numbers are lower than with the previous tables.
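
As a reminder, such a unique field is declared with the stored AS_UNIQUE convention - a minimal sketch, with an illustrative class:

type
  TSQLRecordCustomer = class(TSQLRecord)
  protected
    fName: RawUTF8;
  published
    // "stored AS_UNIQUE" creates a unique index - and, for the
    // TObjectList/TSQLRestServerStaticInMemory engine, an O(1) hash lookup
    property Name: RawUTF8 read fName write fName stored AS_UNIQUE;
  end;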

During the tests, internal caching was disabled, so you may expect speed enhancements for real applications, where data is read more often than written: for instance, when an object is retrieved from the cache, you achieve more than 1,000,000 read requests per second, whatever database is used.
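
Enabling this ORM cache takes one or two lines, on client and/or server side - a quick sketch, assuming an illustrative TSQLRecordPeople table:

Client.Cache.SetCache(TSQLRecordPeople);          // cache all records of this table
Client.Cache.SetTimeOut(TSQLRecordPeople,60000);  // optional: invalidate after 60 s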

Analysis and use case proposal

When declared as a virtual table (via a VirtualTableRegister call), you have the full power of SQL (including JOINs) at hand, with incredibly fast CRUD operations: 100,000 requests per second for object read and write, including serialization and Client-Server communication!

Some providers are first-class citizens to mORMot, like SQLite3, Oracle, MS SQL, PostgreSQL, MySQL or IBM DB2. You can connect to them without the bottleneck of the DB.pas unit, nor any restriction of your Delphi license (a Starter edition is enough).

First of all, SQLite3 is still to be considered, even for a production server. Thanks to mORMot's architecture and design, this "embedded" database could be used as the main database engine for a client-server application with heavy concurrent access - if you have doubts about its scaling abilities, see this blog article. Here, "embedded" is not restricted to "mobile", but means a self-contained, zero-configuration, proven engine.

Most recognized closed source databases are available:
- Direct access to Oracle gives impressive results in BATCH mode (aka array binding). It may be an obligation if your end-customer already stores its data in such a server, for instance, and wants to leverage the licensing cost of its own IT solution. Oracle Express edition is free, but somewhat heavy and limited in terms of data/hardware size (see its licensing terms);
- MS SQL Server, directly accessed via OleDB (or ODBC), gives pretty good timing. A MS SQL Server 2008 R2 Express instance is pretty well integrated with the Windows environment, for a very affordable price (i.e. for free) - the LocalDB (MSI installer) edition is enough to start with, but also with data/hardware size limitations, just like Oracle Express;
- IBM DB2 is another good candidate, and the Express-C ("C" standing for Community) edition offers a no-charge opportunity to run an industry standard engine, with no restriction on the data size, but with somewhat high hardware limitations (16 GB of RAM and 2 CPU cores for the latest 10.5 release) and without some enterprise-level features;
- NexusDB may be considered, if you have existing Delphi code and data - but it is less known and recognized than its commercial competitors.

Open Source databases are worth considering, especially in conjunction with an Open Source framework like mORMot:
- MySQL is the well-known engine used by a lot of web sites, mainly with LAMP (Linux Apache MySQL PHP) configurations. Windows is not the best platform to run it, but it could be a fairly good candidate, especially in its MariaDB fork, which sounds more attractive these days than the official main version, owned by Oracle;
- PostgreSQL is an Enterprise class database, with amazing features among its Open Source alternatives, which really competes with commercial solutions. Even under Windows, we think it is easy to install and administrate, and it uses fewer resources than the other commercial engines;
- Firebird gave pretty consistent timing, when accessed via Zeos/ZDBC. We show here the embedded version, but the server edition is also worth considering.

To access those databases, OleDB, ODBC or ZDBC providers may also be used, with direct access. But mORMot is very open-minded: you can use any DB.pas provider, e.g. FireDAC, UniDAC, DBExpress, NexusDB or even the BDE, but with the additional layer introduced by the TDataSet instance at reading time.

Therefore, the typical use may be the following (int. meaning "internal", ext. for "external", mem for "in-memory"):

Database            | Use case
int. SQLite3 file   | Created by default. General safe data handling, with amazing speed in "off exc" mode
int. SQLite3 mem    | Created with :memory: file name. Fast data handling with no persistence (e.g. for testing or temporary storage)
TObjectList static  | Created with StaticDataCreate. Best possible performance for a small amount of data, without ACID nor SQL
TObjectList virtual | Created with VirtualTableRegister. Best possible performance for SQL over a small amount of data (or even an unlimited amount under Win64), if ACID is not required nor complex SQL
ext. SQLite3 file   | Created with VirtualTableExternalRegister. External back-end, e.g. for disk spanning
ext. SQLite3 mem    | Created with VirtualTableExternalRegister. Fast external back-end (e.g. for testing)
ext. Oracle / MS SQL / DB2 / PostgreSQL / MySQL / Firebird | Created with VirtualTableExternalRegister. Fast, secure and industry standard back-ends; data can be shared outside mORMot
ext. NexusDB        | Created with VirtualTableExternalRegister. The free embedded version lets the whole engine be included within your executable, and reuse any existing code, but SQLite3 sounds like a better option
ext. Jet            | Created with VirtualTableExternalRegister. Could be used as a data exchange format (e.g. with Office applications)
ext. Zeos           | Created with VirtualTableExternalRegister. Allows access to several external engines, with direct Zeos/ZDBC access which will by-pass the DB.pas unit and its TDataSet bottleneck - and we will also prefer an active Open Source project!
ext. FireDAC/UniDAC | Created with VirtualTableExternalRegister. Allows access to several external engines, via the DB.pas unit and its TDataSet bottleneck

Whatever database back-end is used, don't forget that mORMot design will allow you to switch from one library to another, just by changing a TSQLDBConnectionProperties class type. And note that you can mix external engines, on purpose: you are not tied to one single engine, but the database access can be tuned for each ORM table, according to your project needs.
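
For instance, switching the very same ORM code from an external SQLite3 file to Oracle is only a matter of instantiating another TSQLDBConnectionProperties class - the connection parameters below are of course illustrative:

Props := TSQLDBSQLite3ConnectionProperties.Create('data.db3','','','');
// or, to target Oracle via SynDBOracle instead, with no other code change:
// Props := TSQLDBOracleConnectionProperties.Create('TnsName','','user','pass');
VirtualTableExternalRegister(Model,TSQLRecordPeopleExt,Props,'PeopleExternal');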

Feedback is welcome in our forum, as usual.

ORM mapping class fields to external table columns


When working with an ORM, you mainly have two possibilities:

  1. Start from scratch, i.e. write your classes and let the ORM create the whole database structure - this is also named "code-first";
  2. Start from an existing database, and define in your model how your classes map the existing database structure - this is "database-first".

We have just finalized ORM external table field mapping in mORMot, using e.g.
aModel.Props[aExternalClass].ExternalDB.MapField(..)
See this last commit.

So you can write e.g.

fProperties := TSQLDBSQLite3ConnectionProperties.Create(
  SQLITE_MEMORY_DATABASE_NAME,'','','');
VirtualTableExternalRegister(fExternalModel,
  TSQLRecordPeopleExt,fProperties,'PeopleExternal');
fExternalModel.Props[TSQLRecordPeopleExt].ExternalDB.
  MapField('ID','Key').
  MapField('YearOfDeath','YOD');

Then you use your TSQLRecordPeopleExt table as usual from Delphi code, with ID and YearOfDeath fields:

  • The "internal" TSQLRecord class will be stored within the PeopleExternal external table;
  • The "internal" TSQLRecord.ID field will be an external "Key: INTEGER" column;
  • The "internal" TSQLRecord.YearOfDeath field will be an external "YOD: BIGINT" column;
  • Other internal published properties will be mapped by default, with the same name, to external columns.
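
For reference, here is a minimal sketch of what such a class may look like - the property list is an assumption, close to the one used in the framework regression tests:

type
  TSQLRecordPeopleExt = class(TSQLRecord)
  protected
    fFirstName: RawUTF8;
    fLastName: RawUTF8;
    fYearOfBirth: integer;
    fYearOfDeath: word;
  published
    property FirstName: RawUTF8 read fFirstName write fFirstName;
    property LastName: RawUTF8 read fLastName write fLastName;
    property YearOfBirth: integer read fYearOfBirth write fYearOfBirth;
    // mapped to the external "YOD: BIGINT" column by the MapField() call above
    property YearOfDeath: word read fYearOfDeath write fYearOfDeath;
  end;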

By default, every field / column is mapped with the same name on both sides. With the customized mapping above, two fields are mapped differently: ID to Key, and YearOfDeath to YOD.

Due to the design of SQLite3 virtual tables, and mORMot internals in its current state, the database primary key must be an INTEGER field to be mapped as expected by the ORM.

The next step is to allow on-the-fly conversion of values, from internal to external...
This is required to use our ORM with an existing complex database.
For instance, it may help to use another primary key (i.e. a way to map any varchar(250) value from/to an integer value for a primary key), or to map a hashed column of a limited set of values into a Delphi enumeration, or to map a foreign key to some fixed list of items, also using a Delphi enumeration.

Not so easy to implement...

I see at least several patterns:

  • My current best idea is that we may have to generate ALTER TABLE ADD COLUMN for a new INTEGER NOT NULL UNIQUE column on the table, so that we will be able to use it as the primary key on an existing table;
  • Another option may be to maintain a separate IntegerID/Varchar250Key mapping (e.g. in one or several dedicated SQLite3 tables). But this may be slow and confusing, especially when the DB is accessed directly from other non-mORMot applications;
  • Last, we could use a callback to compute the ID to/from the varchar(250) value, but we may have collisions, so it could also be very difficult to work with.

What do you think?

Feedback is welcome on our forum, as usual.

Enhanced and fixed late-binding of variants for Delphi XE2 and up


For several units of our framework, we allow late-binding of data values, using a variant and direct named access to properties:
- In SynCommons, we defined our TDocVariant custom variant type, able to store any JSON/BSON document-based content;
- In SynBigTable, we use the TSynTableVariantType custom variant type, as defined in SynCommons;
- In SynDB, we defined a TSQLDBRowVariantType, ready to access any column of a RDBMS data result set row;
- In mORMot, we allow access to TSQLTableRowVariantType column values.

It's a very convenient way of accessing result row values. Code remains very readable, and safe at the same time.

For instance, we can write:

var V: variant;
 ...
  TDocVariant.New(V); // or slightly slower V := TDocVariant.New;
  V.name := 'John';
  V.year := 1972;
  // now V contains {"name":"John","year":1972}

This is just another implementation of KISS design in our framework.
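
The SynDB flavour follows the same pattern - a short sketch, assuming a TSQLDBConnectionProperties instance connected to a database containing a Sales.Customer table (the query itself is illustrative):

procedure TestRowLateBinding(Props: TSQLDBConnectionProperties);
var Row: variant;
begin
  // Execute() can map each returned row to a TSQLDBRowVariantType instance
  with Props.Execute('select * from Sales.Customer where AccountNumber like ?',
      ['AW000001%'],@Row) do
    while Step do
      writeln(Row.AccountNumber); // late-bound column access
end;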

Since Delphi XE2, some modifications were introduced to the official DispInvoke() RTL implementation:

  1. A new varUStrArg kind of parameter has been defined, which allows transmitting UnicodeString property values;
  2. All text property values are transmitted as BSTR / WideString / varOleStr variants to the invoked variant type;
  3. All textual property names are normalized to UPPERCASE.

Those modifications are worth considering...
And we may have discovered two regressions: one about speed, and the other about an unexpected logic bug...

The issues

The first modification does make sense, and was indeed a welcome fix for a Unicode version of Delphi. It should have been there since Delphi 2009.

Temporary conversion to WideString does make sense in the COM / OLE world, but is an awful performance bottleneck in the pure Delphi realm, i.e. when using late-binding with custom types of variants (as for all our custom variant types). This may be a noticeable speed penalty, in comparison to previous versions of the compiler.

Last but not least, the conversion to uppercase is a bug. For instance, the following code won't work as expected since Delphi XE2:

var V: variant;
 ...
  TDocVariant.New(V); // or slightly slower V := TDocVariant.New;
  V.name := 'John';
  V.year := 1972;
  // before Delphi XE2, V contains {"name":"John","year":1972} - as expected
  // since Delphi XE2,  V contains {"NAME":"John","YEAR":1972} - sounds like a bug, doesn't it?

This sounds indeed like an awful regression.

Fix included in the mORMot framework

Since revision 1.18 of the framework, the patch described in this previous blog article has been modified for Delphi XE2 and up, as such:

  • It will handle the varUStrArg kind of parameter as expected;
  • It will avoid any temporary conversion to WideString for textual values;
  • It will by-pass the property name change into uppercase.

As soon as you add SynCommons to any of your program's uses clause, our hooked DispInvoke() will take place, and identify any of our TSynInvokeableVariantType classes.

As a result, it will by-pass the performance bottleneck of the default RTL implementation, and also fix the uppercase conversion of the property name.

Of course, if this variant is not a TSynInvokeableVariantType instance (e.g. any Ole Automation call), the regular TInvokeableVariantType.DispInvoke() method as defined in Variants.pas will be executed, to maintain the best compatibility possible.

Feedback is welcome in our forum, as usual!

JavaScript support in mORMot via SpiderMonkey


As we already stated, we have finished the first step of integration of the SpiderMonkey engine into our mORMot framework.
Version 1.8.5 of the library is already integrated, and the latest official revision will be merged soon, thanks to mpv's great contribution.
It can be seen as stable, since it is already used on a production site to serve more than 1,000,000 requests per day.

You can now easily use JavaScript on both client and server side.
On the server side, mORMot's implementation offers a unique concept, i.e. true multi-threading, which is IMHO a huge enhancement when compared to the regular node.js mono-threaded implementation, and its callback hell.
In fact, official node.js marketing states that its non-blocking scheme is a plus. It allows defining a HTTP server in a few lines, but huge server applications need JavaScript experts not to sink into a state of disgrace.

Scripting abilities of mORMot

As a Delphi framework, mORMot's premium language support is for the Object Pascal language. But it can be convenient to have some part of your software not fixed within the executable. In fact, once the application is compiled, the execution flow is written in stone: you can't change it, unless you modify the Delphi source and compile it again. Since mORMot is Open Source, you can ship the whole source code to your customers or services with no restriction, and distribute your own code as pre-compiled .dcu files, but your end-user will need to have a Delphi IDE installed (and paid for), and know the Delphi language.

This is when scripting does come on the scene.
For instance, scripting may allow customizing an application behavior for an end-user (i.e. for reporting), or let a domain expert define evolving appropriate business rules - following Domain Driven Design.

If your business model is to publish a core domain expertise (e.g. accounting, peripheral driving, database model, domain objects, communication, AJAX clients...) among several clients, you will sooner or later need to adapt your application to one or several of your customers. There is no "one exe to rule them all". Maintaining several executables could become a "branch-hell". Scripting is welcome here: speed and memory critical functionality (in which mORMot excels) will be hard-coded within the main executable, then everything else could be defined in script.

There are plenty of script languages available.
We considered DelphiWebScript, which is well maintained and expressive (it is the core of our beloved SmartMobileStudio), but is not very commonly used. We still want to include it in the near future.
Then Lua defines a light and versatile general-purpose language, dedicated to be embedded in any application. It sounds like a viable solution: if you can help with it, your contribution is welcome!
We also took into consideration Python and Ruby, but both are now far from light, and are not meant to be embedded, since they are general-purpose languages, with a huge set of full-featured packages.

Then, there is JavaScript:

  • This is the World Wide Web assembler. Every programmer in one way or another knows JavaScript.
  • JavaScript can be a very powerful language - see Crockford's book "JavaScript - The Good Parts";
  • There are a huge number of libraries written in JavaScript: template engines (jade, mustache...), SOAP and LDAP clients, and many others (including all node.js libraries of course);
  • It was the base for some strongly-typed syntax extensions, like CoffeeScript, TypeScript, Dart;
  • In case of AJAX / Rich Internet Applications, we can directly share part of the logic between client and server (validation, template rendering...) without any middleware;
  • One long-time mORMot user (Pavel, aka mpv) already integrated SpiderMonkey into mORMot's core. His solution is used in production to serve millions of requests per day, with success. We officially integrated his units.
    Thanks a lot, Pavel!

As a consequence, mORMot introduced direct JavaScript support via SpiderMonkey.
It allows you to:

  • Execute Delphi code from JavaScript - including our ORM or SOA methods, or even reporting;
  • Consume JavaScript code from Delphi (e.g. to define and customize any service or rule, or use some existing .js library);
  • Expose JavaScript objects and functions via a TSMVariant custom variant type: it allows to access any JavaScript object properties or call any of its functions via late-binding, from your Delphi code, just as if it was written in native Object-Pascal;
  • Follow a classic synchronous blocking pattern, rooted on mORMot's multi-thread efficient model, easy to write and maintain;
  • Handle JavaScript or Delphi objects as UTF-8 JSON, ready to be published or consumed via mORMot's RESTful Client-Server remote access.

SpiderMonkey integration

A powerful JavaScript engine

SpiderMonkey, the Mozilla JavaScript engine, can be embedded in your mORMot application. It could be used on the client side, within a Delphi application (e.g. for reporting), but its main interest is on the server side.

The word JavaScript may bring to mind features such as event handlers (like onclick), DOM objects, window.open, and XMLHttpRequest.
But all of these features are actually not provided by the SpiderMonkey engine itself.

SpiderMonkey provides a few core JavaScript data types—numbers, strings, Arrays, Objects, and so on—and a few methods, such as Array.push. It also makes it easy for each application to expose some of its own objects and functions to JavaScript code. Browsers expose DOM objects. Your application will expose objects that are relevant for the kind of scripts you want to write. It is up to the application developer to decide what objects and methods are exposed to scripts.

Direct access to the SpiderMonkey API

The SynSMAPI.pas unit is a tuned conversion of the SpiderMonkey API, providing full ECMAScript 5 support and JIT.
You could take a look at the full description of this low-level API.

But the SynSM.pas unit will encapsulate most of it into higher level Delphi classes and structures (including a custom variant type), so you probably won't need to use SynSMAPI.pas directly in your code:

Type                        | Description
TSMEngineManager            | main access point to the SpiderMonkey per-thread scripting engines
TSMEngine                   | implements a thread-safe JavaScript engine instance
TSMObject                   | wraps a JavaScript object and its execution context
TSMValue                    | wraps a JavaScript value, and interfaces it with Delphi types
TSMVariant / TSMVariantData | define a custom variant type, for direct access to any JavaScript object, with late-binding

We will now see how to work with all those classes.

Execution scheme

The SpiderMonkey JavaScript engine compiles and executes scripts containing JavaScript statements and functions. The engine handles memory allocation for the objects needed to execute scripts, and it cleans up—garbage collects—objects it no longer needs.

In order to run any JavaScript code in SpiderMonkey, an application must have three key elements:

  1. A JSRuntime,
  2. A JSContext,
  3. And a global JSObject.

A JSRuntime, or runtime, is the space in which the JavaScript variables, objects, scripts, and contexts used by your application are allocated. Every JSContext and every object in an application lives within a JSRuntime. They cannot travel to other runtimes or be shared across runtimes.

A JSContext, or context, is like a little machine that can do many things involving JavaScript code and objects. It can compile and execute scripts, get and set object properties, call JavaScript functions, convert JavaScript data from one type to another, create objects, and so on.

Lastly, the global JSObject is a JavaScript object which contains all the classes, functions, and variables that are available for JavaScript code to use. Whenever web browser code does something like window.open("http://www.mozilla.org/"), it is accessing a global property, in this case window. SpiderMonkey applications have full control over what global properties scripts can see.

Every SpiderMonkey instance starts out its execution context by creating a JSRuntime, a JSContext instance, and a global JSObject. It populates this global object with the standard JavaScript classes, like Array and Object. Then application initialization code will add whatever custom classes, functions, and variables (like window) the application wants to provide; for a mORMot server application, it may be ORM access or SOA services consumption and/or implementation.

Each time the application runs a JavaScript script (using, for example, JS_EvaluateScript), it provides the global object for that script to use. As the script runs, it can create global functions and variables of its own. All of these functions, classes, and variables are stored as properties of the global object.

Creating your execution context

The main point about those three key elements is that, in the current implementation pattern of SpiderMonkey, runtime, context or global objects are not thread-safe.

Therefore, in the mORMot's use of this library, each thread will have its own instance of each.

In the SynSM.pas unit, a TSMEngine class has been defined to give access to all those linked elements:

  TSMEngine = class
  ...
    /// access to the associated global object as a TSMVariant custom variant
    // - allows direct property and method executions in Delphi code, via late-binding
    property Global: variant read FGlobal;
    /// access to the associated global object as a TSMObject wrapper
    // - you can use it to register a method
    property GlobalObject: TSMObject read FGlobalObject;
    /// access to the associated global object as low-level PJSObject
    property GlobalObj: PJSObject read FGlobalObject.fobj;
    /// access to the associated execution context
    property cx: PJSContext read fCx;
    /// access to the associated execution runtime
    property rt: PJSRuntime read frt;
  ...

Our implementation will define one Runtime, one Context, and one global object per thread, i.e. one TSMEngine class instance per thread.

A JSRuntime, or runtime, is created for each TSMEngine instance. In practice, you won't need access to this value, but rely either on a JSContext or directly a TSMEngine.

A JSContext, or context, will be the main entry point of all SpiderMonkey API, which expect this context to be supplied as parameter. In mORMot, you can retrieve the running TSMEngine from its context by using the function TSMObject.Engine: TSMEngine - in fact, the engine instance is stored in the private data slot of each JSContext.

Lastly, the TSMEngine's global object contains all the classes, functions, and variables that are available for JavaScript code to use. For a mORMot server application, ORM access or SOA services consumption and/or implementation, as stated above.

You can note that there are several ways to access this global object instance, from high-level to low-level JavaScript object types. The TSMEngine.Global property above is in fact a variant: our SynSM.pas unit defines a custom variant type, identified as the TSMVariant class, able to access any JavaScript object via late-binding, for both variables and functions:

  engine.Global.MyVariable := 1.0594631;
  engine.Global.MyFunction(1,'text');

Most web applications only need one runtime, since they are running in a single thread - and (ab)use callbacks for non-blocking execution. But in mORMot, you will have one TSMEngine instance per thread, using the TSMEngineManager.ThreadSafeEngine method. Then all execution may be blocking, without any noticeable performance issue, since the whole mORMot threading design was defined to maximize execution resources.

Blocking threading model

This threading model is the big difference with other server-side scripting implementation schemes, e.g. the well-known node.js solution.

Multi-threading is not evil, when properly used. And thanks to the mORMot's design, you won't be afraid of writing blocking JavaScript code, without any callbacks. In practice, those callbacks are what makes most JavaScript code difficult to maintain.

On the client side, i.e. in a web browser, the JavaScript engine only uses one thread per web page, then uses callbacks to defer execution of long-running methods (like a remote HTTP request).
If fact, this is one well identified performance issue of modern AJAX applications. For instance, it is not possible to perform some intensive calculation in JavaScript, without breaking the web application responsiveness: you have to split your computation task in small tasks, then let the JavaScript code pause, until a next piece of computation could be triggerred... Some browsers did only start to uncouple the JavaScript execution thread with the HTML rendering thread - and even this is hard to do... we reached here the limit of a technology rooted in the 80's...

On the server side, node.js did follow this pattern, which did make sense (it allows sharing code with the client side, with some name-space tricks), but it is also IMHO a big waste of resources. Why should we stick to an implementation pattern inherited from the 80's computing model, when all CPUs were mono-core, and threads were not available?

The main problem when working with one single thread is that your code shall be asynchronous. Sooner or later, you will face a syndrome known as "Callback Hell". In short, you are nesting anonymous functions, and defining callbacks. The main issue, in addition to lower readability and being potentially sunk into function() nesting, is that you just lost the JavaScript exception model. In fact, each callback function has to explicitly check for the error (returned as a parameter in the callback function), and handle it.

Of course, you can use so-called Promises and some nice libraries - mainly async.js.
But even those libraries add complexity, and make code more difficult to write. For instance, consider the following non-blocking/asynchronous code:

getTweetsFor("domenic") // promise-returning function
  .then(function (tweets) {
    var shortUrls = parseTweetsForUrls(tweets);
    var mostRecentShortUrl = shortUrls[0];
    return expandUrlUsingTwitterApi(mostRecentShortUrl); // promise-returning function
  })
  .then(httpGet) // promise-returning function
  .then(
    function (responseBody) {
      console.log("Most recent link text:", responseBody);
    },
    function (error) {
      console.error("Error with the twitterverse:", error);
    }
  );

Taken from this web site.

This kind of code will be perfectly readable for a JavaScript daily user, or someone fluent with functional languages.

But the following blocking/synchronous code may sound much more familiar, safer and less verbose, to most Delphi / Java / C# programmers:

try {
  var tweets = getTweetsFor("domenic"); // blocking
  var shortUrls = parseTweetsForUrls(tweets);
  var mostRecentShortUrl = shortUrls[0];
  var responseBody = httpGet(expandUrlUsingTwitterApi(mostRecentShortUrl)); // blocking x 2
  console.log("Most recent link text:", responseBody);
} catch (error) {
  console.error("Error with the twitterverse: ", error);
}

Thanks to the blocking pattern, it becomes obvious that code readability and maintainability is as high as possible, and error detection is handled nicely via JavaScript exceptions, and a global try .. catch.

Last but not least, debugging blocking code is easy and straightforward, since the execution will be linear, following the code flow.

Upcoming ECMAScript 6 should go even further thanks to the yield keyword and some task generators - see taskjs - so that asynchronous code may become closer to the synchronous pattern. But even with yield, your code won't be as clean as with plain blocking style.

In mORMot, we did choose to follow an alternate path, i.e. write blocking synchronous code. The sample above shows how much easier it is to work with. If you use it to define some huge business logic, or let a domain expert write the code, blocking syntax is much more straightforward.

Of course, mORMot allows you to use callbacks and functional programming pattern in your JavaScript code, if needed. But by default, you are allowed to write KISS blocking code.

Interaction with existing code

Within mORMot units, you can mix Delphi and JavaScript code in two ways:

  • Either define your own functions in Delphi code, and execute them from JavaScript
  • Or define your own functions in JavaScript code (including any third-party library), and execute them from Delphi.

As for other parts of our framework, performance and integration have been tuned, to follow our KISS way.

You can take a look at "22 - JavaScript HTTPApi web server\JSHttpApiServer.dpr" sample for reference code.

Proper engine initialization

As previously stated, the main point when interfacing the JavaScript engine is to register all methods at the time the TSMEngine instance is initialized.

For this, you set the corresponding OnNewEngine callback event to the main TSMEngineManager instance.
See for instance, in the sample code:

constructor TTestServer.Create(const Path: TFileName);
begin
  ...
  fSMManager := TSMEngineManager.Create;
  fSMManager.OnNewEngine := DoOnNewEngine;
  ...

In DoOnNewEngine, you will initialize every newly created TSMEngine instance, to register all needed Delphi methods and prepare access to JavaScript via the runtime's global JSObject.

Then each time you want to access the JavaScript engine, you will write for instance:

function TTestServer.Process(Ctxt: THttpServerRequest): cardinal;
var engine: TSMEngine;
...
   engine := fSMManager.ThreadSafeEngine;
...  // now you can use engine, e.g. engine.Global.someMethod()

Each thread of the HTTP server thread-pool will be initialized on the fly if needed, or the previously initialized instance will be quickly returned otherwise.

Once you have the TSMEngine instance corresponding to the current thread, you can launch actions on its global object, or tune its execution.
For instance, it could be a good idea to check for the JavaScript VM's garbage collection:

function TTestServer.Process(Ctxt: THttpServerRequest): cardinal;
...
   engine := fSMManager.ThreadSafeEngine;
   engine.MaybeGarbageCollect; // perform garbage collection if needed
...

We will now find out how to interact between JavaScript and Delphi code.

Calling Delphi code from JavaScript

In order to call some Delphi method from JavaScript, you will have to register the method.
As just stated, it is done by setting a callback within TSMEngineManager.OnNewEngine initialization code. For instance:

procedure TTestServer.DoOnNewEngine(const Engine: TSMEngine);
...
  // add native function to the engine
  Engine.RegisterMethod(Engine.GlobalObj,'loadFile',LoadFile,1);
end;

Here, the local LoadFile() method is implemented as such in native code:

function TTestServer.LoadFile(const This: variant; const Args: array of variant): variant;
begin
  if length(Args)<>1 then
    raise Exception.Create('Invalid number of args for loadFile(): required 1 (file path)');
  result := AnyTextFileToSynUnicode(Args[0]);
end;

As you can see, this is perfectly easy to follow.
Its purpose is to load a file content from JavaScript, by defining a new global function named loadFile().
Remember that the SpiderMonkey engine, by itself, does not know anything about the file system, the database, or even the DOM. Only basic objects were registered, like arrays. We have to explicitly register the functions needed by the JavaScript code.

In the above code snippet, we used the TSMEngineMethodEventVariant callback signature, marshaling variant values as parameters. This is the easiest method, with only a slight performance impact.

Such methods have the following features:

  • Arguments will be transmitted from JavaScript values as simple Delphi types (for numbers or text), or as our custom TSMVariant type for JavaScript objects, which allows late-binding;
  • The This: variant first parameter maps the "callee" JavaScript object as a TSMVariant custom instance, so that you are able to access the other object's methods or properties directly via late-binding;
  • You can benefit from the JavaScript feature of a variable number of arguments when calling a function, since the input arguments are transmitted as a dynamic array of variant - see the sketch after this list;
  • All those registered methods are stored in a list maintained in the TSMEngine instance, so it can be pretty convenient to work with, in some cases;
  • You can still access the low-level JSObject values of any of the arguments, if needed, since they can be trans-typed to a TSMVariantData instance (see below) - so you do not lose any information;
  • The Delphi native method will be protected by the mORMot wrapper, so that any exception raised within the process will be caught and transmitted as a JavaScript exception to the runtime;
  • There is also a hidden setting of the FPU exception mask during execution of native code (more on it later on) - you should not bother about it here.
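
For instance, here is a minimal sketch of such a method, taking advantage of the variable number of arguments. ConcatAll() is a hypothetical method written for this illustration, following the same signature as LoadFile() above, and registered the same way via Engine.RegisterMethod():

function TTestServer.ConcatAll(const This: variant; const Args: array of variant): variant;
var i: integer;
    txt: RawUTF8;
begin
  txt := '';
  for i := 0 to high(Args) do      // JavaScript may supply any number of arguments
    txt := txt+VariantToUTF8(Args[i]); // convert each transmitted value to UTF-8 text
  RawUTF8ToVariant(txt,result);    // return the concatenated text to JavaScript
end;

From JavaScript, a call like concatAll('a',1,2) would then return the concatenated text of all supplied values.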

Now consider how you would have had to write the same loadFile() function via low-level API calls.

First, we register the callback:

procedure TTestServer.DoOnNewEngine(const Engine: TSMEngine);
...
  // add native function to the engine
 Engine.GlobalObject.DefineNativeMethod('loadFile', nsm_loadFile, 1);
end;

Then its implementation:

function nsm_loadFile(cx: PJSContext; argc: uintN; vp: Pjsval): JSBool; cdecl;
var in_argv: PjsvalVector;
    filePath: TFileName;
begin
  TSynFPUException.ForDelphiCode;
  try
    if argc<>1 then
      raise Exception.Create('Invalid number of args for loadFile(): required 1 (file path)');
    in_argv := JS_ARGV(cx,vp);
    filePath := JSVAL_TO_STRING(in_argv[0]).ToString(cx);
    JS_SET_RVAL(cx, vp, cx^.NewJSString(AnyTextFileToSynUnicode(filePath)).ToJSVal);
    Result := JS_TRUE;
  except
    on E: Exception do begin // all exceptions MUST be caught on the Delphi side
      JS_SET_RVAL(cx, vp, JSVAL_VOID);
      JSError(cx, E);
      Result := JS_FALSE;
    end;
  end;
end;

As you can see, this nsm_loadFile() function is much more difficult to follow:

  • Your code shall begin with a cryptic TSynFPUException.ForDelphiCode instruction, to protect the FPU exception mask during execution of native code (the Delphi RTL expects its own FPU exception mask during execution, which does not match the mask expected by SpiderMonkey);
  • You have to explicitly catch any Delphi exception which may be raised, with a try...except block, and marshal it back as a JavaScript error;
  • You need to do a lot of manual low-level conversions - via JS_ARGV() then e.g. JSVAL_TO_STRING() macros - to retrieve the actual values of the arguments;
  • And the return value is to be marshaled by hand - see the JS_SET_RVAL() line.

Since the variant-based callback has only a slight performance impact (nothing measurable, when compared to the SpiderMonkey engine performance itself), and still gives access to all the transmitted information, we strongly encourage you to use this safer and cleaner pattern, rather than defining native functions via the low-level API.

Note that there is an alternate JSON-based callback, which is not to be used in your end-user code, but will be used when marshaling to JSON is needed, e.g. when working with mORMot's ORM or SOA features.

TSMVariant custom type

As stated above, the SynSM.pas unit defines a TSMVariant custom variant type. It will be used by the unit to marshal any JSObject instance as variant.

Via the magic of late-binding, it allows access to any JavaScript object property, or execution of any of its functions, with only a slight performance penalty, but with much better code readability than with low-level access of the SpiderMonkey API.

The TSMVariantData memory structure can be used to map such a TSMVariant variant instance. In fact, the custom variant type will store not only the JSObject value, but also its execution context - i.e. JSContext - so it is pretty convenient to work with.

For instance, you may be able to write code as such:

function TMyClass.MyFunction(const This: variant; const Args: array of variant): variant;
var global: variant;
begin
  TSMVariantData(This).GetGlobal(global);
  global.anotherFunction(Args[0],Args[1],'test');
  // same as:
  global := TSMVariantData(This).SMObject.Engine.Global;
  global.anotherFunction(Args[0],Args[1],'test');
  // but you may also write directly:
  with TSMVariantData(This).SMObject.Engine do
    Global.anotherFunction(Args[0],Args[1],'test');
  result := AnyTextFileToSynUnicode(Args[0]);
end;

Here, the This custom variant instance is trans-typed via TSMVariantData(This) to access its internal properties.

Calling JavaScript code from Delphi

In order to execute some JavaScript code from Delphi, you should first define the JavaScript functions to be executed.
This shall take place within TSMEngineManager.OnNewEngine initialization code:

procedure TTestServer.DoOnNewEngine(const Engine: TSMEngine);
var showDownRunner: SynUnicode;
begin
  // add external JavaScript library to engine (port of the Markdown library)
  Engine.Evaluate(fShowDownLib, 'showdown.js');
  // add the bootstrap function calling loadfile() then showdown's makeHtml()
  showDownRunner := AnyTextFileToSynUnicode(ExeVersion.ProgramFilePath+'showDownRunner.js');
  Engine.Evaluate(showDownRunner, 'showDownRunner.js');
  ...

This code first evaluates (i.e. "executes") a general-purpose JavaScript library contained in the showdown.js file, available in the sample executable folder. This is an open source library able to convert any Markdown markup into HTML. Plain standard JavaScript code.

Then we evaluate (i.e. "execute") a small piece of JavaScript code, to link the makeHtml() function of the just defined library with our loadFile() native function:

function showDownRunner(pathToFile){
  var src = loadFile(pathToFile);           // call Delphi native code
  var converter = new Showdown.converter(); // get the Showdown converter
  return converter.makeHtml(src);           // convert .md content into HTML via showdown.js
}

Now we have a new global function showDownRunner(pathToFile) at hand, ready to be executed by our Delphi code:

function TTestServer.Process(Ctxt: THttpServerRequest): cardinal;
var content: variant;
    FileName, FileExt: TFileName;
    engine: TSMEngine;
  ...
  if FileExt='.md' then begin
  ...
    engine := fSMManager.ThreadSafeEngine;
  ...
    content := engine.Global.showDownRunner(FileName);
  ...

As you can see, we access the function via late-binding. The above code is perfectly readable, and we call here a JavaScript function and a whole library as naturally as if it were native code.

Without late-binding, we might have written the following, accessing not the Global TSMVariant instance, but the lower-level GlobalObject: TSMObject property:

  ...
    content := engine.GlobalObject.Run('showDownRunner',[SynUnicode(FileName)]);
  ...

It is up to you to choose which kind of code you prefer, but late-binding is worth considering.

The next step on our side is to allow direct access to mORMot's ORM and SOA features, including interface-based services.
Feedback is welcome on our forum, as usual.

Introducing mORMot's architecture and design principles


We have just released a set of slides introducing 

  • ORM, SOA, REST, JSON, MVC, MVVM, SOLID, Mocks/Stubs, Domain-Driven Design concepts with Delphi, 
  • and showing some sample code using our Open Source mORMot framework.

You can follow the public link on Google Drive!

This is a great opportunity to discover some patterns you may not be familiar with, and find out how mORMot tries to implement them.
This set of slides may be less intimidating than our huge documentation - do not be terrified by our 1400-page Software Architecture Design pdf!

Feedback is welcome on our forum, as usual.


Mustache Logic-less templates for Delphi - part 1


Mustache is a well-known logic-less template engine.
There are plenty of Open Source implementations around (including in JavaScript, which can be very convenient for AJAX applications on the client side, for instance).
For mORMot, we created the first pure Delphi implementation of it, with a perfect integration with other bricks of the framework.

In this first part of this series of blog articles, we will introduce the Mustache design.
You can download this documentation as one single pdf file.

Generally speaking, a Template system can be used to separate output formatting specifications, which govern the appearance and location of output text and data elements, from the executable logic which prepares the data and makes decisions about what appears in the output.

Most template systems (e.g. PHP, Smarty, Razor...) in fact feature a full scripting engine within the template content.
This allows powerful constructs like variable assignment or conditional statements in the middle of the HTML content. It makes it easy to modify the look of an application within the template system exclusively, without having to modify any of the underlying "application logic". They do so, however, at the cost of separation, turning the templates themselves into part of the application logic.

Mustache inherits from Google's ctemplate library, and is used in many famous applications, including the "main" Google web search, and the Twitter web site.
The Mustache template system leans strongly towards preserving the separation of logic and presentation, therefore ensuring a clean MVC design, ready to consume SOA services.

Mustache is intentionally constrained in the features it supports and, as a result, applications tend to require quite a bit of code to instantiate a template: all the application logic will be defined within the Controller code, not in the View source.
This may not be to everybody's taste. However, while this design limits the power of the template language, it does not limit the power or flexibility of the template system, which supports arbitrarily complex text formatting.

Finally, Mustache is designed with an eye towards efficiency. Template instantiation is very quick, minimizing both memory use and memory fragmentation. As a result, it sounds like a perfect template system for our mORMot framework.

Mustache principles

There are two main parts to the Mustache template system:

  1. Templates (which are plain text files);
  2. Data dictionaries (aka Context).

For instance, given the following template:

<h1>{{header}}</h1>

{{#items}}
  {{#first}}
    <li><strong>{{name}}</strong></li>
  {{/first}}
  {{#link}}
    <li><a href="{{url}}">{{name}}</a></li>
  {{/link}}
{{/items}}
{{#empty}}
  <p>The list is empty.</p>
{{/empty}}

and the following data context:

{
  "header": "Colors",
  "items": [
      {"name": "red", "first": true, "url": "#Red"},
      {"name": "green", "link": true, "url": "#Green"},
      {"name": "blue", "link": true, "url": "#Blue"}
  ],
  "empty": true
}

The Mustache engine will render this data as such:

<h1>Colors</h1>
<li><strong>red</strong></li>
<li><a href="#Green">green</a></li>
<li><a href="#Blue">blue</a></li>
<p>The list is empty.</p>

In fact, you did not see any "if" nor "for" loop in the template, but Mustache conventions make it easy to render the supplied data as the expected HTML output. It is up to the MVC Controller to render the data as expected by the template, e.g. for formatting dates or currency values.
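
For instance, using the SynMustache unit which will be detailed in the third article of this series, the Controller may pre-format a date value before supplying the context - a minimal sketch, assuming the DateTimeToIso8601Text() helper of SynCommons, with mustache: TSynMustache and html: RawUTF8 variables:

  mustache := TSynMustache.Parse('Generated on {{date}}');
  html := mustache.Render(_ObjFast(['date',DateTimeToIso8601Text(Now)]));
  // the template stays logic-less: all formatting is done by the Controller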

Next article will detail the Mustache syntax itself.
Stay tuned!

Mustache Logic-less templates for Delphi - part 2


Mustache is a well-known logic-less template engine.
There are plenty of Open Source implementations around (including in JavaScript, which can be very convenient for AJAX applications on the client side, for instance).
For mORMot, we created the first pure Delphi implementation of it, with a perfect integration with other bricks of the framework.

In this second part of this series of blog articles, we will introduce the Mustache syntax.
You can download this documentation as one single pdf file.

The Mustache template logic-less language has five types of tags:

  1. Variables;
  2. Sections;
  3. Inverted Sections;
  4. Comments;
  5. Partials.

All those tags will be identified with mustaches, i.e. {{...}}.
Anything found in a template of this form is interpreted as a template marker.
All other text is considered formatting text and is output verbatim at template expansion time.

{{variable}}
    The variable name will be searched recursively within the current context (possibly with dotted names) and, if found, will be written as escaped HTML.
    If there is no such key, nothing will be rendered.

{{{variable}}} or {{& variable}}
    The variable name will be searched recursively within the current context and, if found, will be written directly, without any HTML escape.
    If there is no such key, nothing will be rendered.

{{#section}} ... {{/section}}
    Defines a block of text, aka section, which will be rendered depending on the section variable value, as searched in the current context:
    - if section equals false or is an empty list [], the whole block won't be rendered;
    - if section is non-false but not a list, it will be used as the context for a single rendering of the block;
    - if section is a non-empty list, the text in the block will be rendered once for each item in the list - the context of the block will be set to the current item for each iteration.

{{^section}} ... {{/section}}
    Defines a block of text, aka inverted section, which will be rendered depending on the inverted section variable value, as searched in the current context:
    - if section equals false or is an empty list, the whole block will be rendered;
    - if section is non-false or a non-empty list, it won't be rendered.

{{! comment}}
    The comment text will just be ignored.

{{>partial}}
    The partial name will be searched within the registered partials list, then executed at run-time (so recursive partials are possible), with the current execution context.

{{=...=}}
    The delimiters (i.e. {{ }} by default) will be replaced by the specified characters (which may be convenient when double-braces appear in the text).

In addition to those standard markers, the mORMot implementation of Mustache features:

{{.}}
    This pseudo-variable refers to the context object itself, instead of one of its members. This is particularly useful when iterating over lists.

{{-index}}
    This pseudo-variable returns the current item number when iterating over lists, starting counting at 1.

{{#-first}} ... {{/-first}}
    Defines a block of text (pseudo-section) which will be rendered - or not rendered for the inverted {{^-first}} - for the first item when iterating over lists.

{{#-last}} ... {{/-last}}
    Defines a block of text (pseudo-section) which will be rendered - or not rendered for the inverted {{^-last}} - for the last item when iterating over lists.

{{#-odd}} ... {{/-odd}}
    Defines a block of text (pseudo-section) which will be rendered - or not rendered for the inverted {{^-odd}} - for odd item numbers when iterating over lists: it can be very useful e.g. to display a list with alternating row colors.

{{<partial}} ... {{/partial}}
    Defines an in-lined partial - to be called later via {{>partial}} - within the scope of the current template.

{{"some text}}
    This pseudo-variable will supply the given text to a callback, which may for instance transform its content (e.g. translate it), before writing it to the output.

This set of markers allows you to easily write any kind of content, without any explicit logic nor nested code.
As a major benefit, the template content can be edited and verified without the need for any Mustache compiler, since all those {{...}} markers identify very clearly the resulting layout.

Variables

A typical Mustache template:

Hello {{name}}
You have just won {{value}} dollars!
Well, {{taxed_value}} dollars, after taxes.

Given the following hash:

{
  "name": "Chris",
  "value": 10000,
  "taxed_value": 6000
}

Will produce the following:

Hello Chris
You have just won 10000 dollars!
Well, 6000 dollars, after taxes.

You can note that variable tags are escaped for HTML by default. This is a mandatory security feature. In fact, all web applications which create HTML documents can be vulnerable to Cross-Site-Scripting (XSS) attacks unless data inserted into a template is appropriately sanitized and/or escaped. With Mustache, this is done by default. Of course, you can override it and force the value not to be escaped, using {{{variable}}} or {{& variable}}.

For instance:

Template:

* {{name}}
* {{age}}
* {{company}}
* {{{company}}}

Context:

{
  "name": "Chris",
  "company": "<b>GitHub</b>"
}

Output:

* Chris
*
* &lt;b&gt;GitHub&lt;/b&gt;
* <b>GitHub</b>

Variables resolve names within the current context with an optional dotted syntax, for instance:

Template:

* {{people.name}}
* {{people.age}}
* {{people.company}}
* {{{people.company}}}

Context:

{
  "people": {
    "name": "Chris",
    "company": "<b>GitHub</b>"
  }
}

Output:

* Chris
*
* &lt;b&gt;GitHub&lt;/b&gt;
* <b>GitHub</b>

Sections

Sections render blocks of text one or more times, depending on the value of the key in the current context.

In our "wining template" above, what happen if we do want to hide the tax details?
In most script languages, we may write an if ... block within the template. This is what Mustache avoids. So we define a section, which will be rendered on need.

The template becomes:

Hello {{name}}
You have just won {{value}} dollars!
{{#in_ca}}
Well, {{taxed_value}} dollars, after taxes.
{{/in_ca}}

Here, we created a new section, named in_ca.

Given the hash value of in_ca (and its presence), the section will be rendered, or not:

Context:

{
  "name": "Chris",
  "value": 10000,
  "taxed_value": 6000,
  "in_ca": true
}

Output:

Hello Chris
You have just won 10000 dollars!
Well, 6000 dollars, after taxes.

Context:

{
  "name": "Chris",
  "value": 10000,
  "taxed_value": 6000,
  "in_ca": false
}

Output:

Hello Chris
You have just won 10000 dollars!

Context:

{
  "name": "Chris",
  "value": 10000,
  "taxed_value": 6000
}

Output:

Hello Chris
You have just won 10000 dollars!

Sections also change the context of their inner block: the section variable content becomes the top-most context used to resolve any supplied variable key.

Therefore, the following context will be perfectly valid: we can define taxed_value as a member of in_ca, and it will be rendered directly, since it is part of the new context.

Context:

{
  "name": "Chris",
  "value": 10000,
  "in_ca": {
    "taxed_value": 6000
  }
}

Output:

Hello Chris
You have just won 10000 dollars!
Well, 6000 dollars, after taxes.

Context:

{
  "name": "Chris",
  "value": 10000,
  "taxed_value": 6000
}

Output:

Hello Chris
You have just won 10000 dollars!

Context:

{
  "name": "Chris",
  "value": 10000,
  "taxed_value": 3000,
  "in_ca": {
    "taxed_value": 6000
  }
}

Output:

Hello Chris
You have just won 10000 dollars!
Well, 6000 dollars, after taxes.

In the last context above, there are two taxed_value variables.
The engine will use the one defined by the context in the in_ca section, i.e. in_ca.taxed_value; the one defined at the root context level (which equals 3000) is just ignored.

If the variable pointed by the section name is a list, the text in the block will be rendered once for each item in the list.
The context of the block will be set to the current item for each iteration.

In this way, we can loop over collections.
Mustache allows any depth of nested loops (e.g. any level of master/detail information).
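
As a quick illustration of such nesting, a master/detail context may be rendered as follows with the SynMustache unit detailed in the next article (a minimal sketch; mustache and html are assumed to be TSynMustache and RawUTF8 variables):

  mustache := TSynMustache.Parse(
    '{{#orders}}{{id}}:{{#lines}}{{qty}} {{/lines}}{{/orders}}');
  html := mustache.RenderJSON(
    '{orders:[{id:1,lines:[{qty:2},{qty:3}]},{id:2,lines:[{qty:5}]}]}');
  // now html='1:2 3 2:5 ' - the inner section is rendered once per order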

Template:

{{#repo}}
<b>{{name}}</b>
{{/repo}}

Context:

{
  "repo": [
    {"name": "resque"},
    {"name": "hub"},
    {"name": "rip"}
  ]
}

Output:

<b>resque</b>
<b>hub</b>
<b>rip</b>

Template:

{{#repo}}
<b>{{.}}</b>
{{/repo}}

Context:

{
  "repo": ["resque", "hub", "rip"]
}

Output:

<b>resque</b>
<b>hub</b>
<b>rip</b>

The latter template makes use of the {{.}} pseudo-variable, which allows rendering the current item of the list.

Inverted Sections

An inverted section begins with a caret (^) and ends like a standard (non-inverted) section.
It renders text once, based on the inverse value of the key: the text block will be rendered if the key doesn't exist, is false, or is an empty list.

Inverted sections are usually defined after a standard section, to render some message in case no information will be written in the non-inverted section:

Template:

{{#repo}}
<b>{{.}}</b>
{{/repo}}
{{^repo}}
No repos :(
{{/repo}}

Context:

{
  "repo": []
}

Output:

No repos :(

Partials

Partials are a kind of external sub-template which can be included within a main template, for instance to reuse the same rendering in several places.
Just like functions in code, they ease template maintainability and spare development time.

Partials are rendered at runtime (as opposed to compile time), so recursive partials are possible. Just avoid infinite loops.
They also inherit the calling context, so can easily be re-used within a list section, or together with plain variables.

In practice, partials shall be supplied together with the data context - they could be seen as "template context".

For example, this "main" template uses a > user partial:

<h2>Names</h2>
{{#names}}
{{> user}}
{{/names}}

With the following template registered as "user":

<strong>{{name}}</strong>

Can be thought of as a single, expanded template:

<h2>Names</h2>
{{#names}}
<strong>{{name}}</strong>
{{/names}}

In mORMot's implementation, you can also create internal partials, defined as {{<partial}} ... {{/partial}} pseudo-sections.
This may decrease the need to maintain multiple template files, and refine the rendering layout.

For instance, the previous template may be defined at once:

<h2>Names</h2>
{{#names}}
{{>user}}
{{/names}}
{{<user}}
<strong>{{name}}</strong>
{{/user}}

The same file will define both the partial and the main template. 

Note that we defined the internal partial after the main template, but we may have defined it anywhere in the main template logic: internal partial definitions are ignored when rendering the main template, just like comments.

Next article will detail the Mustache engine as implemented in mORMot's source code tree.
Now, a bit of practice!

Mustache Logic-less templates for Delphi - part 3


Mustache is a well-known logic-less template engine.
There are plenty of Open Source implementations around (including in JavaScript, which can be very convenient for AJAX applications on the client side, for instance).
For mORMot, we created the first pure Delphi implementation of it, with a perfect integration with other bricks of the framework.

In this last part of this series of blog articles, we will introduce the Mustache library included within the mORMot source code tree.
You can download this documentation as one single pdf file.

As part of our mORMot framework, we implemented an optimized Mustache template engine in the SynMustache unit:

  • It is the first Delphi implementation of Mustache;
  • It has a separate parser and renderer (so you can compile your templates ahead of time);
  • The parser features a shared cache of compiled templates;
  • It passes all official Mustache specification tests - including all the weird whitespace processing;
  • External partials can be supplied as TSynMustachePartials dictionaries;
  • {{.}}, {{-index}} and {{"some text}} pseudo-variables were added to the standard Mustache syntax;
  • {{#-first}}, {{#-last}} and {{#-odd}} pseudo-sections were added to the standard Mustache syntax;
  • Internal partials can be defined via {{<partial}} - also a nice addition to the standard Mustache syntax;
  • It allows the data context to be supplied as JSON or our TDocVariant custom type;
  • Almost no memory allocation is performed during the rendering;
  • It is natively UTF-8, from the ground up, with optimized conversion of any string data;
  • Performance has been tuned and grounded in SynCommons's optimized code;
  • Each parsed template is thread-safe and re-entrant;
  • It follows the Open/Close principle so that any aspect of the process can be customized and extended (e.g. for any kind of data context);
  • It is perfectly integrated with the other bricks of our mORMot framework, ready to implement dynamic web sites with true MVC design, and full separation of concerns: the views are written in Mustache, the controllers being e.g. interface-based services;
  • API is flexible and easy to use.

Variables

Now, let's see some code.

First, we define our needed variables:

var mustache: TSynMustache;
    doc: variant;

In order to parse a template, you just need to call:

  mustache := TSynMustache.Parse(
    'Hello {{name}}'#13#10'You have just won {{value}} dollars!');

It will return a compiled instance of the template.
The Parse() class method will use the shared cache, so you won't need to release the mustache instance once you are done with it: no need to write a try ... finally mustache.Free; end block.

You can use a TDocVariant to supply the context data (with late-binding):

  TDocVariant.New(doc);
  doc.name := 'Chris';
  doc.value := 10000;

As an alternative, you may have defined the context data as such:

  doc := _ObjFast(['name','Chris','value',10000]);

Now you can render the template with this context:

  html := mustache.Render(doc);
  // now html='Hello Chris'#13#10'You have just won 10000 dollars!'

If you want to supply the context data as JSON, then render it, you may write:

  mustache := TSynMustache.Parse(
    'Hello {{value.name}}'#13#10'You have just won {{value.value}} dollars!');
  html := mustache.RenderJSON('{value:{name:"Chris",value:10000}}');
  // now html='Hello Chris'#13#10'You have just won 10000 dollars!'

Note that here, the JSON is supplied with an extended syntax (i.e. field names are unquoted), and that TSynMustache is able to identify a dotted-named variable within the execution context.

As an alternative, you could use the following syntax to create the data context as JSON with a set of parameters, which is easier to work with in real code storing data in variables (for instance, any string variable is quoted as expected by JSON, and converted into UTF-8):

  mustache := TSynMustache.Parse(
    'Hello {{name}}'#13#10'You have just won {{value}} dollars!');
  html := mustache.RenderJSON('{name:?,value:?}',[],['Chris',10000]);
  // now html='Hello Chris'#13#10'You have just won 10000 dollars!'

You can find in the mORMot.pas unit the ObjectToJSON() function which is able to transform any TPersistent instance into valid JSON content, ready to be supplied to a TSynMustache compiled instance.
If the object's published properties have some getter functions, they will be called on the fly to process the data (e.g. returning 'FirstName Name' as FullName by concatenating both sub-fields).
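
For instance, here is a minimal sketch with a hypothetical TPersistent class, whose FullName getter concatenates both published fields on the fly, as described above:

type
  TPerson = class(TPersistent)
  private
    fFirstName, fName: string;
    function GetFullName: string;
  published
    property FirstName: string read fFirstName write fFirstName;
    property Name: string read fName write fName;
    property FullName: string read GetFullName;
  end;

function TPerson.GetFullName: string;
begin // computed during ObjectToJSON() serialization
  result := fFirstName+' '+fName;
end;

  P := TPerson.Create;
  try
    P.FirstName := 'John';
    P.Name := 'Smith';
    mustache := TSynMustache.Parse('Hello {{FullName}}');
    html := mustache.RenderJSON(ObjectToJSON(P));
    // should render as 'Hello John Smith'
  finally
    P.Free;
  end;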

Sections

Sections are handled as expected:

  mustache := TSynMustache.Parse('Shown.{{#person}}As {{name}}!{{/person}}end{{name}}');
  html := mustache.RenderJSON('{person:{age:?,name:?}}',[10,'toto']);
  // now html='Shown.As toto!end'

Note that the sections change the data context, so that within the #person section, you can directly access the person member of the data context, i.e. write {{name}} directly.

Inverted sections are also supported:

  mustache := TSynMustache.Parse('Shown.{{^person}}Never shown!{{/person}}end');
  html := mustache.RenderJSON('{person:true}');
  // now html='Shown.end'

To render a list of items, you can write for instance (using the . pseudo-variable):

  mustache := TSynMustache.Parse('{{#things}}{{.}}{{/things}}');
  html := mustache.RenderJSON('{things:["one", "two", "three"]}');
  // now html='onetwothree'

The -index pseudo-variable allows numbering the list items when rendering:

  mustache := TSynMustache.Parse(
    'My favorite things:'#$A'{{#things}}{{-index}}. {{.}}'#$A'{{/things}}');
  html := mustache.RenderJSON('{things:["Peanut butter", "Pen spinning", "Handstands"]}');
  // now html='My favorite things:'#$A'1. Peanut butter'#$A'2. Pen spinning'#$A'3. Handstands'#$A

Partials

External partials (i.e. standard Mustache partials) can be defined using TSynMustachePartials. You can define and maintain a list of TSynMustachePartials instances, or you can use a one-time partial, for a given rendering process, as such:

  mustache := TSynMustache.Parse('{{>partial}}'#$A'3');
  html := mustache.RenderJSON('{}',TSynMustachePartials.CreateOwned(['partial','1'#$A'2']));
  // now html='1'#$A'23'

Here TSynMustachePartials.CreateOwned() expects the partials to be supplied as name/value pairs.

Internal partials (one of the SynMustache extensions) can be defined directly in the main template:

  mustache := TSynMustache.Parse('{{<partial}}1'#$A'2{{name}}{{/partial}}{{>partial}}4');
  html := mustache.RenderJSON('{name:3}');
  // now html='1'#$A'234'

Internationalization

You can define {{"some text}} pseudo-variables in your templates, which text will be supplied to a callback, ready to be transformed on the fly: it may be convenient for i18n of web applications.

By default, the text will be written directly to the output buffer, but you can define a callback which may be used e.g. for text translation:

procedure TTestLowLevelTypes.MustacheTranslate(var English: string);
begin
  if English='Hello' then
    English := 'Bonjour' else
  if English='You have just won' then
    English := 'Vous venez de gagner';
end;

Of course, in a real application, you may assign a TLanguageFile.Translate(var English: string) method, as defined in the mORMoti18n.pas unit.

Then, you will be able to define your template as such:

  mustache := TSynMustache.Parse(
    '{{"Hello}} {{name}}'#13#10'{{"You have just won}} {{value}} {{"dollars}}!');
  html := mustache.RenderJSON('{name:?,value:?}',[],['Chris',10000],nil,MustacheTranslate);
  // now html='Bonjour Chris'#$D#$A'Vous venez de gagner 10000 dollars!'

All text has indeed been translated as expected.

Feedback is welcome on our forum, as usual!

mORMot on GitHub


There was a long-standing request from customers about publishing our source code repository on GitHub.

We like our self-hosted Fossil repository a lot, and will continue to use it as our main system, including issue tracking and wiki, for our official web site.

But we created a repository on GitHub, on https://github.com/synopse/mORMot

Git, as a source control management system, sounds pretty good for handling the source code tree.
It is pretty similar to Fossil (both are distributed SCMs), but much heavier and more difficult to install/configure.

But I would not say the same about the "GitHub for Windows" tool.
For a somewhat huge project like mORMot, it is slow and unresponsive, and, after a long wait, uses more than 400 MB of memory just to display the repository.
Sometimes, it just crashes. The UI looks nice, but is very difficult to work with.
Another awfully fat and non-working WPF / .Net application!

Tickets and the wiki will remain on our own site.
But the source code will be committed to both systems, i.e. our self-hosted Fossil repository and GitHub.

We wrote a simple tool to make this easy.

"GitHub for Windows" was not an option for us!
What bloatware!

Feedback is welcome on our forum, as usual.

MongoDB database access


MongoDB (from "humongous") is a cross-platform document-oriented database system, and certainly the best known NoSQL database.
According to http://db-engines.com in April 2014, MongoDB is in 5th place among the most popular database management systems, and in first place among NoSQL database management systems.
Our mORMot framework gives premium access to this database, featuring full NoSQL and Object-Document Mapping (ODM) abilities to the framework.

Integration is made at two levels:

  • Direct low-level access to the MongoDB server, in the SynMongoDB.pas unit;
  • Close integration with our ORM (which becomes de facto an ODM), in the mORMotMongoDB.pas unit.

MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas (MongoDB calls the format BSON), which perfectly matches mORMot's RESTful approach.

In this first article, we will detail direct low-level access to the MongoDB server, via the SynMongoDB.pas unit.

MongoDB client

The SynMongoDB.pas unit features direct optimized access to a MongoDB server.

It gives access to any BSON data, including documents, arrays, and MongoDB's custom types (like ObjectID, dates, binary, regex or Javascript):

  • For instance, a TBSONObjectID can be used to create some genuine document identifiers on the client side (MongoDB does not generate the IDs for you: a common way is to generate unique IDs on the client side - see the sketch after this list);
  • Generation of BSON content from any Delphi types (via TBSONWriter);
  • Fast in-place parsing of the BSON stream, without any memory allocation (via TBSONElement);
  • A TBSONVariant custom variant type, to store MongoDB's custom type values;
  • Interaction with the SynCommons' TDocVariant custom variant type as document storage and late-binding access;
  • Marshalling BSON to and from JSON, with the MongoDB extended syntax for handling its custom types.
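
For instance, a client-side identifier may be computed as such (a minimal sketch, assuming the ComputeNew and ToText methods of the TBSONObjectID record in SynMongoDB.pas):

var oid: TBSONObjectID;
begin
  oid.ComputeNew;      // compute a genuine identifier on the client side
  writeln(oid.ToText); // e.g. write its hexadecimal text representation
end;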

This unit defines some objects able to connect and manage databases and collections of documents on any MongoDB servers farm:

  • Connection to one or several servers, including secondary hosts, via the TMongoClient class;
  • Access to any database instance, via the TMongoDatabase class;
  • Access to any collection, via the TMongoCollection class;
  • It features some nice abilities about speed, like BULK insert or delete mode, and explicit Write Concern settings.

At collection level, you can have direct access to the data, with high level structures like TDocVariant/TBSONVariant, with easy-to-read JSON, or low level BSON content.
You can also tune most aspects of the client process, e.g. about error handling or write concerns (i.e. how remote data modifications are acknowledged).

Connecting to a server

Here is some sample code, which is able to connect to a MongoDB server, and returns the server time:

var Client: TMongoClient;
    DB: TMongoDatabase;
    serverTime: TDateTime;
    res: variant; // we will return the command result as TDocVariant
    errmsg: RawUTF8;
begin
  Client := TMongoClient.Create('localhost',27017);
  try
    DB := Client.Database['mydb'];
    writeln('Connecting to ',DB.Name); // will write 'mydb'
    errmsg := DB.RunCommand('hostInfo',res); // run a command
    if errmsg<>'' then
      exit; // quit on any error
    serverTime := res.system.currentTime; // direct conversion to TDateTime
    writeln('Server time is ',DateTimeToStr(serverTime));
  finally
    Client.Free; // will release the DB instance
  end;
end;

Note that for this low-level command, we used a TDocVariant, and its late-binding abilities.

In fact, if you put your mouse over the res variable during debugging, you will see the following JSON content:

{"system":{"currentTime":"2014-05-06T15:24:25","hostname":"Acer","cpuAddrSize":64,"memSizeMB":3934,"numCores":4,"cpuArch":"x86_64","numaEnabled":false},"os":{"type":"Windows","name":"Microsoft Windows 7","version":"6.1 SP1 (build 7601)"},"extra":{"pageSize":4096},"ok":1}

And we simply access the server time by writing res.system.currentTime.

Adding some documents to the collection

We will now explain how to add documents to a given collection.

We assume that we have a DB: TMongoDatabase instance available. Then we will create the documents with a TDocVariant instance, which will be filled via late-binding, and via a doc.Clear pseudo-method used to flush any previous property value:

var Coll: TMongoCollection;
    doc: variant;
    i: integer;
begin
  Coll := DB.CollectionOrCreate[COLL_NAME];
  TDocVariant.New(doc);
  for i := 1 to 10 do begin
    doc.Clear;
    doc.Name := 'Name '+IntToStr(i+1);
    doc.Number := i;
    Coll.Save(doc);
    writeln('Inserted with _id=',doc._id);
  end;
end;

Thanks to TDocVariant late-binding abilities, code is pretty easy to understand and maintain.

This code will display the following on the console:

Inserted with _id=5369029E4F901EE8114799D9
Inserted with _id=5369029E4F901EE8114799DA
Inserted with _id=5369029E4F901EE8114799DB
Inserted with _id=5369029E4F901EE8114799DC
Inserted with _id=5369029E4F901EE8114799DD
Inserted with _id=5369029E4F901EE8114799DE
Inserted with _id=5369029E4F901EE8114799DF
Inserted with _id=5369029E4F901EE8114799E0
Inserted with _id=5369029E4F901EE8114799E1
Inserted with _id=5369029E4F901EE8114799E2

It means that the Coll.Save() method was clever enough to understand that the supplied document does not have any _id field, so will compute one on the client side before sending the document data to the MongoDB server.

We may have written:

for i := 1 to 10 do begin
    doc.Clear;
    doc._id := ObjectID;
    doc.Name := 'Name '+IntToStr(i+1);
    doc.Number := i;
    Coll.Save(doc);
    writeln('Inserted with _id=',doc._id);
  end;
end;

Which will compute the document identifier explicitly before calling Coll.Save().
In this case, we may have called Coll.Insert() directly, which is somewhat faster.

Note that you are not required to use a MongoDB ObjectID as identifier. You can use any value, if you are sure that it will be unique. For instance, you can use an integer:

for i := 1 to 10 do begin
    doc.Clear;
    doc._id := i;
    doc.Name := 'Name '+IntToStr(i+1);
    doc.Number := i;
    Coll.Insert(doc);
    writeln('Inserted with _id=',doc._id);
  end;
end;

The console will display now:

Inserted with _id=1
Inserted with _id=2
Inserted with _id=3
Inserted with _id=4
Inserted with _id=5
Inserted with _id=6
Inserted with _id=7
Inserted with _id=8
Inserted with _id=9
Inserted with _id=10

Note that the mORMot ORM will compute a genuine series of integers in a similar way, which will be used as expected by the TSQLRecord.ID primary key property.

The TMongoCollection class can also write a list of documents, and send them at once to the MongoDB server: this BULK insert mode - close to the Array Binding feature of some SQL providers, as implemented in our SynDB classes (see BATCH sequences for adding/updating/deleting records) - can increase insertion speed by a factor of 10, even when connected to a local instance: imagine how much time it may save over a physical network!

For instance, you may write:

var docs: TVariantDynArray;
...
  SetLength(docs,COLL_COUNT);
  for i := 0 to COLL_COUNT-1 do begin
    TDocVariant.New(docs[i]);
    docs[i]._id := ObjectID; // compute new ObjectID on the client side
    docs[i].Name := 'Name '+IntToStr(i+1);
    docs[i].FirstName := 'FirstName '+IntToStr(i+COLL_COUNT);
    docs[i].Number := i;
  end;
  Coll.Insert(docs); // insert all values at once
...

You will find later on some numbers about the speed increase brought by such BULK insertion.

Retrieving the documents

You can retrieve the document as a TDocVariant instance:

var doc: variant;
...
  doc := Coll.FindOne(5);
  writeln('Name: ',doc.Name);
  writeln('Number: ',doc.Number);

Which will write on the console:

Name: Name 6
Number: 5

You have access to the whole Query parameter, if needed:

  doc := Coll.FindDoc('{_id:?}',[5]);
  doc := Coll.FindOne(5); // same as previous

This Query filter is similar to a WHERE clause in SQL. You can write complex search patterns, if needed - see http://docs.mongodb.org/manual/reference/method/db.collection.find for reference.

You can retrieve a list of documents, as a dynamic array of TDocVariant:

var docs: TVariantDynArray;
...
  Coll.FindDocs(docs);
  for i := 0 to high(docs) do
    writeln('Name: ',docs[i].Name,'  Number: ',docs[i].Number);

Which will output:

Name: Name 2  Number: 1
Name: Name 3  Number: 2
Name: Name 4  Number: 3
Name: Name 5  Number: 4
Name: Name 6  Number: 5
Name: Name 7  Number: 6
Name: Name 8  Number: 7
Name: Name 9  Number: 8
Name: Name 10  Number: 9
Name: Name 11  Number: 10

If you want to retrieve the documents directly as JSON, we can write:

var json: RawUTF8;
...
  json := Coll.FindJSON(null,null);
  writeln(json);
...

This will append the following to the console:

[{"_id":1,"Name":"Name 2","Number":1},{"_id":2,"Name":"Name 3","Number":2},{"_id
":3,"Name":"Name 4","Number":3},{"_id":4,"Name":"Name 5","Number":4},{"_id":5,"N
ame":"Name 6","Number":5},{"_id":6,"Name":"Name 7","Number":6},{"_id":7,"Name":"
Name 8","Number":7},{"_id":8,"Name":"Name 9","Number":8},{"_id":9,"Name":"Name 1
0","Number":9},{"_id":10,"Name":"Name 11","Number":10}]

You can note that FindJSON() has two parameters, which are the Query filter, and a Projection mapping (similar to the column names of a SELECT col1,col2).
So we may have written:

  json := Coll.FindJSON('{_id:?}',[5]);
  writeln(json);

Which would output:

[{"_id":5,"Name":"Name 6","Number":5}]

Note here that we used an overloaded FindJSON() method, which accepts the MongoDB extended syntax (here, the field name is unquoted), and parameters as variables.
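
The same parametrized syntax accepts any standard MongoDB query operator. For instance, a range query may be written as such (a sketch; $lt is the regular MongoDB "less than" operator):

  json := Coll.FindJSON('{Number:{$lt:?}}',[3]);
  // should return [{"_id":1,"Name":"Name 2","Number":1},{"_id":2,"Name":"Name 3","Number":2}]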

We can specify a projection:

  json := Coll.FindJSON('{_id:?}',[5],'{Name:1}');
  writeln(json);

Which will only return the "Name" and "_id" fields (since _id is, by MongoDB convention, always returned:

[{"_id":5,"Name":"Name 6"}]

To return only the "Name" field, you can specify '_id:0,Name:1' as extended JSON for the projection parameter.

[{"Name":"Name 6"}]

There are other methods able to retrieve data directly as BSON binary data. They will be used for best speed, e.g. in conjunction with our ORM; but for most end-user code, using TDocVariant is safer and easier to maintain.

Updating or deleting documents

The TMongoCollection class has some methods dedicated to alter existing documents.

At first, the Save() method can be used to update a document which has been first retrieved:

  doc := Coll.FindOne(5);
  doc.Name := 'New!';
  Coll.Save(doc);
  writeln('Name: ',Coll.FindOne(5).Name);

Which will write:

Name: New!

Note that we used here an integer value (5) as key, but we may use an ObjectID instead, if needed.

The Coll.Save() method could be changed into Coll.Update(), which expects an explicit Query operator, in addition to the updated document content:

  doc := Coll.FindOne(5);
  doc.Name := 'New!';
  Coll.Update(BSONVariant(['_id',5]),doc);
  writeln('Name: ',Coll.FindOne(5).Name);

Note that by MongoDB's design, any call to Update() will replace the whole document.

For instance, if you write:

  writeln('Before: ',Coll.FindOne(3));
  Coll.Update('{_id:?}',[3],'{Name:?}',['New Name!']);
  writeln('After:  ',Coll.FindOne(3));

Then the Number field will disappear!

Before: {"_id":3,"Name":"Name 4","Number":3}
After:  {"_id":3,"Name":"New Name!"}

If you need to update only some fields, you will have to use the $set modifier:

  writeln('Before: ',Coll.FindOne(4));
  Coll.Update('{_id:?}',[4],'{$set:{Name:?}}',['New Name!']);
  writeln('After:  ',Coll.FindOne(4));

Which will write on the console the value as expected:

Before: {"_id":4,"Name":"Name 5","Number":4}
After:  {"_id":4,"Name":"New Name!","Number":4}

Now the Number field remains untouched.

You can also use the Coll.UpdateOne() method, which will update the supplied fields, and leave the unspecified fields untouched:

  writeln('Before: ',Coll.FindOne(2));
  Coll.UpdateOne(2,_Obj(['Name','NEW']));
  writeln('After:  ',Coll.FindOne(2));

Which will output as expected:

Before: {"_id":2,"Name":"Name 3","Number":2}
After:  {"_id":2,"Name":"NEW","Number":2}

You can refer to the documentation of the SynMongoDB.pas unit, to find out all functions, classes and methods available to work with MongoDB.

Write Concern and Performance

You can take a look at the MongoDBTests.dpr sample - located in the SQLite3-MongoDB sub-folder of the source code repository - and the TTestDirect classes, to find out some performance information.

In fact, this TTestDirect is inherited twice, to run the same tests with two different write concern settings.

The difference between the two classes will take place at client initialization:

procedure TTestDirect.ConnectToLocalServer;
...
  fClient := TMongoClient.Create('localhost',27017);
  if ClassType=TTestDirectWithAcknowledge then
    fClient.WriteConcern := wcAcknowledged else
  if ClassType=TTestDirectWithoutAcknowledge then
    fClient.WriteConcern := wcUnacknowledged;
...

wcAcknowledged is the default safe mode: the MongoDB server confirms the receipt of the write operation. Acknowledged write concern allows clients to catch network, duplicate key, and other errors. But it adds an additional round-trip from the client to the server, and waits for the command to be finished before returning the error status: so it will slow down the write process.

With wcUnacknowledged, MongoDB does not acknowledge the receipt of write operation. Unacknowledged is similar to errors ignored; however, drivers attempt to receive and handle network errors when possible. The driver's ability to detect network errors depends on the system's networking configuration.

The speed difference between the two is worth mentioning, as stated by the regression tests status, running on a local MongoDB instance:

1. Direct access

1.1. Direct with acknowledge:
 - Connect to local server: 6 assertions passed  4.72ms
 - Drop and prepare collection: 8 assertions passed  9.38ms
 - Fill collection: 15,003 assertions passed  558.79ms
   5000 rows inserted in 548.83ms i.e. 9110/s, aver. 109us, 3.1 MB/s
 - Drop collection: no assertion  856us
 - Fill collection bulk: 2 assertions passed  74.59ms
   5000 rows inserted in 64.76ms i.e. 77204/s, aver. 12us, 7.2 MB/s
 - Read collection: 30,003 assertions passed  2.75s
   5000 rows read at once in 9.66ms i.e. 517330/s, aver. 1us, 39.8 MB/s
 - Update collection: 7,503 assertions passed  784.26ms
   5000 rows updated in 435.30ms i.e. 11486/s, aver. 87us, 3.7 MB/s
 - Delete some items: 4,002 assertions passed  370.57ms
   1000 rows deleted in 96.76ms i.e. 10334/s, aver. 96us, 2.2 MB/s
 Total failed: 0 / 56,527 - Direct with acknowledge PASSED  4.56s

1.2. Direct without acknowledge:
 - Connect to local server: 6 assertions passed  1.30ms
 - Drop and prepare collection: 8 assertions passed  8.59ms
 - Fill collection: 15,003 assertions passed  192.59ms
   5000 rows inserted in 168.50ms i.e. 29673/s, aver. 33us, 4.4 MB/s
 - Drop collection: no assertion  845us
 - Fill collection bulk: 2 assertions passed  68.54ms
   5000 rows inserted in 58.67ms i.e. 85215/s, aver. 11us, 7.9 MB/s
 - Read collection: 30,003 assertions passed  2.75s
   5000 rows read at once in 9.99ms i.e. 500150/s, aver. 1us, 38.5 MB/s
 - Update collection: 7,503 assertions passed  446.48ms
   5000 rows updated in 96.27ms i.e. 51933/s, aver. 19us, 7.7 MB/s
 - Delete some items: 4,002 assertions passed  297.26ms
   1000 rows deleted in 19.16ms i.e. 52186/s, aver. 19us, 2.8 MB/s
 Total failed: 0 / 56,527 - Direct without acknowledge PASSED  3.77s

As you can see, the reading speed is not affected by the Write Concern settings.
But data writing can be several times faster when each write command is not acknowledged.

Since there is no error handling, wcUnacknowledged is not to be used in production. You may use it for replication, or for data consolidation, e.g. feeding a database with a lot of existing data as fast as possible.
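
Switching mode is a matter of setting a single property on the client, as seen in the test code above:

  Client := TMongoClient.Create('localhost',27017);
  Client.WriteConcern := wcUnacknowledged; // e.g. for an initial bulk data feed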

Stay tuned for the next article, which will detail the MongoDB integration with mORMot's ORM...
Feedback is welcome on our forum, as usual!

MongoDB + mORMot ORM = ODM


MongoDB (from "humongous") is a cross-platform document-oriented database system, and certainly the best known NoSQL database.
According to http://db-engines.com in April 2014, MongoDB is in 5th place among the most popular database management systems, and in first place among NoSQL database management systems.
Our mORMot framework gives premium access to this database, featuring full NoSQL and Object-Document Mapping (ODM) abilities to the framework.

Integration is made at two levels:

  • Direct low-level access to the MongoDB server, in the SynMongoDB.pas unit;
  • Close integration with our ORM (which becomes de facto an ODM), in the mORMotMongoDB.pas unit.

MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas (MongoDB calls the format BSON), which perfectly matches mORMot's RESTful approach.

This second article will focus on integration of MongoDB with our ORM.

MongoDB + ORM = ODM

The mORMotMongoDB.pas unit is able to let any TSQLRecord class be persisted on a remote MongoDB server.

As a result, our ORM is able to be used as a NoSQL and Object-Document Mapping (ODM) framework, with almost no code change. Any MongoDB database can be accessed via RESTful commands, using JSON over HTTP - see JSON RESTful Client-Server.

This integration benefits from the other parts of the framework (e.g. our UTF-8 dedicated process, which is also the native encoding for BSON), so you can easily mix SQL and NoSQL databases with the exact same code, and are still able to tune any SQL or MongoDB request in your code, if necessary.

Register the TSQLRecord class

In the database model, we define a TSQLRecord class, as usual:

  TSQLORM = class(TSQLRecord)
  private
    fAge: integer;
    fName: RawUTF8;
    fDate: TDateTime;
    fValue: variant;
    fInts: TIntegerDynArray;
    fCreateTime: TCreateTime;
    fData: TSQLRawBlob;
  published
    property Name: RawUTF8 read fName write fName stored AS_UNIQUE;
    property Age: integer read fAge write fAge;
    property Date: TDateTime read fDate write fDate;
    property Value: variant read fValue write fValue;
    property Ints: TIntegerDynArray index 1 read fInts write fInts;
    property Data: TSQLRawBlob read fData write fData;
    property CreateTime: TCreateTime read fCreateTime write fCreateTime;
  end;

Note that we did not define any index ... attribute for the RawUTF8 property, as we would need for external SQL databases, since MongoDB does not expect any restriction on the length of text fields.

The property values will be stored in the native MongoDB layout, i.e. with a better coverage than the SQL types:

Delphi             MongoDB           Remarks

byte               int32
word               int32
integer            int32
cardinal           N/A               you should use Int64 instead
Int64              int64
boolean            boolean           0 is false, anything else is true
enumeration        int32             store the ordinal value of the enumerated item (i.e. starting at 0 for the first element)
set                int32             each bit corresponding to an enumerated item (therefore a set of up to 64 elements can be stored in such a field)
single             double
double             double
extended           double            stored as double (precision lost)
currency           double            safely converted to/from the currency type with fixed decimals, without rounding error
RawUTF8            UTF-8             this is the preferred field type for storing some textual content in the ORM
WinAnsiString      UTF-8             WinAnsi char-set (code page 1252) in Delphi
RawUnicode         UTF-8             UCS2 char-set in Delphi, as AnsiString
WideString         UTF-8             UCS2 char-set, as COM BSTR type (Unicode in all versions of Delphi)
SynUnicode         UTF-8             will be either WideString before Delphi 2009, or UnicodeString later
string             UTF-8             not to be used before Delphi 2009 (unless you may lose some data during conversion) - RawUTF8 is preferred in all cases
TDateTime          datetime          ISO 8601 encoded date time
TTimeLog           int64             as proprietary fast Int64 date time
TModTime           int64             the server date time will be stored when a record is modified (as proprietary fast Int64)
TCreateTime        int64             the server date time will be stored when a record is created (as proprietary fast Int64)
TSQLRecord         int32             RowID pointing to another record (warning: the field value contains pointer(RowID), not a valid object instance - the record content must be retrieved with late-binding via its ID, using a PtrInt(Field) typecast or the Field.ID method, or by using e.g. CreateJoined())
TSQLRecordMany     nothing           data is stored in a separate pivot table; for MongoDB, you should rather use data sharding and an embedded sub-document
TRecordReference   int32             store both ID and TSQLRecord type in a RecordRef-like value (use e.g. TSQLRest.Retrieve(Reference) to get a record content)
TPersistent        object            BSON object (from ObjectToJSON)
TCollection        array             BSON array of objects (from ObjectToJSON)
TObjectList        array             BSON array of objects (from ObjectToJSON) - see TJSONSerializer.RegisterClassForJSON for TObjectList serialization
TStrings           array             BSON array of strings (from ObjectToJSON)
TRawUTF8List       array             BSON array of strings (from ObjectToJSON)
any TObject        object            see TJSONSerializer.RegisterCustomSerializer for TObject serialization
TSQLRawBlob        binary            this type is an alias to RawByteString
dynamic arrays     array or binary   if the dynamic array can be saved as true JSON, it will be stored as a BSON array - otherwise, it will be stored in the TDynArray.SaveTo binary format
variant            array or object   BSON number, text, object or array, depending on the TDocVariant custom variant type or TBSONVariant stored value
record             binary or object  BSON as defined in code by overriding TSQLRecord.InternalRegisterCustomProperties

On the server side (there won't be any difference for the client), you define a TMongoClient, and assign it to a given TSQLRecord class:

  MongoClient := TMongoClient.Create('localhost',27017);
  DB := MongoClient.Database['dbname'];
  Model := TSQLModel.Create([TSQLORM]);
  Client := TSQLRestClientDB.Create(Model,nil,':memory:',TSQLRestServerDB);
  if StaticMongoDBRegister(TSQLORM,Client.Server,DB,'collectionname')=nil then
    raise Exception.Create('Error');

And... that's all!

You can then use any ORM command, as usual:

  writeln(Client.TableRowCount(TSQLORM)=0);

As with external databases, you can specify the field names mapping between the objects and the MongoDB collection.
By default, the TSQLRecord.ID property is mapped to the MongoDB's _id field, and the ORM will populate this _id field with a sequence of integer values, just like any TSQLRecord table.
You can specify your own mapping, using for instance:

 aModel.Props[aClass].ExternalDB.MapField(..)

Since the field names are stored within the document itself, it may be a good idea to use shorter naming for the MongoDB collection. It may save some storage space, when working with a huge number of documents.
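
For instance, the following sketch (assuming the same Model and TSQLORM as above) would map the Name property to a one-letter field name in the collection:

  // store TSQLORM.Name as a short "n" field in the MongoDB documents
  Model.Props[TSQLORM].ExternalDB.MapField('Name','n');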

Once the TSQLRecord is mapped to a MongoDB collection, you can always have direct access to the TMongoCollection instance later on, by calling:

 (aServer.StaticDataServer[aClass] as TSQLRestServerStaticMongoDB).Collection

This may allow any specific task, including any tuned query or process.
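
For instance, here is a minimal sketch of such a tuned process, reusing the FindJSON() method presented in the previous article (aServer being the running TSQLRestServer instance, and Coll a TMongoCollection variable):

  Coll := (aServer.StaticDataServer[TSQLORM] as TSQLRestServerStaticMongoDB).Collection;
  writeln(Coll.FindJSON('{Name:?}',['Name 10'])); // direct MongoDB query, bypassing the ORM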

ORM/ODM CRUD methods

You can add documents with the standard CRUD methods of the ORM, as usual:

  R := TSQLORM.Create;
  try
    for i := 1 to COLL_COUNT do begin
      R.Name := 'Name '+Int32ToUTF8(i);
      R.Age := i;
      R.Date := 1.0*(30000+i);
      R.Value := _ObjFast(['num',i]);
      R.Ints := nil;
      R.DynArray(1).Add(i);
      assert(Client.Add(R,True)=i);
    end;
  finally
    R.Free;
  end;

As we already saw, the framework is able to handle any kind of properties, including complex types like dynamic arrays or variant.
In the above code, a TDocVariant document has been stored in R.Value, and a dynamic array of integer values is accessed via its index 1 shortcut and the TSQLRecord.DynArray() method.

The usual Retrieve / Delete / Update methods are available:

  R := TSQLORM.Create;
  try
    for i := 1 to COLL_COUNT do begin
      Check(Client.Retrieve(i,R));
      // here R instance contains all values of one document, excluding BLOBs
    end;
  finally
    R.Free;
  end;

You can define a WHERE clause, as if the back-end were a regular SQL database:

    R := TSQLORM.CreateAndFillPrepare(Client,'ID=?',[i]);
    try
    ...

The current implementation understands one condition over one single field, with the = > >= < <= IN operators. More advanced queries are possible, but they won't be handled as SQL: they require direct access to the TMongoCollection.

To perform a query and retrieve the content of several documents, you can use regular CreateAndFillPrepare or FillPrepare methods:

  R := TSQLORM.CreateAndFillPrepare(Client,'');
  try
    n := 0;
    while R.FillOne do begin
      // here R instance contains all values of one document, excluding BLOBs
      inc(n);
    end;
    assert(n=COLL_COUNT);
  finally
    R.Free;
  end;

A WHERE clause can also be defined for CreateAndFillPrepare or FillPrepare methods.
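
For instance, only a range of documents may be retrieved as such (a sketch, given the Age values filled above):

  R := TSQLORM.CreateAndFillPrepare(Client,'Age<?',[3]);
  try
    while R.FillOne do
      writeln(R.Name); // should write 'Name 1' then 'Name 2'
  finally
    R.Free;
  end;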

BATCH mode

In addition to individual CRUD operations, our MongoDB integration is able to use BATCH mode for adding or deleting documents.

You can write the exact same code as with any SQL back-end:

  Client.BatchStart(TSQLORM);
  R := TSQLORM.Create;
  try
    for i := 1 to COLL_COUNT do begin
      R.Name := 'Name '+Int32ToUTF8(i);
      R.Age := i;
      R.Date := 1.0*(30000+i);
      R.Value := _ObjFast(['num',i]);
      R.Ints := nil;
      R.DynArray(1).Add(i);
      assert(Client.BatchAdd(R,True)>=0);
    end;
  finally
    R.Free;
  end;
  assert(Client.BatchSend(IDs)=HTML_SUCCESS);

Or for deletion:

  Client.BatchStart(TSQLORM);
  for i := 5 to COLL_COUNT do
    if i mod 5=0 then
      assert(Client.BatchDelete(i)>=0);
  assert(Client.BatchSend(IDs)=HTML_SUCCESS);

The speed benefit may be huge when compared to individual Add/Delete operations, even on a local MongoDB server. We will see some benchmark numbers now.

ORM/ODM performance

You can take a look at Data access benchmark to compare MongoDB as back-end for our ORM classes.

Compared to external SQL engines, it features very high speed, low CPU use, and almost no difference in usage. We interfaced the BatchAdd() and BatchDelete() methods to benefit from the MongoDB BULK process, and avoided most memory allocations during the process.

Here are some numbers, extracted from the MongoDBTests.dpr sample, which reflects the performance of our ORM/ODM, depending on the Write Concern mode used:

2. ORM

 2.1. ORM with acknowledge:
  - Connect to local server: 6 assertions passed  18.65ms
  - Insert: 5,002 assertions passed  521.25ms
     5000 rows inserted in 520.65ms i.e. 9603/s, aver. 104us, 2.9 MB/s
  - Insert in batch mode: 5,004 assertions passed  65.37ms
     5000 rows inserted in 65.07ms i.e. 76836/s, aver. 13us, 8.4 MB/s
  - Retrieve: 45,001 assertions passed  640.95ms
     5000 rows retrieved in 640.75ms i.e. 7803/s, aver. 128us, 2.1 MB/s
  - Retrieve all: 40,001 assertions passed  20.79ms
     5000 rows retrieved in 20.33ms i.e. 245941/s, aver. 4us, 27.1 MB/s
  - Retrieve one with where clause: 45,410 assertions passed  673.01ms
     5000 rows retrieved in 667.17ms i.e. 7494/s, aver. 133us, 2.0 MB/s
  - Update: 40,002 assertions passed  681.31ms
     5000 rows updated in 660.85ms i.e. 7565/s, aver. 132us, 2.4 MB/s
  - Blobs: 125,003 assertions passed  2.16s
     5000 rows updated in 525.97ms i.e. 9506/s, aver. 105us, 2.4 MB/s
  - Delete: 38,003 assertions passed  175.86ms
     1000 rows deleted in 91.37ms i.e. 10944/s, aver. 91us, 2.3 MB/s
  - Delete in batch mode: 33,003 assertions passed  34.71ms
     1000 rows deleted in 14.90ms i.e. 67078/s, aver. 14us, 597 KB/s
  Total failed: 0 / 376,435  - ORM with acknowledge PASSED  5.00s

 2.2. ORM without acknowledge:
  - Connect to local server: 6 assertions passed  16.83ms
  - Insert: 5,002 assertions passed  179.79ms
     5000 rows inserted in 179.15ms i.e. 27908/s, aver. 35us, 3.9 MB/s
  - Insert in batch mode: 5,004 assertions passed  66.30ms
     5000 rows inserted in 31.46ms i.e. 158891/s, aver. 6us, 17.5 MB/s
  - Retrieve: 45,001 assertions passed  642.05ms
     5000 rows retrieved in 641.85ms i.e. 7789/s, aver. 128us, 2.1 MB/s
  - Retrieve all: 40,001 assertions passed  20.68ms
     5000 rows retrieved in 20.26ms i.e. 246718/s, aver. 4us, 27.2 MB/s
  - Retrieve one with where clause: 45,410 assertions passed  680.99ms
     5000 rows retrieved in 675.24ms i.e. 7404/s, aver. 135us, 2.0 MB/s
  - Update: 40,002 assertions passed  231.75ms
     5000 rows updated in 193.74ms i.e. 25807/s, aver. 38us, 3.6 MB/s
  - Blobs: 125,003 assertions passed  1.44s
     5000 rows updated in 150.58ms i.e. 33202/s, aver. 30us, 2.6 MB/s
  - Delete: 38,003 assertions passed  103.57ms
     1000 rows deleted in 19.73ms i.e. 50668/s, aver. 19us, 2.4 MB/s
  - Delete in batch mode: 33,003 assertions passed  47.50ms
     1000 rows deleted in 364us i.e. 2747252/s, aver. 0us, 23.4 MB/s
  Total failed: 0 / 376,435  - ORM without acknowledge PASSED  3.44s

As for direct MongoDB access, the wcUnacknowledged mode is not to be used in production, but may be very useful in some particular scenarios. As expected, the reading process is not impacted by the Write Concern mode.
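The mode is set on the TMongoClient instance created above, before any write - a one-liner sketch:

  MongoClient.WriteConcern := wcUnacknowledged;
  // fire-and-forget writes: errors (e.g. duplicated keys) won't be reported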

You can take a look at the previous blog article, about low-level MongoDB direct access.
Feedback is welcome on our forum, as usual!

MongoDB + mORMot benchmark


Here are some benchmark charts about MongoDB integration in mORMot's ORM.

MongoDB appears as a serious competitor to SQL databases, with the potential benefit of horizontal scaling and installation/administration ease - performance is very high, and its document-based storage fits perfectly with mORMot's advanced ORM features like Shared nothing architecture (or sharding).

The following tests used the Synopse mORMot framework 1.18, compiled with Delphi XE4, against SQLite 3.8.4.3.

We won't show all database engines, only the most representative ones.
Please refer to this other benchmark article for more complete information.

Insertion speed

'MongoDB ack' and 'MongoDB no ack' stand for direct MongoDB access (SynMongoDB.pas), with or without write acknowledgement.

For these tests, we used a local MongoDB 2.6 instance running in 64-bit mode.

                         Direct    Batch    Trans   Batch Trans
 SQLite3 (file full)        466      437    81754      108752
 SQLite3 (file off)        2012     2044    84550      111731
 SQLite3 (file off exc)   25079    28192    83943      115159
 SQLite3 (mem)            69961    94871    87279      118657
 TObjectList (static)    232385   400608   252678      402803
 TObjectList (virtual)   242812   409131   240003      405712
 SQLite3 (ext full)         490    11918    87556      151144
 SQLite3 (ext off)         2141    47266    89249      160616
 SQLite3 (ext off exc)    33199   145471    90025      158815
 SQLite3 (ext mem)        76411   184706    89834      192618
 MongoDB (ack)            10081    84585     9800       85232
 MongoDB (no ack)         33223   189186    27974      206355
 ZEOS SQlite3               474    11917    36740       55767
 FireDAC SQlite3          20735    40083    40408      121359
 ZEOS Firebird            10056    10155    18769       20335
 FireDAC Firebird         19742    48684    19904       47803
 MSSQL2012 local           3470    35510    10750       47653
 ODBC MSSQL2012            3659     6252     5537        6290
 FireDAC MSSQL2012         3276     5838     9540       40040
 ZEOS PostgreSQL           2953    23740     6913       29780
 ODBC PostgreSQL           2902    25040     3576       28714
 FireDAC PostgreSQL        3054    23329     7149       24844

MongoDB bulk insertion has been implemented, which shows an amazing speed increase in Batch mode. Depending on the MongoDB Write Concern mode, insertion speed can be very high: by default, every write is acknowledged by the server, but you can by-pass this request by setting the wcUnacknowledged mode. Note that in this case, any error (e.g. a duplicated value for a unique field) will never be notified, so it should not be used in production, unless you need to quickly populate a database, or consolidate some data as fast as possible.

Read speed

                         By one   All Virtual   All Direct
 SQLite3 (file full)      21607        455083       458757
 SQLite3 (file off)       22177        456454       458001
 SQLite3 (file off exc)   98014        454215       457540
 SQLite3 (mem)            99190        461808       464252
 TObjectList (static)    235504        756773       750300
 TObjectList (virtual)   233666        332402       733460
 SQLite3 (ext full)      103917        210863       458379
 SQLite3 (ext off)       101498        209634       441033
 SQLite3 (ext off exc)   101839        218292       439947
 SQLite3 (ext mem)       102414        185494       438904
 MongoDB (ack)             8002        242353       251268
 MongoDB (no ack)          8234        252079       254582
 ZEOS SQlite3             31135        173593       263060
 FireDAC SQlite3           6318         67169        92291
 ZEOS Firebird            12076         67853        85828
 FireDAC Firebird          1918         37113        44894
 MSSQL2012 local           7904        182401       349797
 ODBC MSSQL2012            8693        113973       178526
 FireDAC MSSQL2012         3054         63730        86051
 ZEOS PostgreSQL           7031        122327       176298
 ODBC PostgreSQL           7281         66843        91489
 FireDAC PostgreSQL        1644         45184        61252


You can get more information about low-level integration of MongoDB in mORMot, or our integrated ORM/ODM support in the framework.
Feedback is welcome on our forum, as usual!


Benchmarking Mustache libraries: native SynMustache vs mustache.js/SpiderMonkey


I just wrote a small sample program, for benchmarking Mustache libraries: native SynMustache vs mustache.js running on SpiderMonkey 24...

And the winner is... SynMustache, which is 10 times faster, uses almost no memory during the process, and handles inlined {{>partials}} natively (whereas they have to be handled manually with mustache.js)!

Who says that Garbage Collection and immutable strings in modern JITted runtimes are faster than "native" Delphi applications?
Do you still prefer the "NextGen" roadmap?

The program is pretty simple.
It is a good sample of Mustache rendering principles.

It renders a recursive template taken from a well-known web site:

<h2>Example 6 : Recursively binding data to templates</h2>

<h3>Organization Structure</h3>
{{> person}}

{{<person}}
<div>
  <b>{{name}}</b> ({{title}})
  <div style='padding-left: 15px; padding-top: 5px;'>
    {{#manages}}
      {{>person}}
    {{/manages}}
  </div>
</div>
{{/person}}

This template is executed on the following context data:

{ title : "President", name : "Perry President", manages : [
    { title : "CTO", name : "Janet TechOff", manages : [
        { title : "Web Architect", name : "Hari Archie", manages : [
            { title : "Senior Developer", name : "Brenda Senior", manages : []},
            { title : "Developer", name : "Roger Develo", manages : []},
            { title : "Junior Developer", name : "Jerry Junior", manages : []}
        ]}
    ]},
    { title : "HRO", name : "Harold HarOff", manages : [
        { title : "HR Officer", name : "Susan McHorror", manages : []},
        { title : "HR Auditor", name : "Audrey O'Fae", manages : []}
    ]}
]}

The template is rendered in a loop over this context, and then we compare the speeds...
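On the SynMustache side, the rendering loop boils down to something like this - a minimal sketch, assuming template and context constants holding the two blocks above:

uses SynCommons, SynMustache;

var mustache: TSynMustache;
    html: RawUTF8;
    i: integer;
begin
  mustache := TSynMustache.Parse(template); // parsed templates are cached
  for i := 1 to 10000 do
    html := mustache.RenderJSON(context);   // handles the {{<person}} inlined partial natively
end;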
Both the SynMustache and the mustache.js/SpiderMonkey timings were shown as screenshots in the original post.

The native Delphi version, included within our mORMot framework, is 10 times faster than the JavaScript optimized library.
And if you look at the process explorer during the run, SynMustache does not have any memory increase, whereas the JavaScript engine will continuously increase/decrease its memory usage, due to the garbage collector...

I always prefer such benchmarks, pretty close to a real-world process, to optimistic and unrealistic benchmarks like a Mandelbrot computation.
Who computes fractals for their business? Unless you write a video game (but who would use a JIT and a GC for that?), your software will very likely process data and strings in memory, just as in this benchmark.

In fact, JavaScript runs pretty well on the latest SpiderMonkey, with more than 10,000 recursive blocks rendered per second.
The SynMustache performance (around 120,000 recursive blocks rendered per second) is so high that it may even sound like overkill...

Just remember: 10 times faster, on a server, means 10 times more clients served in the same time...
So 10 times more Return On Investment (ROI) for the same hardware, and probably a more integrated (cheaper) solution, since you won't need to spread the server software on several pieces of hardware!

If your managers begin to have doubts about JIT/GC/Java/C# speed and ROI - which is pretty much likely after years running J2EE servers around the globe - you can show them this sample.

Today, managers may be confident that JavaScript (and mono-threaded node.js) is the answer.
At least, they are told so.
But, even if they are far away from the technical stuff today, they know that such solutions consume a lot of computer resources: each tab in Chrome or FireFox exhausts their notebook or smartphone computing power!
JavaScript may be good enough on the client side, but it may reduce the ROI on servers.
RAM on servers does cost money...

Imagine how your business may benefit from a multi-threaded engine like mORMot, when compared to a mono-threaded node.js server...
And, in mORMot, every single thread is able to process 10 times more data than its JavaScript version!
So 10 times more clients on the same server, 10 times less money to invest, 10 times more money to win, with the stability of native code and a tuned memory process...
I'm quite sure they may start to be moon-eyed again at Delphi.

Especially if the company still has Delphi-skilled people in its teams.
Do not let your critical software be written by young developers who do not know about algorithms and data structures.
Delphi has the strengths of C/C++, but is easier to work with, thanks to a cleaner approach and all modern design concepts at hand.

OK... trolling over, but this is not far away from my own experience...

Feedback is welcome on our web site, as usual.

BREAKING CHANGE: TSQLRestServerStatic* classes are now renamed as TSQLRestStorage*


From the beginning, server-side storage tables which were not stored in a SQLite3 database were implemented via some classes inheriting from TSQLRestServerStatic.
This TSQLRestServerStatic inherited from TSQLRestServer, which did not make much sense (it was done out of laziness years ago, if I remember well).

Now, a new TSQLRestStorage class, directly inheriting from TSQLRest, is used for per-table storage.
This huge code refactoring results in a much cleaner design, and will enhance code maintainability.
Documentation has been updated to reflect the changes.

Note that this won't change anything when using the framework (apart from the new class names): it is an implementation detail, which had to be fixed.

In the mORMot units, you may also find other classes inheriting from TSQLRestStorage.

The TSQLRestStorage[InMemory][External] classes are in fact used to store some TSQLRecord tables in any non-SQL backend:

  • TSQLRestStorageExternal maps tables stored in an external database (e.g. Oracle, MSSQL, PostgreSQL, FireBird, MySQL or any OleDB/ODBC provider, via our SynDB optimized classes);
  • TSQLRestStorageInMemory stores the data in a TObjectList - with amazing performance;
  • TSQLRestStorageMongoDB will connect to a remote MongoDB server to store the tables as a NoSQL collection of documents.

Those classes are used within a main TSQLRestServer to host some given TSQLRecord classes, either in-memory, or on external databases.
They are not directly involved in our Client-Server architecture, but are implementation details on the server side.
From the client side, you do not have to worry about how the data is stored, just consume it via REST.
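For instance, hosting a TSQLRecord class in memory within a standard server could look like this - a minimal sketch, assuming a TSQLBaby class of our own and an existing Model:

  Server := TSQLRestServerDB.Create(Model,'data.db3');
  // host TSQLBaby in a (JSON-persisted) TSQLRestStorageInMemory instance,
  // while the other tables of the Model stay in the SQLite3 database
  Server.StaticDataCreate(TSQLBaby,'baby.json',{aBinaryFile=}false);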

Feedback is welcome in our forum, as usual!

"Native" means CPU-native... even Microsoft admits it!


There has been a lot of debate about what "native" means...

Especially from some big companies, which want to sell their compiler technologies...
For them, "native" is not CPU-native, but framework-native...

Even Microsoft claimed since the beginning of C# that the managed .Net model was faster, due to "optimized JIT or NGEN compilation"...

But even Microsoft is clearly changing its mind!

They are switching to "Native" for their framework, especially when targeting mobile platforms!

So they officially switched to "Native":

For users of your apps, .NET Native offers these advantages:

  • Fast execution times
  • Consistently speedy startup times
  • Low deployment and update costs
  • Optimized app memory usage

See this official MSDN article for reference...

Isn't this what we, Delphi users, have been claiming for decades?

Isn't this what an Open Source project like our little mORMot shows?

And you can use "native UI controls" with Delphi or FPC, if you need to.

Do not trust the marketing sirens, especially when they are sponsored by billion-dollar companies...
I would not trust the Embarcadero marketers by all means, but at least they were ahead of their time, pushing the right argument here...
I'm less pleased by the NextGen roadmap, and by the performance loss of this model in its current implementation...

Feedback is welcome on our forum, as usual!

New sample for JSON performance: mORMot vs SuperObject/XSuperObject/dwsJSON/DBXJSON


We have just added a new "25 - JSON performance" sample to benchmark JSON processing, using the most well-known Delphi libraries...

A new fight
featuring
mORMot vs SuperObject/XSuperObject/dwsJSON/DBXJSON

On mORMot side, it covers TDocVariant, late binding, TSQLTable, ORM, record access, BSON...

We tried to face several scenarios:

  • parse/access/write iteration over a small JSON document,
  • read of deeply nested 680 KB JSON (here mORMot is slower than SO/dwsJSON),
  • read of one 180 MB JSON file (with on-the-fly adaptation to fit a record layout),
  • named access to all rows and columns of a 1 MB JSON table, extracted from a SQL request (with comparison with our ORM performance).

On average and in details, mORMot is the fastest in almost all scenarios (with an amazing performance for table/ORM processing), dwsJSON performs very well (better than SuperObject), and DBXJSON is the slowest (by far, but XE6 version is faster than XE4).

Here are some values, compiled with XE6 on my Core i7 notebook.

You have the number of iterations per second, and the peak memory used during each process.

   JSON benchmarking
  -------------------

1. Small content

 1.1. Synopse record:

  - Read: 25,000 assertions passed  70.67ms  353,721/s
  - Access: 50,000 assertions passed  493us  50,709,939/s
  - Write: 25,000 assertions passed  49.25ms  507,562/s
  Total failed: 0 / 100,000  - Synopse record PASSED  121.03ms

 1.2. Synopse variant:
  - Read: 25,000 assertions passed  120.29ms  207,827/s
  - Access direct: 50,000 assertions passed  29.04ms  860,614/s
  - Access late binding: 50,000 assertions passed  98.13ms  254,764/s
  - Write: 25,000 assertions passed  57.84ms  432,204/s
  Total failed: 0 / 150,000  - Synopse variant PASSED  306.04ms

 1.3. Super object record:
  - Read: 25,000 assertions passed  2.00s  12,470/s
  - Access: 50,000 assertions passed  408us  61,274,509/s
  - Write: 25,000 assertions passed  1.71s  14,539/s
  Total failed: 0 / 100,000  - Super object record PASSED  3.72s

 1.4. Super object properties:
  - Read: 25,000 assertions passed  2.14s  11,631/s
  - Access: 50,000 assertions passed  1.92s  12,971/s
  - Write: 25,000 assertions passed  186.63ms  133,952/s
  Total failed: 0 / 100,000  - Super object properties PASSED  4.26s

 1.5. dws JSON:
  - Read: 25,000 assertions passed  136.42ms  183,250/s
  - Access: 50,000 assertions passed  37.07ms  674,236/s
  - Write: 25,000 assertions passed  97.86ms  255,464/s
  Total failed: 0 / 100,000  - dws JSON PASSED  273.66ms

 1.6. DBXJSON:
  - Read: 25,000 assertions passed  2.35s  10,622/s
  - Access: 50,000 assertions passed  23.38ms  1,069,244/s
  - Write: 25,000 assertions passed  309.64ms  80,737/s
  Total failed: 0 / 100,000  - DBXJSON PASSED  2.68s

2. Big content

 2.1. Depth content:

  - Download files if necessary: no assertion  384us
  - Synopse read variant: 1 assertion passed  87.99ms  284,100/s  337 KB
  - Synopse read to BSON: 2 assertions passed  2.55ms  9,784,735/s  155 KB
  - Super object read: 2 assertions passed  9.20ms  2,716,210/s  529 KB
  - dws JSON read: 1 assertion passed  5.55ms  4,503,693/s  439 KB
  - DBXJSON read: 1 assertion passed  92.20ms  271,126/s  679 KB
  Total failed: 0 / 7  - Depth content PASSED  202.86ms

 2.2. Table content:
  - Download files if necessary: no assertion  356us  23,112,359/s
  - Synopse parse: 1 assertion passed  2.69ms  3,052,690/s  1.2 MB
  - Synopse ORM loop: 41,135 assertions passed  6.14ms  1,339,465/s  1.2 MB
  - Synopse ORM list: 41,135 assertions passed  6.52ms  1,260,070/s  951 KB
  - Synopse table direct: 41,135 assertions passed  20.40ms  403,126/s  1.2 MB
  - Synopse table variant: 41,135 assertions passed  20.29ms  405,330/s  1.2 MB
  - Synopse doc variant: 41,137 assertions passed  39.80ms  206,661/s  4.6 MB
  - Synopse late binding: 41,137 assertions passed  34.45ms  238,768/s  4.6 MB
  - Synopse to BSON: 2 assertions passed  8.92ms  922,206/s  1.1 MB
  - Super object properties: 41,136 assertions passed  2.14s  3,840/s  6.3 MB
  - Super object record: 41,136 assertions passed  148.57ms  55,373/s  6.3 MB
  - dws JSON: 41,136 assertions passed  28.87ms  284,888/s  4.7 MB
  - DBXJSON: 1 assertion passed  236.75ms  34,749/s  9.9 MB
  Total failed: 0 / 370,226  - Table content PASSED  2.70s

 2.3. Huge content:
  - Download files if necessary: no assertion  428us
  - Synopse read record: 4 assertions passed  1.52s  135,810/s  122.6 MB
  - Synopse read variant: 2 assertions passed  2.45s  84,134/s  512.9 MB
  - Synopse read to BSON: 3 assertions passed  2.01s  102,333/s  168.1 MB
  - Super object read: 2 assertions passed  9.07s  22,769/s  1.1 GB
  - dws JSON read: 2 assertions passed  3.26s  63,323/s  672.7 MB
  - DBXJSON read: no assertion  703us  35,561,877/s
     DBXJSON will raise EOutOfMemory for 185 MB JSON in Win32 -> skip
  Total failed: 0 / 13  - Huge content PASSED  18.92s

Generated with: Delphi XE6 compiler
Time elapsed for all tests: 33.22s

Tests performed at 17/05/2014 08:47:02
Total assertions failed for all test suits:  0 / 1,020,246
! All tests passed successfully.

SuperObject has some issues with property name lookup...

I've written a dedicated TTestTableContent.SuperObjectRecord method: accessing the values via a record (and RTTI) is much faster than using the S[...] I[...] and similar methods.
The current SuperObject version does not support the XE6 compiler (I had to write some $ifdef by hand), and when compiled for Win64, the sample program just crashed... SuperObject needs some tuning!

It is worth saying that dwsJSON performs very well, for its purpose.
What is written in this blog article is perfectly true, in comparison to SuperObject or DBXJSON.
Even on Win64 platform.
Great work, Eric!

DBXJSON is pretty slow, and even raises an EOutOfMemory error in Win32 for the huge content (more than 2 GB is used!) - under Win64, it passes, with 3 GB used for the 180 MB JSON file.
In the meanwhile, mORMot uses 150 MB of memory with records. :)

Here are some numbers concerning XSuperObject:

 1.3. X super object record:
  - Read: 25,000 assertions passed  12.68s  1,971/s
  - Access: 50,000 assertions passed  517us  48,355,899/s
  - Write: 25,000 assertions passed  2.32s  10,737/s
  Total failed: 0 / 100,000  - X super object record PASSED  15.01s

 1.4. X super object properties:
  - Read: 25,000 assertions passed  10.14s  2,463/s
  - Access: 50,000 assertions passed  307.49ms  81,302/s
  - Write: 25,000 assertions passed  435.44ms  57,412/s
  Total failed: 0 / 100,000  - X super object properties PASSED  10.89s

I was not able to run SuperObject and XSuperObject in the same application at once... so you have to use compiler defines in the sample source code, to select which of the two libraries gets compiled.
XSuperObject is not optimized for speed: it is in fact very slow, even slower than SuperObject - and not in the race when compared to dwsJSON or mORMot.

The mORMot code has some advantages, especially for ORM / table processing.
The ability to use records and dynamic arrays to store the content makes it very convenient, and also powerful (see how we used enhanced RTTI for serialization, together with a custom serializer for the sub-record type storing "polygon / multi-polygon coordinates", which could not be mapped to regular records from JSON). Our record-based RTTI also gives impressive results, in terms of both speed and memory consumption.
And late-binding for property access gives very readable code.

To conclude, which syntax do you prefer?

// Synopse direct record access
Check(gloss.glossary.GlossDiv.GlossList.GlossEntry.GlossDef.GlossSeeAlso[0]='GML');
// Synopse TDocVariant with properties
Check(DocVariantData(doc.GetValueByPath([
'glossary','GlossDiv','GlossList','GlossEntry','GlossDef','GlossSeeAlso'])).Value[0]='GML');
// Synopse TDocVariant with late binding
Check(doc.glossary.GlossDiv.GlossList.GlossEntry.GlossDef.GlossSeeAlso._(0)='GML');
// SuperObject properties
check(obj['glossary.GlossDiv.GlossList.GlossEntry.GlossDef.GlossSeeAlso[0]'].AsString='GML');
// SuperObject direct record access
Check(gloss.glossary.GlossDiv.GlossList.GlossEntry.GlossDef.GlossSeeAlso[0]='GML');
// XSuperObject
check(obj['glossary.GlossDiv.GlossList.GlossEntry.GlossDef.GlossSeeAlso[0]'].AsString='GML');
// dwsJSON
check(obj['glossary']['GlossDiv']['GlossList']['GlossEntry']['GlossDef']['GlossSeeAlso'][0].AsString='GML');
// DBXJSON
check(((((((obj.GetValue('glossary') as TJSONObject).
GetValue('GlossDiv') as TJSONObject).
GetValue('GlossList') as TJSONObject).
GetValue('GlossEntry') as TJSONObject).
GetValue('GlossDef') as TJSONObject).
GetValue('GlossSeeAlso') as TJSONArray).Get(0).Value='GML');
Any feedback is welcome, including your own benchmark results, in our forum, as usual!

Automatic JSON serialization of record or dynamic arrays via Enhanced RTTI


Since Delphi 2010, the compiler generates additional RTTI at compilation, so that all record fields are described, and available at runtime.
By the way, this enhanced RTTI is one of the reasons why executables have grown so much in newer versions of the compiler.

Our SynCommons.pas unit is now able to use this enhanced information, and lets any record be serialized via the RecordLoad() and RecordSave() functions, and via the whole internal JSON marshalling process.

In short, you have nothing to do.
Just use your records as parameters and, with Delphi 2010 and up, they will be serialized as valid JSON objects.
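For instance - a minimal sketch, assuming an arbitrary TMyRec record, compiled with Delphi 2010 or later:

uses SynCommons;

type
  TMyRec = record
    Name: RawUTF8;
    Age: integer;
  end;

var R: TMyRec;
    json: RawUTF8;
begin
  R.Name := 'John';
  R.Age := 42;
  json := RecordSaveJSON(R,TypeInfo(TMyRec));
  // -> '{"Name":"John","Age":42}' thanks to the enhanced RTTI
  RecordLoadJSON(R,pointer(json),TypeInfo(TMyRec));
  // note: the JSON parser works in-place, so the json buffer is modified
end;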

Of course, text-based definition or callback-based registration are still at hand, and will be used with older versions of Delphi.
But they can also be used to by-pass or extend the enhanced-RTTI serialization, even on newer versions of the compiler.

Enhanced RTTI support for records and dynamic arrays was added by this commit.

The documentation has been enhanced in sync!
Please ensure that you downloaded the latest SAD 1.18 pdf revision!

Serialization for older Delphi versions

Sadly, the information needed to serialize a record is only available since Delphi 2010.

If your application is developed with any older revision (e.g. Delphi 7, Delphi 2007 or Delphi 2009), you won't be able to automatically serialize records as plain JSON objects directly.

You have several paths available:

  • By default, the record will be serialized as binary, and encoded as Base64 text;
  • Or you can define method callbacks which will write or read the data as you expect;
  • Or you can define the record layout as plain text.

Note that any custom serialization (either via callbacks, or via text definition) will override any previously registered method, including the mechanism using the enhanced RTTI.
You can change the default serialization to easily meet your requirements.
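As a minimal sketch - TMyRec and its layout text being arbitrary examples here:

  TTextWriter.RegisterCustomJSONSerializerFromText(
    TypeInfo(TMyRec),'Name: RawUTF8; Age: integer');
  // from now on, TMyRec is serialized as a JSON object, even on Delphi 7-2009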

For instance, this is what SynCommons.pas does for any TGUID content, which is serialized as the standard JSON text layout (e.g. "C9A646D3-9C61-4CB7-BFCD-EE2522C8F633"), and not following the TGUID record layout as defined in the RTTI, i.e. {"D1":12345678,"D2":23023,"D3":9323,"D4":"0123456789ABCDEF"} - which would be far from convenient.
