Each system has its restrictions. Some of ours are inherited from the SQL engine in use; some come from assumptions (nearly) knowingly made by the developers. These are the current restrictions:
The amount of money (in the 'cash' table) was stored (as of lms-1.1) as a 32-bit integer value, so if you had 5000 users you might have run into problems within 8 years or so. Nowadays (since lms-1.1.7 Hathor) we use a more appropriate type [decimal(9,2), with 2 significant places after the dot and 9 digits for the whole sum], and the maximum is 9'999'999.99 (the sum of all in/out cash operations). The procedures converting numbers to words can process numbers as big as 10^18.
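The "8 years or so" estimate above can be reproduced with a bit of arithmetic. This is an illustrative sketch only (not LMS code); the 40-units-per-month fee is an assumed figure chosen to show the order of magnitude:

```python
# Why a signed 32-bit counter of cents overflows after roughly 8 years
# for 5000 users (illustrative arithmetic, not LMS code).
INT32_MAX = 2**31 - 1                 # 2147483647 cents
max_total = INT32_MAX / 100           # ~21.4 million currency units

users, monthly_fee = 5000, 40         # monthly_fee is an assumed value
years = max_total / (users * monthly_fee * 12)   # just under 9 years

# The decimal(9,2) type used since lms-1.1.7 caps the running total
# at 9'999'999.99 instead of overflowing silently.
DECIMAL_9_2_MAX = 9_999_999.99
```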
We are switching to Unicode in the UI and the databases. Right now there are known problems. When you have to use an SQL engine with poor Unicode support, you should use the UI's converting capabilities. (PHP only? Watch out for perl scripts!) In the following example (file lms.ini) we have a database in LATIN2 (aka ISO-8859-2) encoding.
[database]
user = lms
password = lms12354
server_encoding = latin2 ; if DB is not encoded in Unicode
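What the server_encoding option effectively triggers is recoding between the database charset and Unicode on every round trip. A minimal sketch of that conversion (in Python for illustration; LMS itself does this in PHP):

```python
# Sketch of the LATIN2 <-> Unicode recoding implied by
# server_encoding = latin2 (assumed behaviour, not actual LMS code).
text_from_db = "za\u017c\u00f3\u0142\u0107".encode("iso-8859-2")  # bytes as stored in a LATIN2 database
as_unicode = text_from_db.decode("iso-8859-2")    # what the Unicode UI works with
back_to_db = as_unicode.encode("iso-8859-2")      # what gets written back on save
```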
MySQL
Database size:
Following the MySQL documentation ("How Big Can MySQL Tables Be?" in the chapter "Table size"), MySQL 3.22 is restricted to 4 GB per table. Since 3.23 the restriction is 8 million terabytes (2^63 bytes). It is worth mentioning, however, that some systems have a limit at the filesystem level, usually at 2 or 4 GB.
Number of records:
The actual figures can be obtained by issuing (in the mysql shell):
mysql> show table status;
... | Avg_row_length | Data_length | Max_data_length | Index_length | ...
... |             44 |       24136 |      4294967295 |        19456 | ...
Note that the available space is about 175 000 times larger than what is currently used, so unless you plan to have 100000 users, you're pretty safe in this matter :-)
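The headroom estimate follows directly from the SHOW TABLE STATUS columns above:

```python
# Headroom estimate from the SHOW TABLE STATUS output quoted above.
avg_row_length, data_length, max_data_length = 44, 24136, 4294967295

headroom = max_data_length / data_length          # ~178 000x the current size
rows_possible = max_data_length // avg_row_length # ~97.6 million rows of this size
```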
PostgreSQL
Database size:
PostgreSQL stores data in 8 kB blocks. The number of blocks is bound to a signed 32-bit number, which gives a maximum table size of 16 terabytes. Filesystem restrictions are avoided by keeping the data in slices of 1 GB each.
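The 16-terabyte figure is just the block size multiplied by the 32-bit block-number range, which can be checked directly:

```python
# Worked arithmetic for the PostgreSQL table-size limit described above.
block_size = 8 * 1024          # 8 kB data blocks
max_blocks = 2**31             # positive range of a signed 32-bit block number
max_table = block_size * max_blocks   # 2^44 bytes = 16 terabytes

segment = 1 * 1024**3          # on disk, tables are split into 1 GB slices,
                               # sidestepping per-file filesystem limits
```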
Number of records:
PostgreSQL has no row number limit for tables; however, COUNT returns a 32-bit number, so for tables longer than 2 billion records this function will return a wrong value (at least in version 7.1).
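What "a wrong value" looks like can be illustrated by emulating a signed 32-bit counter. This is a generic overflow sketch, not a claim about the exact output of PostgreSQL 7.1:

```python
# Illustration of how a signed 32-bit result misreports large counts
# (generic two's-complement wraparound, assumed comparable behaviour).
def as_int32(n):
    """Truncate an integer to signed 32-bit two's-complement range."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

ok = as_int32(2_000_000_000)      # below 2^31 - 1: reported correctly
wrapped = as_int32(3_000_000_000) # above 2^31 - 1: wraps to a negative number
```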