InterBase Configuration Parameters

by Ann Harrison

LOCK_MEM_SIZE

Parameters:

#V4_LOCK_MEM_SIZE    98304
#ANY_LOCK_MEM_SIZE   98304
Effect
LOCK_MEM_SIZE in any of its variants determines the amount of memory given to the lock table. In Classic mode, the size given is used for the initial allocation; the table expands dynamically up to the limit of memory. In SuperServer, the initial size is also the final size: you must restart the SuperServer to change the lock table size. The lock memory size is 98304 bytes by default.
Background

In all versions of InterBase except those that run on VMS, contention for resources is handled through a lock table maintained by InterBase. On VMS, InterBase uses the VMS lock manager. In Classic architecture, the lock table is kept in shared memory. In SuperServer, the table is part of the server itself.

Although InterBase does not use locks to resolve conflicts on individual rows, it does use locks to protect a page while it is being changed - not for the duration of a transaction, but for the duration of a change. InterBase also uses locks to allow one transaction to wait for another when an update conflict appears, and for a number of other situations that require synchronization.

Indications For Use

Conditions that affect the lock table size include:

  • The size of the page cache. Every page in the cache is locked at least once. Pages that are being read in a shared mode may be locked several times.
  • The number of simultaneous transactions. Each transaction takes out a lock on its own identity - this lock is used to synchronize transactions and to recognize when a transaction has ended without committing or rolling back.
  • Events. The event notification mechanism is based on locks. The number of events and the number of clients waiting on events affect the size of the lock table.
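
A hedged sketch of an ibconfig entry that enlarges the lock table (the 262144-byte figure is purely illustrative - size it to your own cache and transaction load; remove the leading # that comments the line out in the shipped file):

```
V4_LOCK_MEM_SIZE    262144
ANY_LOCK_MEM_SIZE   262144
```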

SEMAPHORE COUNT

Parameters:

#V4_LOCK_SEM_COUNT    32
#ANY_LOCK_SEM_COUNT   32

In non-threading environments, this sets the number of semaphores available to InterBase. The default semaphore count is system dependent:

EPSON SEMAPHORES      10
M88K SEMAPHORES       10
UNIXWARE SEMAPHORES   10
NCR3000 SEMAPHORES    25
SCO_UNIX SEMAPHORES   25
sgi SEMAPHORES        25
IMP SEMAPHORES        25
DELTA SEMAPHORES      25
Ultrix SEMAPHORES     25
DGUX SEMAPHORES       25
DECOSF SEMAPHORES     16
Other UNIX            32
Background
Semaphores are used for lock and event notification. In theory, InterBase should use very few semaphores and 32 should be plenty.
Indications For Use
If you see the error message "semaphores are exhausted" in the InterBase log file, increase the number of semaphores in increments of the normal cluster size, listed above as the default.
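
For example, on a platform whose default cluster size is 32, the next step up would be 64. A sketch of the ibconfig entries (the value is illustrative):

```
V4_LOCK_SEM_COUNT    64
ANY_LOCK_SEM_COUNT   64
```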

LOCK SIGNAL

Parameters:

#V4_LOCK_SIGNAL    16
#ANY_LOCK_SIGNAL   16
Effect
This changes the signal used to indicate lock conflicts.
Background

In Classic, when one process holds a lock on a page or other resource that another process needs, the second process signals the first. Change the signal used by setting either of these parameters (setting both is the better choice). The signal used by default is operating system dependent:

NETWARE_386   BLOCKING_SIGNAL   101
WINDOWS_ONLY  BLOCKING_SIGNAL   101
All Others    BLOCKING_SIGNAL   SIGUSR1
Indications For Use

Signals tend to be "noisy", meaning that several services may use the same signal. InterBase is designed to live with noisy signals. When it receives a signal, it passes the signal on to other handlers for the same signal, and it's not particularly upset to receive a signal and not find anything to do.

If another process on the system is using the same signal as InterBase and either failing to pass signals along, or becoming upset when it sees signals it can't account for, you will see InterBase connections hang or errors from the other process. In that case, you can use this parameter to choose another signal.
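
As an illustrative sketch, the entries below move InterBase from SIGUSR1 to SIGUSR2, which is signal 12 on many System V derived platforms - check the numbers in your own <signal.h> before using a value:

```
V4_LOCK_SIGNAL    12
ANY_LOCK_SIGNAL   12
```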

EVENT MEMORY SIZE

Parameters:

#V4_EVENT_MEM_SIZE     32768
#ANY_EVENT_MEM_SIZE    32768
Effect
This parameter sets the initial size for the memory allocated for the event table.
Background
The event table is held in mapped memory. In Classic, this space is created for each connection. In SuperServer, there is one space shared by all clients.
Indications For Use
The table expands dynamically, so there appears to be no reason to set this parameter.

DATABASE CACHE SIZE

Parameter:

#DATABASE_CACHE_PAGES               75
Effect
This parameter sets the number of pages from any one database that can be held in cache at once. If you increase this value, InterBase will allocate more pages to the cache for every database. By default, the SuperServer allocates 2048 pages for each database and Classic allocates 75 pages per client per database. On 16 bit Windows, the default is 50 pages.
Background

The cache holds pages read from the database and new pages being created to store in the database. Its purpose is to reduce the number of times a page is read or written by keeping it around until a commit or other event forces it to be written. The larger the cache, the more pages are kept in memory.

The minimum value is 50 and the maximum is 65,535. Empirically, values over 10,000 decrease performance.

You can increase the cache size on a database-by-database basis (in SuperServer) or per client/database pair (in Classic) through a connect parameter that is available through ISQL, the server manager, and IBConsole.

InterBase does not increase the cache size dynamically because an overly large cache can be as detrimental as one that is too small. For example, a mass insert works better with a relatively small cache because pages that have been filled are not revisited. Applications with frequently used look-up tables can use a larger cache to keep those tables in memory.

Indications For Use

If your server seems to be I/O bound and the number of cache pages allocated is less than 10,000, increasing the size of the cache may improve performance.
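
A sketch of an ibconfig entry for a more generous cache (4096 pages is an illustrative figure, comfortably under the empirical 10,000-page ceiling mentioned above):

```
DATABASE_CACHE_PAGES    4096
```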

SERVER PRIORITY CLASS

Parameter:

#SERVER_PRIORITY_CLASS      1
Effect
Sets the priority class for the SuperServer on Windows/NT/2000. Setting a value of 2 makes the process HIGH_PRIORITY_CLASS. All other values produce NORMAL_PRIORITY_CLASS. By default the parameter is set to 1.
Background
By increasing the process priority, you can cause the server to get more cycles per fortnight on a shared system. If you care about performance, you'd be better off putting the server on a dedicated mono-processor system. If you depend on the performance gains from running clients on the same system so they use a shared memory data transfer rather than TCP, get a multi-processor, tie the server to one processor and run the clients on another.
Indications For Use
I'm biased against this one, but if you want to try it, go ahead.

SERVER CLIENT MAPPING

Parameter:

#SERVER_CLIENT_MAPPING      4096
Effect
This parameter sets the size of the area reserved in the mapped space used on Windows systems to communicate between the server and a client running on the same machine. The default size is 4K.
Background

On Windows systems (and only on Windows systems) a client running on the same system with the server communicates through a shared memory region rather than TCP. Use this parameter to control the size of that region.

Memory is allocated in 1024 byte blocks. The acceptable range of values is between 1 and 16 1K blocks, so the value should be one of 1024, 2048, 3072, 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384. That value will be divided by 1024 internally. Why not just make the parameter 1-16? Who knows.
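
Since the value must be a whole number of 1K blocks, a trivial C sketch (the function is mine, not an InterBase API) makes the arithmetic explicit:

```c
#include <assert.h>

/* SERVER_CLIENT_MAPPING accepts 1 to 16 blocks of 1024 bytes each.
   Given a block count, return the value to write into ibconfig,
   or -1 if the count is out of range. */
int mapping_size_for_blocks(int blocks)
{
    if (blocks < 1 || blocks > 16)
        return -1;
    return blocks * 1024;   /* divided by 1024 again internally */
}
```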

Indications For Use
If you've got lots of memory and local clients, increasing the communications area might help.

SERVER WORKING SIZE

Parameters:

SERVER_WORKING_SIZE_MIN   0
SERVER_WORKING_SIZE_MAX   0
Effect
This sets the number of 1024 byte blocks available to the SuperServer as its working set size on Windows/NT/2000. If the values are checked for plausibility, I don't see where it's done. By default, both sizes are set to zero, which means no limits.
Background
By limiting the maximum working set size, you can cause the SuperServer to fall over dead before its time from lack of memory. By increasing the minimum working set size you can cause it to allocate memory it doesn't need.
Indications For Use
Raising the minimum working set size may eliminate some of the processing needed as the server grows itself. Setting the maximum working set size may keep the server from eating all the memory on a small-memory system. Don't run SuperServer on a small-memory system.

LOCK GRANT ORDER

Parameter:

#V4_LOCK_GRANT_ORDER        1
Effect
Sets the state of lock ordering. 1 is True and sets lock ordering on. 0 is false and turns lock ordering off. Lock ordering is on by default.
Background

Lock ordering is quite simple, once you understand quite a lot about locks. When a connection requests a lock on an object, it requests a specific level of lock:

#define LCK_none        0
#define LCK_null        1            /* Existence */
#define LCK_SR          2            /* Shared Read */
#define LCK_PR          3            /* Protected Read */
#define LCK_SW          4            /* Shared Write */
#define LCK_PW          5            /* Protected Write */
#define LCK_EX          6            /* Exclusive */

LCK_none is a request to convert an existing lock to no lock. LCK_null is an existence lock and is taken by a connection that doesn't care what happens to the object, as long as it doesn't disappear. It is the lock type used to ensure that indexes are preserved while compiled requests exist that depend on them. The interaction of the lock levels is described in the lock compatibility table.

In the table below, a 1 indicates that the locks are compatible, a 0 that they are not:

/*                           Shared   Prot   Shared   Prot
              none   null     Read    Read   Write    Write   Exclusive */
/* none */     1,     1,       1,      1,      1,       1,       1,
/* null */     1,     1,       1,      1,      1,       1,       1,
/* SR   */     1,     1,       1,      1,      1,       1,       0,
/* PR   */     1,     1,       1,      1,      0,       0,       0,
/* SW   */     1,     1,       1,      0,      1,       0,       0,
/* PW   */     1,     1,       1,      0,      0,       0,       0,
/* EX   */     1,     1,       0,      0,      0,       0,       0 }

When a connection wants to lock an object, it gets a lock request block which specifies the object and the lock level requested. Each locked object has a lock block. Request blocks are connected to those lock blocks either as requests that have been granted, or as pending requests.

Originally, if an object was locked for protected read by one connection, other connections that wanted protected read (or below) were moved to the front of the queue, since they could be granted immediately. Under heavy load, that caused connections that needed higher locks to wait indefinitely, because new readers were constantly arriving. When compatible lock requests are granted even though incompatible requests are waiting, lock ordering is off.

The current default is that locks are granted in the order they are requested. If an object is currently locked by one owner for protected read and another owner asks for a protected read lock, the second request is granted immediately because it is compatible with the existing lock. If a third owner requests a lock for protected write, that request must wait until the first two owners release their locks. If, during that wait, a fourth owner requests a protected read lock, that request will wait for the first two locks to be released and for the write lock to be taken and released. This is the behavior when lock ordering is on.
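
A minimal C sketch of the grant decision just described (the names and the single held/pending simplification are mine; the compatibility matrix is the one shown above):

```c
#include <assert.h>
#include <stdbool.h>

/* Lock levels as listed in the text. */
enum { LCK_none, LCK_null, LCK_SR, LCK_PR, LCK_SW, LCK_PW, LCK_EX };

/* The lock compatibility table: 1 = compatible, 0 = not. */
static const int compatible[7][7] = {
    /* none null SR  PR  SW  PW  EX */
    { 1,  1,  1,  1,  1,  1,  1 },  /* none */
    { 1,  1,  1,  1,  1,  1,  1 },  /* null */
    { 1,  1,  1,  1,  1,  1,  0 },  /* SR   */
    { 1,  1,  1,  1,  0,  0,  0 },  /* PR   */
    { 1,  1,  1,  0,  1,  0,  0 },  /* SW   */
    { 1,  1,  1,  0,  0,  0,  0 },  /* PW   */
    { 1,  1,  0,  0,  0,  0,  0 },  /* EX   */
};

/* Can a new request at 'level' be granted now?  'held' is a level already
   granted on the object, 'pending' a level waiting in the queue
   (use LCK_none for an empty slot - it is compatible with everything). */
bool can_grant(int level, int held, int pending, bool lock_ordering)
{
    if (!compatible[level][held])
        return false;               /* conflicts with a granted lock */
    if (lock_ordering && !compatible[level][pending])
        return false;               /* must queue behind an earlier waiter */
    return true;
}
```
With lock ordering off, a request is checked only against the granted locks; with ordering on, it also queues behind any waiter it conflicts with - exactly the fourth owner's fate in the example above.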

Indications For Use
If your write operations are lower priority than read operations, turning off lock ordering will improve the speed of readers, particularly long read transactions. Remember, though, that even readers change the database. At a minimum, they record their transaction state. Generally speaking, you will get better system throughput with lock ordering on.

LOCK HASH SLOTS

The lock hash slots parameter has been removed from the V6 version of ibconfig, at least in the version I see as part of the NT SuperServer. The code to read and interpret the parameter still exists.

Parameter:

#LOCK_HASH_SLOTS  101
Effect
This parameter determines the width of the hash table used to find locks on specific objects. The default is 101. The number should be prime to help the hash algorithm produce good distribution. It must be between 101 and 2048.
Background

Think of the hash table as a linear array with chains hanging down from every cell. The lock manager hashes the name of the object and takes that value mod the number of hash slots to determine the cell from which the lock will hang. When looking for a lock, it identifies the cell the same way, and then wanders down the chain, looking for an object with the right name. If there is more than one object with that name, it walks the "homonym" chain that hangs from the first object that matched the name.

OK, got that? The longer the chains hanging from each slot, the slower the lock manager will be. Lock print will show the minimum, maximum and average length of the chains. A good average length is under 10.

Indications For Use
The first indication is overall low performance on a system with lots of users and lots of cache pages. Run a lock print. If the average length is greater than 10, adjust the hash slots. As a start, you might multiply the average length by the current number of slots and divide by 9, then adjust up to the next prime number less than 2048. If you make this adjustment on a SuperServer, you should also increase the lock table size.
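
The rule of thumb above can be sketched in C (the helper names and the clamping behavior are my assumptions, not InterBase code):

```c
#include <assert.h>
#include <stdbool.h>

/* Simple trial-division primality test. */
static bool is_prime(int n)
{
    if (n < 2) return false;
    for (int i = 2; i * i <= n; i++)
        if (n % i == 0) return false;
    return true;
}

/* Suggested LOCK_HASH_SLOTS: scale the current slot count so the average
   chain length drops to about 9, round up to the next prime, and clamp
   to the legal 101..2048 range. */
int suggested_hash_slots(int current_slots, int avg_chain_len)
{
    int n = current_slots * avg_chain_len / 9;
    while (!is_prime(n))
        n++;
    if (n < 101)  n = 101;
    if (n > 2048) n = 2039;    /* largest prime below 2048 */
    return n;
}
```
For example, with the default 101 slots and an observed average chain length of 15, this suggests 173 slots.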

DEADLOCK TIMEOUT

Parameter:

#DEADLOCK_TIMEOUT 10
Effect
This parameter determines the number of seconds that the lock manager will wait after a conflict has been encountered before deciding that there is a potential deadlock.
Background
This parameter was tested extensively, nearly twenty years ago, on systems that were so slow that today they couldn't run a dishwasher in real time. At that time, 10 seconds was optimal. Any more often and the machine was eaten by deadlock scans. Any less often and the users broke into the lab and murdered the computers.
Indications For Use
Deadlocks are very uncommon in InterBase. The usual deadlock error, Update Conflict, is not a deadlock detected by the lock manager. It might be interesting to develop an actual deadlock case (A updates row 1, B updates row 2, then A tries to update row 2 and B tries to update row 1, all without any commits) and vary the deadlock timeout to see what the performance implication is.

LOCK ACQUIRE SPINS

Parameter:

LOCK_ACQUIRE_SPINS   0
Effect
On SuperServer - apparently none. In Classic, only one client process may access the lock table at any time. Access to the lock table is governed by a mutex. The mutex can be requested conditionally (a wait is a failure and the request must be retried) or unconditionally (the request will wait until it is satisfied). This parameter establishes the number of attempts that will be made conditionally. The default is zero.
Background
It appears that the mutex is requested conditionally some number of times, determined by the LOCK_ACQUIRE_SPINS parameter, then falls into a non-conditional request. The comment suggests that this might have some merit on SMP machines. I doubt it.
Indications For Use
None.

CONNECTION TIMEOUT

Parameter:

CONNECTION_TIMEOUT  180
Effect
Sets the connection timeout interval for a port. The default is 180 seconds.
Background

To detect clients that have disconnected abnormally, including Windows clients who have powered down their machines without closing the application, InterBase posts a dummy select with a timeout. If the select times out, InterBase sends a dummy packet to the client. If that send fails, InterBase drops the connection.

A timeout can also be declared in the dpb (database parameter block) with the option isc_dpb_connect_timeout.
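
As a hedged sketch of the dpb route: the helper below hand-assembles a dpb entry. dpb_put_int is my own illustrative helper, and the tag values are copied from ibase.h as I recall them - verify both against your own header before relying on this.

```c
#include <assert.h>

/* Tag values as I recall them from ibase.h - verify against your header. */
#define isc_dpb_version1          1
#define isc_dpb_connect_timeout  57

/* Append a tag with a 4-byte little-endian integer payload;
   returns the new length of the dpb. */
int dpb_put_int(unsigned char *dpb, int len, unsigned char tag, long value)
{
    dpb[len++] = tag;
    dpb[len++] = 4;                           /* payload length */
    for (int i = 0; i < 4; i++)
        dpb[len++] = (unsigned char)(value >> (8 * i));
    return len;
}
```
An attachment would start the buffer with isc_dpb_version1, append its entries, and pass the buffer and length to isc_attach_database().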

Indications For Use

The higher the value, the less dummy packet traffic you will see. On the other hand, "dead" connections will linger longer. The recommendation is to increase the value significantly if you are certain that clients will not disconnect abnormally.

DUMMY PACKET INTERVAL

Parameter:

DUMMY_PACKET_INTERVAL  60
Effect
This parameter determines how frequently dummy packets will be sent to verify that the client still exists. The default is sixty seconds.
Background

InterBase closes connections when the client has stopped responding. To detect that the client is no longer responding, it waits for a fixed interval (CONNECTION_TIMEOUT), then sends dummy packets to keep the attachment alive. When it receives an error on a send, it assumes that the client has died.

You can adjust the frequency with which dummy packets are sent either with this configuration parameter, or with the dpb (database parameter block) value isc_dpb_dummy_packet_interval.

Indications For Use

The higher the value, the less dummy packet traffic you will see. On the other hand, "dead" connections will linger longer. The recommendation is to increase the value significantly if you are certain that clients will not disconnect abnormally.

TMP DIRECTORY

Parameter:

TMP_DIRECTORY  <size> <quoted directory string>
TMP_DIRECTORY 20000 "/opt/interbase/tmp"
Effect
This parameter can be used an arbitrary number of times to specify locations for temporary files. The size is in bytes. If this configuration parameter does not exist, InterBase checks the following environment variables: INTERBASE_TMP, TMP, and TEMP. This parameter is available only in SuperServer.
Background

InterBase uses temporary files for a variety of operations, most significantly to hold intermediate sort results. This configuration parameter allows you to specify a list of directories to be used for temporary files. Repeat the parameter to supply additional directories.
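
As an illustration, here are two temporary directories with different space allowances (the paths and sizes are mine, not defaults):

```
TMP_DIRECTORY 500000  "/opt/interbase/tmp"
TMP_DIRECTORY 2000000 "/usr/tmp/interbase"
```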

Because I am less than 100% sure of my scanf proficiency, here's the way the value is parsed.

if ( (n = sscanf(buf + sizeof(ISCCFG_TMPDIR) - 1,
          " %ld \"%[^\"]", &size, dir_name)) == 2 )
Indications For Use
This is the way to provide a number of temporary directories and specify the amount of space that will be used in each.

EXTERNAL FUNCTION DIRECTORY

Parameter:

EXTERNAL_FUNCTION_DIRECTORY  <quoted directory string>
EXTERNAL_FUNCTION_DIRECTORY "/opt/interbase/my_functions"
Effect
This parameter can be used an arbitrary number of times to specify locations for user defined function libraries. If this configuration parameter does not exist, InterBase checks the default directories $INTERBASE/UDF and $INTERBASE/intl. This parameter is available only in SuperServer.
Background
InterBase looks in specific directories for libraries that it loads on reference. This parameter allows you to specify any number of directories in which InterBase will look for user defined function libraries or character set definitions. Repeat the parameter to supply additional directories.
Indications For Use
If you want to use more directories, or different directories than the default, this parameter is for you.

TCP REMOTE BUFFER

Parameter:

TCP_REMOTE_BUFFER   8192
Effect
This parameter establishes the maximum size of a TCP packet used in the client server interface. The legal values, in bytes, range from 1448 to 32768.
Background
InterBase reads ahead of the client and can send several rows of data in a single packet. The larger the packet size, the more data is sent per transfer.
Indications For Use
Heavy network traffic.

This paper was written by Ann Harrison in October 2000, and is copyright Ms. Harrison and IBPhoenix Inc. You may republish it verbatim, including this notation. You may update, correct, or expand the material, provided that you include a notation that the original work was produced by Ms. Harrison and IBPhoenix Inc.