
5. Objects and Privileges#

This chapter describes how to manage objects and privileges in Altibase.

Database Objects#

Database objects are classified as either schema objects, which are managed within a particular schema, or non-schema objects, which are managed by Altibase itself. This chapter describes the characteristics of both schema and non-schema database objects.

Schema Objects#

A schema is a logical group of database objects that are owned by a single user and managed using SQL statements. The objects included in a schema are called schema objects, and Altibase provides the following schema objects:

Tables#

Tables are the basic units of data storage in an Altibase database. Tables are groups of records, each comprising the same number of columns. In Altibase, tables are classified as either memory tables or disk tables, depending on the location in which their data are stored. Additionally, tables are classified as either system tables, which are created and managed by the system, or user tables, which are created and managed by general users.

Moreover, special consideration must be given to the management of tables which are to be replicated and tables containing very large amounts of data.

This is described in detail in the Tables section below.

Partitioned Tables#

When the data in a table are divided into pieces (each piece is called a partition) and stored in different tablespaces, the table is called a partitioned table. For large tables, partitioned tables can make data management much easier.

For more detailed information about partitioned tables, please refer to Chapter 7: Partitioned Objects.

Partitioned Indexes#

Depending on whether or not they are partitioned, indexes are classified as either partitioned or non-partitioned indexes. Non-partitioned indexes are indexes that are not partitioned. Partitioned indexes, like partitioned tables, are indexes that are divided into partitions based on partitioning conditions.

For more detailed information about partitioned indexes, please refer to Chapter 7: Partitioned Objects.

Temporary Tables#

Temporary tables are used to temporarily hold data for the duration of a session or transaction. The use of temporary tables can enhance the performance speed of complex queries.

Temporary tables can only be created in volatile tablespaces.

Queue Tables#

The Altibase message queuing function supports asynchronous data transfer between the database and client applications. Queue tables are database objects, just like other database tables, and thus can be controlled using DDL and DML statements.

The concept and function of the queue table is described in detail in the Queues section below.

Constraints#

Constraints are restrictions that are imposed when data are inserted or modified in order to maintain the integrity of data.

Constraints are classified as either column constraints or table constraints depending on the target of the constraint, and are also classified as one of the following kinds of constraints depending on the nature of the restrictions:

  • NOT NULL / NULL Constraints

  • CHECK Constraints

  • Unique Key Constraints

  • Primary Key Constraints

  • Foreign Key Constraints

  • TIMESTAMP Constraints

Indexes#

Indexes are optionally created in association with specific tables so that the data in those tables can be accessed more quickly, thereby improving the performance of DML statements.

Views#

Views do not themselves actually contain any data, but are logical tables that are constructed on the basis of one or more tables, materialized views, or other views.

Materialized Views#

A materialized view is a database object that stores query results as data. Data can be based on more than one table, a view, or another materialized view.

Sequences#

Altibase provides sequences for generating unique sequential numerical values.

Synonyms#

Synonyms are provided as aliases for tables, sequences, views, stored procedures, and stored functions so that these objects can be referenced without using their actual names directly.

Stored Procedure or Function#

A procedure or function is a schema object that consists of a set of SQL statements, flow control statements, allocation statements, and error-handling routines. Procedures and functions are permanently saved in the database and allow a complete set of operations corresponding to a single task to be executed merely by calling the relevant procedure or function by name.

Procedures and functions are distinguished from each other in that a function returns a value, while a procedure does not.

Type Sets#

Type sets are database objects that allow user-defined types used in stored procedures and stored functions to be gathered and managed in one place.

For more detailed information, please refer to the Stored Procedures Manual.

Database Triggers#

A trigger is a special kind of stored procedure that is executed by the system when data are inserted into a table, deleted from a table, or updated, so that a particular task can be automatically executed. Along with constraints, user-defined triggers can help maintain the consistency of the data stored in tables.

Database Links#

A database link unites disparate data sources on interconnected servers to produce a single unified result, even if the data are stored in different kinds of data servers that are physically far apart from one another.

For more detailed information, please refer to the Database Link User's Manual.

External Procedures or Functions#

External procedure and external function objects are database objects that correspond to user-defined C/C++ functions on a one-to-one basis. User-defined functions are executed through external procedure or external function objects. External procedures and external functions are differentiated by whether or not they return values.

For more detailed information, please refer to the C/C++ External Procedures Manual.

Libraries#

Library objects are database objects that store information about the dynamic (shared) libraries in the operating system that contain user-defined C/C++ functions. External procedure and external function objects call the user-defined functions contained in these libraries.

For more detailed information, please refer to the C/C++ External Procedures Manual.

Non-Schema Objects#

Non-schema objects are objects that are not assigned to any specific schema, but are managed at the database level. Altibase supports the following non-schema objects:

Directories#

Stored procedures are able to control files, which allows them to read from and write to text files in the file system managed by the operating system. Thanks to this functionality, the user can perform various kinds of tasks using stored procedures, such as writing messages to files, saving results to files, or reading data from files for insertion into tables. The directory object is used to manage information about the directories accessed by stored procedures.

For more detailed information about the directory object, please refer to the SQL Reference.

For more detailed information on how to handle files using stored procedures, please refer to the Stored Procedures Manual.

Replications#

A replication is an object that maintains the consistency of the data in tables on different servers by automatically transferring data from a local server to a remote server.

For more detailed information on how to manage replication, please refer to the Replication Manual.

Tablespaces#

A database is divided into multiple tablespaces, which are the largest logical data storage unit.

Tablespaces are broadly classified as memory tablespaces and disk tablespaces based on the location in which data are stored. Every database contains the system tablespaces, which are created automatically when a database is created and cannot be deleted. Additionally, you are free to create or delete user tablespaces as required.

For more detailed information on how to manage tablespaces, please refer to Chapter 6: Tablespaces.

Users#

User accounts are necessary in order to connect to Altibase and to function as the owners of schemas. Users are created in the database and are classified either as system users, who manage the system, or as general users. General users require suitable privileges in order to connect to the database and perform operations on data.

User permissions are explained in detail in the following sections on Privileges and Roles.

Job#

A JOB is a stored procedure to which an execution schedule has been added. The stored procedure to be executed, the execution time, the execution interval, and so on can be set when creating the JOB object. For a created JOB to run automatically, the JOB_SCHEDULER_ENABLE property must be set to 1. Only the SYS user can create, alter, and delete JOBs and manage the JOB scheduler.

This is explained in detail in the following section on JOB.


Tables#

Tables are the basic units of data storage in an Altibase database. They are constructed of columns and contain multiple rows. This section defines the terminology related to tables and explains the concepts and methods pertaining to table management.

Memory Tables and Disk Tables#

Tables are classified as either memory tables or disk tables, depending on where the data they contain are stored. When a table is created, whether it is a memory table or a disk table (that is, whether it is to be maintained in memory tablespace or disk tablespace) must be specified.

System Tables and User Tables#

Tables are additionally classified as either system tables, which are internally created and managed by the system, or user tables, which are created and managed by users.

System tables, which are also known as the data dictionary, are further classified as either meta tables, in which information about database objects is stored, or process tables, in which information about processes is stored. Process tables are further classified as either static tables or performance views.

For more detailed information on the data dictionary, please refer to the Altibase Data Dictionary.

Large Memory Tables#

Before executing SQL statements on large memory tables, it is important to understand the following:

Altering the Specifications of Large Memory Tables Using DDL#

When it is desired to execute a DDL statement on a table containing a large amount of data, rather than executing an ADD COLUMN or DROP COLUMN statement directly on the table, it is preferable to use the iLoader utility to download the data from the table, drop the table, create the table again with the new schema, and then use the iLoader utility to populate the table with the downloaded data.

Manipulating Data in Large Memory Tables Using DML#

Executing DML statements on tables that do not contain much data does not cause a big problem from the viewpoint of Altibase performance or usage, as long as the data are properly managed. However, when even a single UPDATE or DELETE DML statement affects a large number of records in a table, the transaction associated with this DML statement can take a long time to execute. The occurrence of such slow transactions can cause the following serious problems, which negatively affect the use of Altibase:

Exclusive Access to the Table#

If a transaction takes a long time to process, other transactions attempting to access the table will be suspended because of the lock held by the transaction that is taking a long time. Moreover, if the size of records being changed exceeds the size specified by the LOCK_ESCALATION_MEMORY_SIZE property in the altibase.properties file, lock escalation can occur, in which case, even other transactions that merely intend to read data can fail to gain access to the table.
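
The current value of this property can be checked at runtime. The following query is a minimal sketch; it assumes that the V$PROPERTY performance view exposes NAME and VALUE1 columns:

SELECT name, value1
  FROM v$property
 WHERE name = 'LOCK_ESCALATION_MEMORY_SIZE';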

Increased Altibase Memory Usage#

In Altibase, an SCN (System Commit Number) is used so that the garbage collector can determine which versions of records are to be deleted. The garbage collector only deletes records that have SCNs lower than the SCNs that are being used by transactions that have not been committed. Therefore, transactions that are taking a long time to execute can fool the garbage collector into believing that there are no records to delete, and thus unnecessary records will not be deleted.

When bulk update/delete transactions take a long time to execute, the garbage collector stops working, and unnecessary versions of records accumulate, which increases the size of the database as well as the amount of memory consumed by Altibase.

Accumulation of Log Files#

Log files created by transactions, aside from those logs that are necessary for replication or for restart recovery, are deleted from disk when checkpointing occurs. The log file that is necessary for restart recovery is the oldest of the log files created by transactions that were underway at the time that checkpointing occurred.

Therefore, even after checkpointing has occurred, a transaction that takes a long time to execute can prevent the removal of log files that are unnecessary for restart recovery, and thus the file system in which the log files are saved may become incapable of storing any additional log files.

Multiplexed Page Lists#

In the case of memory tables, when the log file group feature is enabled, the number of page lists that is created is the same as the number of LFGs.

For more detailed information about the Log File Group functionality, please refer to the Tuning Altibase.

Replicated Tables#

In Altibase, DDL statements can be executed on tables that are to be replicated, but the following properties must first be set as below:

  • Set REPLICATION_DDL_ENABLE to 1.

  • Set the REPLICATION property of the session, which is set using the ALTER SESSION SET REPLICATION statement, to a value other than NONE.

For more detailed information about managing replicated tables, please refer to the Replication Manual.
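
The following sketch shows the general shape of these settings before executing DDL on a replicated table; the table name rep_t1 and the session value shown are illustrative assumptions, so the exact values accepted by ALTER SESSION SET REPLICATION should be confirmed in the Replication Manual:

ALTER SYSTEM SET REPLICATION_DDL_ENABLE = 1;
ALTER SESSION SET REPLICATION = DEFAULT;      -- any value other than NONE
ALTER TABLE rep_t1 ADD COLUMN (c9 INTEGER);   -- rep_t1 is a hypothetical replicated table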

Creating Tables#

Tables can be created using the CREATE TABLE statement.

When creating a table, you can specify column definitions, constraints, the tablespace to store the table, the maximum number of records that can be inserted into the table, how to use space in pages, the data access mode for the table or its partitions, etc.

Example#


CREATE TABLE book
(  
  isbn  CHAR(10)    CONSTRAINT const1 PRIMARY KEY,
  title             VARCHAR(50),
  author            VARCHAR(30),
  edition           INTEGER DEFAULT 1,
  publishingyear    INTEGER,
  price             NUMBER(10,2),  
  pubcode           CHAR(4)
  ) MAXROWS 2 TABLESPACE user_data;

CREATE TABLE dept_c002 AS
SELECT * FROM employees 
 WHERE dno = 4002;

Considerations for Defining a Column in a Memory Table#

The user can specify a column of the VARCHAR data type as either FIXED or VARIABLE. If this is not set by the user and the length of the data is shorter than the value set in the MEMORY_VARIABLE_COLUMN_IN_ROW_SIZE property, the data are automatically stored in the FIXED area; otherwise, the data are stored in the VARIABLE area. For a FIXED type column, even though the type is VARCHAR, space for saving data up to the specified length is pre-allocated as it would be for a CHAR data type column, whereas for a VARIABLE type column, the amount of space that is allocated corresponds to the actual length of the data. When VARCHAR type data are compared, the columns are not padded with blank spaces, regardless of whether they are FIXED or VARIABLE type columns.

The following diagram shows how data are saved in columns declared as FIXED or VARIABLE. For a FIXED type column, even though the data type is VARCHAR, space in memory is pre-allocated just as for the CHAR data type, whereas for a VARIABLE type column, memory space corresponding to the actual length of the data is allocated.


Figure 5-1 VARCHAR Column Structure

Because the "name" column in the "item" table was declared as VARCHAR(20) of type FIXED, even though the actual data that are inserted ("msjung") is only 6 characters long, 20 bytes of space are allocated within the record.

In contrast, because the "description" column in the "item" table was declared as VARCHAR(1000) of type VARIABLE, the amount of space that is allocated to store the value is equal to the actual length of the value that is inserted ("variable test"), which is 13 characters long. However, this is not continuous space within the record; rather, it is a separately allocated space.

A column declared with the VARIABLE attribute of the VARCHAR data type keeps the location of the actual data in the record header. For each separately stored slot, a variable header of 16 bytes plus (n + 1) * 2 bytes of additional space for storing the positions of n columns is needed. Therefore, in the example above, the actual space used to store the value of the description column is 35 bytes.
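
Based on the description above, the item table could be declared as follows; this is a minimal sketch that assumes the FIXED and VARIABLE keywords are written immediately after the column type:

CREATE TABLE item
(
    name        VARCHAR(20) FIXED,       -- space for 20 bytes is pre-allocated in the record
    description VARCHAR(1000) VARIABLE   -- stored separately, sized to the actual data
);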

Altering Tables#

Using the ALTER TABLE and RENAME statements, table definitions can be altered in the following ways.

  • Changing table names
  • Adding new columns
  • Dropping existing columns
  • Specifying column default values
  • Changing column names
  • Adding constraints
  • Dropping constraints
  • Compacting memory tables
  • Increasing the maximum allowable record count
  • Enabling and disabling indexes
  • Changing the data access mode for tables or partitions

Example#
ALTER TABLE book ADD COLUMN ( isbn CHAR(10) PRIMARY KEY, edition INTEGER DEFAULT 1 );

ALTER TABLE book DROP COLUMN isbn;

ALTER TABLE department RENAME COLUMN dno TO dcode;

For more detailed information about the ALTER TABLE statement, please refer to the SQL Reference.

Dropping Tables#

Tables can be dropped (removed) using the DROP TABLE statement.

Example#
DROP TABLE employees;

Truncating Tables#

Table records can be deleted not only using the DELETE statement but also using the TRUNCATE TABLE statement. Internally, when the DELETE statement is executed, records are deleted one by one, whereas the TRUNCATE TABLE statement executes the DROP TABLE DDL statement and creates a new table having the identical specification.

Therefore, when executing the TRUNCATE TABLE statement, the entire table is first locked, and, after the TRUNCATE TABLE statement is successfully executed, the data cannot be restored using the ROLLBACK statement.
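
For example, the following statement removes all records from the employees table at once:

TRUNCATE TABLE employees;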

Data Manipulation#

The records in tables can be manipulated using the following DML statements:

  • INSERT

  • DELETE

  • UPDATE

  • SELECT

As mentioned above, because it is dangerous to perform bulk UPDATE/DELETE operations on a large amount of data while Altibase is running, it is recommended that applications written with Altibase CLI or the precompiler (APRE) perform UPDATE/DELETE operations on individual records and commit them one at a time.

The following is an example of a program authored using the C/C++ Precompiler that avoids bulk UPDATE/DELETE operations and UPDATEs records individually:

/* Bulk UPDATE statement to be avoided:
   UPDATE t1 SET col2 = 2 WHERE col1 > 1000; */

EXEC SQL BEGIN DECLARE SECTION;
int t1_col;
EXEC SQL END DECLARE SECTION;

EXEC SQL DECLARE update_cursor CURSOR FOR
    SELECT col1 FROM t1 WHERE col1 > 1000;

EXEC SQL OPEN update_cursor;

while (1)
{
    EXEC SQL FETCH update_cursor INTO :t1_col;
    if (sqlca.sqlcode == SQL_NO_DATA) break;

    /* Update and commit each record individually */
    EXEC SQL UPDATE t1 SET col2 = 2 WHERE col1 = :t1_col;
    EXEC SQL COMMIT;
}

EXEC SQL CLOSE update_cursor;

The following SQL statements are supported for use with tables. For more detailed information, please refer to the SQL Reference.

  • CREATE TABLE

  • ALTER TABLE

  • RENAME TABLE

  • TRUNCATE TABLE

  • LOCK TABLE

  • INSERT

  • DELETE

  • UPDATE

  • SELECT


Temporary Tables#

Temporary tables temporarily hold data for the duration of a session or transaction. These tables can improve the execution speed of complex queries, and users can use them to temporarily store the result sets of multiple DML operations.

Temporary table definitions can be viewed in all sessions. However, you can only view temporary table data in the session that inserted data into the table.

Indexes can be created on temporary tables. These indexes are also temporary, and their data remains valid at the same session or transaction scope as the table data.

Unlike normal tables, temporary tables and their indexes are not automatically allocated segments when the objects are created. Instead, segments are allocated when the first INSERT (or CREATE TABLE AS SELECT) operation is performed.

Only one temporary table is allowed for one transaction.

Temporary table data is temporary, and it is impossible to recover it from backups or system failures. Users should take appropriate action to preserve temporary table data.

Considerations#

Temporary tables can only be created in volatile tablespaces.

Creating Tables#

Temporary tables can be created using the CREATE [GLOBAL] TEMPORARY TABLE statement. The ON COMMIT clause specifies whether data is retained for the duration of a transaction (ON COMMIT DELETE ROWS) or for the duration of a session (ON COMMIT PRESERVE ROWS).

For more detailed information about this clause, please refer to the SQL Reference.

Since a temporary table can only be created in a volatile tablespace, users need to specify a volatile tablespace in the TABLESPACE clause.

Example#

Example: Create a temporary table whose rows are deleted when the transaction is committed

CREATE VOLATILE TABLESPACE my_vol_tbs SIZE 12M AUTOEXTEND ON MAXSIZE 1G;

CREATE TEMPORARY TABLE temp1(i1 INTEGER, i2 VARCHAR(10))
    ON COMMIT DELETE ROWS TABLESPACE my_vol_tbs;

Altering Tables#

A session-level temporary table allows DDL operations (ALTER TABLE, DROP TABLE, CREATE INDEX, etc.) only if it is not bound to a session.

A transaction-level temporary table allows DDL operations regardless of whether or not it is bound.

However, if a DDL statement is executed on a temporary table, its data disappears, because Altibase commits the data before performing DDL operations.

Example#

Example: Execute a DDL statement on a temporary table that is bound to a session.

CREATE VOLATILE TABLESPACE my_vol_tbs SIZE 12M AUTOEXTEND ON MAXSIZE 1G;

CREATE TEMPORARY TABLE temp1(i1 INTEGER, i2 VARCHAR(10))
    ON COMMIT PRESERVE ROWS TABLESPACE my_vol_tbs;

INSERT INTO temp1 VALUES (1, 'ABC');

ALTER TABLE temp1 ADD CONSTRAINT temp1_pk PRIMARY KEY (i1);
[ERR-31363 : Cannot execute DDL when a temporary table is in use.]

Dropping Tables#

Temporary tables can be dropped (removed) using the DROP TABLE statement.

Example#

Example: Drop the temporary table temp1.

DROP TABLE temp1;

Data Manipulation#

Like normal tables, temporary table data can be manipulated with the following DML statements:

The following SQL statements are supported for temporary tables. For more detailed information, please refer to the SQL Reference.

  • CREATE TABLE

  • ALTER TABLE

  • RENAME TABLE

  • TRUNCATE TABLE

  • LOCK TABLE

  • INSERT

  • DELETE

  • UPDATE

  • SELECT


Compressed Tables#

A compressed table is a table that has one or more compressed columns. If you create a table with a compressed column, the Altibase server automatically creates a dictionary table and a unique index to speed up SELECT operations. The dictionary table stores the actual column data, and one dictionary table is created for each compressed column. When data is inserted into or updated in a compressed column, the actual data is inserted into the dictionary table, whereas pointers (OIDs) that point to the actual data are stored in the compressed column. Regardless of whether a compressed table is a memory table or a disk table, its dictionary tables are created in memory tablespace.


[Figure 5-2] The Relationship between Compressed Columns and Dictionary Tables

A compressed column stores data in a separate table to prevent duplicate values, and this can decrease memory usage. However, memory usage can increase if there are few duplicate values, because additional storage space is consumed for compression.


Constraints#

  • Compressed columns can only be created in memory and disk tablespaces.
  • For disk compressed tables, if an OID is saved to a compressed column and the transaction is subsequently rolled back, the data inserted into the dictionary table is not deleted, but retained.

Creating Tables#

Like normal tables, compressed tables can be created with the CREATE TABLE statement. However, you need to specify the column to be compressed in the COMPRESS clause. For more detailed information about the COMPRESS clause, please refer to the SQL Reference.

Example#

Example: Compress the columns department and position, and create the compressed table emp.

CREATE TABLE emp 
(
    name       CHAR( 20 ),
    department CHAR( 20 ),
    position   CHAR( 20 )
) COMPRESS ( department, position );

Altering Tables#

Like normal tables, compressed table definitions can be changed with the ALTER TABLE and RENAME statements. You can add compressed columns with the COMPRESS clause.

Example#

Example: Add the salary and join_date columns to the table emp, but add join_date as a compressed column.

ALTER TABLE emp
  ADD COLUMN (salary NUMBER, join_date DATE)
COMPRESS (join_date);

Rebuilding Tables#

The ALTER TABLE table_name REORGANIZE statement drops unreferenced data from the dictionary table and reclaims storage space.

When users execute a DELETE or UPDATE statement on a compressed table, only new data is inserted into the dictionary table; the corresponding old data is not dropped or altered. If users execute DELETE or UPDATE statements multiple times on a compressed table, unreferenced data accumulates in the dictionary table. Rebuilding the compressed table removes this unnecessary data so that it does not consume memory storage space.

Example#

Example: Insert data into the compressed table emp, delete it, and then rebuild the table.

CREATE TABLE emp 
(
    name       CHAR( 20 ),
    department CHAR( 20 )
) COMPRESS ( department );


INSERT INTO emp VALUES ( 'Park',  'Personel' );
INSERT INTO emp VALUES ( 'Yun',   'Sales'    );
INSERT INTO emp VALUES ( 'Min',   'Personel' );

DELETE FROM emp WHERE name = 'Yun';

ALTER TABLE emp REORGANIZE COLUMN ( department );

Dropping Tables#

Compressed tables can be dropped (removed) using the DROP TABLE statement.

Example#

Example: Drop the compressed table compress_t1.

DROP TABLE compress_t1;

Data Manipulation#

Like normal tables, compressed table data can be manipulated with the following DML statements:

The following SQL statements are supported for compressed tables. For more detailed information, please refer to the SQL Reference.

  • CREATE TABLE

  • ALTER TABLE

  • RENAME TABLE

  • TRUNCATE TABLE

  • LOCK TABLE

  • INSERT

  • DELETE

  • UPDATE

  • SELECT


Queues#

The Altibase message queuing function supports asynchronous data transfer between the database and client applications. Queue tables are database objects, just like other database tables, and thus can be controlled using DDL and DML statements.

Creating Queues#

When the user uses the CREATE QUEUE statement to create a queue, the database creates a table having the name specified by the user. This is called a queue table. Queue tables have the following structure:

| Column Name | Type | Length | Default | Description |
|-------------|------|--------|---------|-------------|
| MSGID | BIGINT | 8 | - | A message identifier, set by Altibase automatically |
| CORRID | INTEGER | 4 | 0 | A message identifier set by the user |
| MESSAGE | VARCHAR | Message length | - | The message text |
| ENQUEUE_TIME | DATE | 8 | SYSDATE | The time that the message was added to the queue |

The user cannot freely change the name of the queue table or the names of its columns. A primary key is automatically created in the MSGID column.

The database internally creates a sequence called queue_name_NEXT_MSG_ID to generate unique MSGID values. The user can query the SYSTEM_.SYS_TABLES_ meta table to view information about this sequence.

Because this sequence must be maintained until the queue table is deleted, the DROP SEQUENCE statement cannot be used to explicitly remove it.

Queue tables are stored with type 'Q' in the SYSTEM_.SYS_TABLES_ meta table. Indexes can be created on queue tables as desired using the CREATE INDEX statement.
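
For example, the queue tables in a database can be listed with a query such as the following; this sketch assumes the TABLE_NAME column of the SYSTEM_.SYS_TABLES_ meta table:

SELECT table_name, table_type
  FROM system_.sys_tables_
 WHERE table_type = 'Q';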

Example#
CREATE QUEUE q1(40);

Altering Queues#

The structure of queue tables, which are created using the CREATE QUEUE statement, cannot be changed using an ALTER TABLE statement or the like. Queues can only be removed using the DROP QUEUE statement. However, the user can manipulate the data in queues using statements such as ENQUEUE/DEQUEUE, DELETE, SELECT, etc.

Removing Queues#

Queue tables can be removed from the database using the DROP QUEUE statement.

Example#
DROP QUEUE q1;

Deleting Data#

The TRUNCATE TABLE statement can be used when it is desired only to delete all of the messages loaded into a queue.

Example#
TRUNCATE TABLE q1;

Data Manipulation#

The records in queue tables can be manipulated using the following SQL statements:

  • ENQUEUE

  • DEQUEUE

  • DELETE

  • SELECT
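
The following is a minimal sketch of adding a message to, and then removing a message from, the queue q1 created above; the column list and exact clause support are assumptions, so please check the SQL Reference for the precise ENQUEUE/DEQUEUE syntax:

ENQUEUE INTO q1 (message) VALUES ('hello queue');
DEQUEUE message FROM q1;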

The following SQL statements are provided for use with queue tables. For more detailed information about queues, please refer to the SQL Reference.

  • CREATE QUEUE

  • DROP QUEUE

  • ENQUEUE

  • DEQUEUE


Constraints#

Constraints are limitations that govern the insertion of data into tables and the changes that can be made to existing data. This section explains the kinds of constraints and how to use them to ensure data consistency.

Types#

The following kinds of constraints are supported in Altibase:

NOT NULL/NULL#

NOT NULL is a constraint that prevents NULL from being inserted into a column. NOT NULL can be defined in column units. Specifying NULL allows NULL values. If NOT NULL is not specified for a column, NULL is allowed by default.

CHECK Constraints#

The CHECK constraint enables the user to enforce integrity rules on data. CHECK constraints specify conditions evaluating to TRUE, FALSE, or NULL on one or more columns. If the value of a column modified by a DML statement results in the condition of the constraint evaluating to FALSE, the statement is processed as an error.

The following limitations concern the conditions of CHECK constraints:

  • Synonyms, subqueries, sequences, pseudocolumns (such as LEVEL, ROWNUM, etc.), and non-deterministic SQL functions (such as SYSDATE, USER_ID, etc.) cannot be included.

  • The PRIOR operator cannot be used.

  • LOB type data cannot be used.

Multiple CHECK constraints can be defined on a single column; however, the user cannot specify the order in which CHECK constraints are evaluated. Altibase does not check the mutual exclusivity of CHECK constraints; the user must design CHECK constraints so that they do not conflict.

Because a DATE constant with an omitted year or month, as below, is valid in a CHECK constraint, caution is required:

  • If no year is entered, the current year is used by default.
  • If no month is entered, the current month is used by default.

UNIQUE#

This constraint, which can be defined for one or more columns, prevents the insertion of duplicate values into one or more columns. A unique index is created when a unique key constraint is defined.

PRIMARY KEY#

The Primary Key constraint can be thought of as a combination of the Unique Key constraint and the NOT NULL constraint. A primary key constraint can be defined for one or more columns. When it is created, a unique index is created internally. NULL values cannot be entered in any of the columns included in the primary key constraint.

FOREIGN KEY#

A FOREIGN KEY constraint requires each value in a column or set of columns to match a value in an associated table's UNIQUE or PRIMARY KEY. A FOREIGN KEY constraint helps protect referential integrity.

TIMESTAMP#

This constraint sets the value of a column to the system time when a new record is inserted, or an existing record is updated. A TIMESTAMP constraint is usually set for one column of a table that is replicated.

Column Constraints and Table Constraints#

A column constraint is a constraint that is set for a single column, whereas a table constraint is a single constraint that is set for the entire table, and applies to multiple columns in the table.

The NOT NULL/NULL and TIMESTAMP constraints can only be used as column constraints, but the other kinds of constraints can be set as either column constraints or table constraints.

Creating Constraints#

The user can define a constraint when creating a table using the CREATE TABLE statement or altering a table using the ALTER TABLE statement.

The user can specify the name of a constraint when defining the constraint. If the user does not set the name of the constraint, the system will automatically assign a name. If the constraint is of a type that requires an index, the system automatically creates the index and assigns a name to it.

Example#
CREATE TABLE inventory
(
    subscriptionid  CHAR(10),
    isbn            CHAR(10),
    storecode       CHAR(4),
    purchasedate    DATE NOT NULL,
    quantity        INTEGER CHECK(quantity < 1000),
    paid            CHAR(1),
    PRIMARY KEY(subscriptionid, isbn),
    CONSTRAINT fk_isbn FOREIGN KEY(isbn, storecode) REFERENCES book(isbn, storecode)
)
TABLESPACE user_data;

ALTER TABLE book ADD CONSTRAINT const1 UNIQUE(bno);

Dropping Constraints#

A constraint can be removed by using the ALTER TABLE statement.

Example#
ALTER TABLE book DROP UNIQUE(bno);

The following SQL statements are supported for use with constraints. For more detailed information, please refer to the SQL Reference.

  • CREATE TABLE

  • ALTER TABLE


Indexes#

Indexes allow the records in tables to be accessed more quickly. This section describes the types of indexes that are supported in Altibase and how to manage and use index objects.

Index Types#

Altibase supports two types of indexes: B-tree indexes and R-tree indexes. The R-tree index is a multi-dimensional index type for use with spatial queries.

B-tree Indexes#

B-tree indexes are used with all data types except the GEOMETRY data type, which is a spatial data type. B-tree indexes have historically been used with DBMSs, and many variants thereof have arisen over the years due to a large amount of research that has been conducted to date. Of these variants, Altibase supports the B+-tree index type.

A B+-tree index consists of leaf nodes at the lowest level, a root node at the highest level, and internal nodes in between. Key values exist only in the leaf nodes; the root and internal nodes contain separator keys that distinguish between their left and right child nodes.

R-Tree Indexes#

R-Tree Indexes are used with the GEOMETRY spatial data type.

When finding target objects using R-tree indexes, the following procedure is used:

  1. Conditional filtering is conducted using the MBR (Minimum Bounding Rectangle) that covers each spatial object.
  2. "Refinement", which is checking for accurate index search conditions about objects that remain after step 1, is conducted.

The algorithms for adding, deleting, splitting, and merging nodes in R-Tree Indexes are similar to those for B-tree Indexes, except that they are based on MBRs.

Index Attributes#

Based on how a key column is configured when an index is created and on the attributes of the key column, an index has the following attributes.

Unique Index#

This index prevents the use of duplicate values in indexed columns.

Unique Keys vs Primary Keys#

Unique Keys and Primary Keys are alike in that neither of them permits the existence of duplicate values. However, they differ in whether they permit NULL values. Primary Keys do not permit NULL values.

Non-unique Index#

This index type permits duplicate values in indexed columns. If the UNIQUE option is not specified when an index is created, the default is to allow duplicate values.

Non-Composite Index#

This kind of index is based on only one column.

Composite Index#

When a single index is created based on multiple columns, it is called a Composite Index.

Direct Key Index#

Whereas a normal index stores only record pointers in its index nodes, a direct key index stores the key itself (in whole or in part) together with the record pointer in the index node, which reduces index scan cost.
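
For example, a direct key index might be created as follows; this sketch assumes the DIRECTKEY keyword of the CREATE INDEX statement:

CREATE INDEX t1_direct_idx ON t1 (c1) DIRECTKEY;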

Index Management#

Indexes are used to enable quicker access to the records in tables. Because an index is an object that is physically and logically independent of a table, it can be built, dropped, or changed without consideration for the table on which it is based.

If table records are changed, the corresponding indexes are also changed. Therefore, the user should create indexes only when necessary and should modify or delete them so that they are managed optimally based on the way in which the associated tables are accessed.

Creating Indexes#

An index is created for one or more columns in the table. Indexes are automatically created when constraints are defined, or users can explicitly create indexes using the CREATE INDEX statement.

Example#

Example: Creating an index by defining a table constraint

CREATE TABLE t1 (c1 INTEGER PRIMARY KEY, c2 INTEGER UNIQUE);

Example: Creating an index by changing a table constraint

ALTER TABLE t1 ADD PRIMARY KEY (c1);
ALTER TABLE t1 ADD UNIQUE (c2);

Example: Specifying the order of columns when creating a composite index

CREATE INDEX t1_idx1 ON t1 (c1 ASC, c2 DESC);

Example: Creating an index using the INDEXTYPE option to specify the type of index

CREATE INDEX t1_idx1 ON t1 (c1) INDEXTYPE IS BTREE;

Example: Creating a unique index using the UNIQUE option

CREATE UNIQUE INDEX t1_idx1 ON t1 (c1);

Options for Creating Disk B-Tree Indexes (NOLOGGING, NOFORCE)#

When a disk B-tree index is built, a log is recorded so that it can be used to recover the index in the event of a system error. In order to reduce the size of logs and the amount of time taken to build an index, the NOLOGGING option can be specified when the index is created.

When the NOLOGGING option is used, all pages of an index are written to disk immediately after the index is built, thus ensuring the consistency of the index after it is built, even if a system fault occurs.

However, when indexes are created with the NOLOGGING option, if the NOFORCE option (which specifies that index pages are not to be written to disk immediately) is also specified, although the time required to build the index is reduced, index consistency may be lost if a system or media fault occurs. Media backup should be conducted in order to ensure the durability of indexes that are created with both the NOLOGGING and NOFORCE options.

| Option | Total Index Build Time | Consistency & Durability |
|--------|------------------------|--------------------------|
| LOGGING | Index Building Time + Logging Time | Recoverable when a system or media fault occurs |
| NOLOGGING FORCE | Index Building Time + Time Taken to Write Index to Disk | Recoverable when a system fault occurs, but consistency may be lost when a media fault occurs |
| NOLOGGING NOFORCE | Index Building Time | Consistency may be lost when a system or media fault occurs |

Example#

Example: Creating an index that is not logged (NOLOGGING) and that is written to disk after being built (FORCE)

CREATE INDEX t1_idx1 ON tb1(c1) NOLOGGING;
or
CREATE INDEX t1_idx1 ON tb1(c1) NOLOGGING FORCE;

Example: Creating an index that is not logged (NOLOGGING) and that is not written to disk after being built (NOFORCE)

CREATE INDEX t1_idx1 ON tb1(c1) NOLOGGING NOFORCE;

Modifying Indexes#

The attributes of an index can be changed using the ALTER INDEX statement.

Dropping Indexes#

An index can be removed explicitly using the DROP INDEX statement, or implicitly by removing the associated constraint.

Example#
DROP INDEX emp_idx1;

Using Indexes#

Bottom-Up Index Building#

In Altibase, indexes are built from the bottom up. Therefore, it is more efficient to build indexes after data have been uploaded. If a large volume of data is inserted into a table for which an index has been built, slow performance may result, because each time a record is inserted, the index will need to be changed to reflect this.

Disk Index Consistency#

For disk table indexes created with the NOLOGGING option, index consistency cannot be guaranteed in the event of a system or media fault. If such a fault occurs, use the V$DISK_BTREE_HEADER performance view to check the consistency of disk indexes. If an index for which IS_CONSISTENT is set to 'F' is found, delete the index and rebuild it when it is needed.
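
For example, indexes whose consistency may have been lost can be found with a query such as the following, which uses only the IS_CONSISTENT column mentioned above:

SELECT *
  FROM v$disk_btree_header
 WHERE is_consistent = 'F';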

Function-based Index#

A function-based index is an index that is created based on the result values of functions or expressions. If a query that includes an identical expression used to create a function-based index is processed, the function-based index is used, and faster query processing speed can be anticipated.
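
For example, an index created on the result of the UPPER function can be used by queries that filter on the same expression; the table and column names below are illustrative:

CREATE INDEX emp_upper_idx ON employees ( UPPER(e_lastname) );

SELECT * FROM employees WHERE UPPER(e_lastname) = 'BLAKE';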

The following SQL statements are supported for use with indexes. For more detailed information, please refer to the SQL Reference.

  • CREATE TABLE

  • ALTER TABLE

  • CREATE INDEX

  • ALTER INDEX

  • DROP INDEX


View#

A view is a presentation of data from one or more tables, materialized views or other views. A view contains no actual data, but rather presents data from the tables and views on which it is based. Views can be thought of as logical tables. This section describes how to manage views.

Base Tables and Views#

So-called "base tables" are just the objects (tables, materialized views, or other views) that views access, and from which they read data. More than one base table can be associated with a single view.

Creating Views#

Views can be created using the CREATE VIEW statement.

Example#
CREATE VIEW avg_sal AS
SELECT DNO
     , AVG(salary) emp_avg_sal  -- salary average of each department
  FROM employees
 GROUP BY dno;

Modifying Views#

Use the CREATE OR REPLACE VIEW statement to change the contents of an existing view, that is, change its underlying SELECT query statement.

Example#
CREATE OR REPLACE VIEW emp_cus AS
SELECT DISTINCT o.eno, e.e_lastname, c.c_lastname
  FROM employees e, customers c, orders o
 WHERE e.eno = o.eno AND o.cno = c.cno;

Compiling Views#

Because views are based on base tables, when the definition of a base table is changed using a DDL statement, any views based on the table may become invalid, that is, unable to be viewed. In such cases, the ALTER VIEW statement can be used with the COMPILE option to recompile the view so that it is valid.

Example#
ALTER VIEW avg_sal COMPILE;

Dropping Views#

Views can be removed using the DROP VIEW statement.

Example#
DROP VIEW avg_sal;

Data Manipulation#

Similar to normal tables, data in views can be queried with the SELECT statement and modified with the INSERT, UPDATE, or DELETE statements. Updatable views are views through which the data in the base tables can be modified by executing DML statements (INSERT, UPDATE, DELETE). Each row of an updatable view must have a one-to-one relationship with a row of the base table; views that contain any of the following cannot be modified:

  • Aggregate functions, analytic functions

  • DISTINCT, ROWNUM operators

  • GROUP BY, HAVING clause

  • Set operators, such as UNION or UNION ALL, etc

  • Subqueries or column operators in the select list

  • Unmodifiable views in the FROM clause

  • Subqueries in the WHERE clause referencing the table of the FROM clause

  • CONNECT BY or START WITH clause

Example#

Example: Create the updatable view simple_emp, and then execute UPDATE on this view. The value of the salary column differs before and after the UPDATE is executed.

CREATE VIEW simple_emp AS
SELECT eno, e_lastname, salary
  FROM employees;
Create success.   

SELECT * FROM simple_emp WHERE eno = 20; 
ENO         E_LASTNAME            SALARY
--------------------------------------------------
20          Blake
1 row selected.

UPDATE simple_emp SET salary = 2000 WHERE eno = 20;
1 row updated.

SELECT * FROM simple_emp WHERE eno = 20; 
ENO         E_LASTNAME            SALARY
--------------------------------------------------
20          Blake                 2000
1 row selected.

The following SQL statements are supported for use with views. For more detailed information on these statements, please refer to the SQL Reference.

  • CREATE VIEW

  • ALTER VIEW

  • DROP VIEW

  • SELECT

  • INSERT

  • DELETE

  • UPDATE


Materialized View#

A materialized view is a database object that stores query results as data. Data can be based on more than one table, view, or another materialized view. A materialized view cannot be replicated.

This section describes how to manage a materialized view.

Base Tables and Materialized Views#

A base table is an object (a table, materialized view, or view) that a materialized view accesses and reads data from. Multiple base tables can be associated with a single materialized view.

Altibase only supports read-only materialized views; updatable and writable materialized views are not supported.

Creating Materialized Views#

A materialized view can be created by using the CREATE MATERIALIZED VIEW statement. Although the tablespace in which the data is to be stored can be specified as for other tables, column definitions and constraints cannot be specified at creation time. After creation, however, column definitions can be altered and constraints added using the ALTER TABLE mview_name... statement.

Example#
CREATE MATERIALIZED VIEW avg_sal
TABLESPACE SYS_TBS_MEM_DATA
BUILD IMMEDIATE
REFRESH FORCE ON DEMAND
AS SELECT dno, AVG(salary) emp_avg_sal
FROM employees
GROUP BY dno;

Modifying Materialized Views#

The refresh method and time can be altered by using the ALTER MATERIALIZED VIEW statement.

The definition of a materialized view can be altered by altering the definition of the table that has the same name as the materialized view and that actually stores its data. If the table definition is altered in this manner, refreshing the materialized view may fail.

Example#

Example: Alter the refresh method of the materialized view.

 ALTER MATERIALIZED VIEW avg_sal REFRESH COMPLETE;

Example: Alter the definition of the materialized view.

ALTER TABLE avg_sal ADD PRIMARY KEY (dno);

Refreshing Materialized Views#

A user can manually refresh the data of a materialized view by using the REFRESH_MATERIALIZED_VIEW stored procedure.

Example#
EXEC REFRESH_MATERIALIZED_VIEW('SYS', 'AVG_SAL');

Dropping Materialized Views#

A materialized view can be deleted by using the DROP MATERIALIZED VIEW statement.

Example#
DROP MATERIALIZED VIEW avg_sal;

Truncating Materialized Views#

Data of the materialized view can be deleted by using the TRUNCATE TABLE statement.

Data Manipulation#

Altibase only supports SELECT for materialized views.

The following SQL statements are provided for the materialized view. For more detailed information, please refer to SQL Reference:

  • CREATE MATERIALIZED VIEW

  • ALTER MATERIALIZED VIEW

  • DROP MATERIALIZED VIEW

For a materialized view, the following SQL statements are provided for the table in which the actual data is contained. For more detailed information, please refer to the SQL Reference.

  • ALTER TABLE

  • TRUNCATE TABLE

  • LOCK TABLE

  • SELECT


Sequences#

In Altibase, the Sequence object is provided for use as a generator of sequences of unique numbers. Next sequence values can be cached to ensure consistent performance.

Using Sequences#

The sequence generator is particularly useful in multi-user environments for generating sequences of unique numbers without the overhead of disk I/O or transaction locking. For example, assume two users are simultaneously inserting new records into a table called "orders". By using a sequence to generate unique order numbers for the order_id column, neither user has to wait for the other to enter the next available order number; the sequence automatically generates a unique value for each user.

A sequence is generally used to generate a key value that is set in the desired column using a DML statement. The expressions [sequence_name].NEXTVAL and [sequence_name].CURRVAL are used to access the sequence.

  • [sequence_name].CURRVAL is used to obtain the current value in the sequence
  • [sequence_name].NEXTVAL is used to obtain the next value in the sequence

Immediately after a sequence is created, its [sequence_name].CURRVAL value cannot be used. In order to use [sequence_name].CURRVAL for a newly created sequence, [sequence_name].NEXTVAL must first be accessed.

Every time the sequence's next value is accessed, the value of the sequence is incremented internally by the specified amount. The increment of the sequence is explicitly specified using the INCREMENT BY option when the sequence is created, and defaults to 1 if not specified.
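
The following sketch illustrates this behavior for a newly created sequence, using the DUAL table:

CREATE SEQUENCE seq1;

SELECT seq1.CURRVAL FROM dual;   -- error: NEXTVAL has not yet been accessed
SELECT seq1.NEXTVAL FROM dual;   -- returns 1, the default starting value
SELECT seq1.CURRVAL FROM dual;   -- now returns 1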

Using Sequences in INSERT Statements#

The following example shows how to generate a key value using a sequence and insert it into a table:

CREATE SEQUENCE seq1; 
INSERT INTO t1 VALUES (seq1.NEXTVAL);

In the above example, assuming that the sequence has been newly created, its initial value of 1 will be entered into table t1, and seq1.nextval will increase from 1 to 2.

Creating Sequences#

The CREATE SEQUENCE statement is used to create a sequence. The following options can be used when creating a sequence:

START WITH#

This is the starting value of the sequence.

INCREMENT BY#

This is the amount by which the sequence increases or decreases.

MAXVALUE#

This is the maximum value of the sequence.

MINVALUE#

This is the minimum value of the sequence.

CYCLE#

This option is specified to ensure that the sequence will continue to generate values when it reaches its maximum or minimum value. The sequence cycles again from the minimum value in the case of an ascending sequence, or from the maximum value in the case of a descending sequence.

CACHE#

Sequence values can be created in advance and cached in memory so that they can be returned more quickly. The number of sequence values cached in this way is equal to the value specified using the CACHE option. The cache is populated when a key value is first requested from a new sequence, and is accessed every time the next key value is subsequently requested from the sequence. After the last sequence value in the cache has been used, the next request for a key value from the sequence causes new sequence values to be created and cached in memory. Then the first value is returned from this new cache. When a sequence is created, the default CACHE value is 20.

Example#

Example: Creating a basic sequence (starting from 1 and incrementing by 1)

 CREATE SEQUENCE seq1;

Example: Creating a sequence that generates even numbers and cycles from 0 to 100

CREATE SEQUENCE seq1
START WITH 0
INCREMENT BY 2
MAXVALUE 100
CYCLE ;
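
Example: Creating a sequence that caches values in advance (the cache size of 100 is illustrative)

CREATE SEQUENCE seq2
START WITH 1
INCREMENT BY 1
CACHE 100;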

Modifying Sequences#

All sequence options except for the START WITH value can be modified using the ALTER SEQUENCE statement.

Example#
ALTER SEQUENCE seq1
INCREMENT BY 1
MINVALUE 0
MAXVALUE 100;

Dropping Sequences#

Sequences can be removed as desired by using the DROP SEQUENCE statement.

Example#
DROP SEQUENCE seq1;

The following SQL statements are provided for use with sequences. For more detailed information, please refer to the SQL Reference.

  • CREATE SEQUENCE

  • ALTER SEQUENCE

  • DROP SEQUENCE


Synonyms#

Altibase supports the use of synonyms as aliases for tables, views, sequences, stored procedures, or stored functions.

Advantage of Synonyms#

Database synonyms are advantageously used in the following cases:

  • When it is desired to conceal the original name of a particular object or the identity of the user who created it
  • To simplify the use of a SQL statement
  • To minimize the changes that must be made to applications in order for them to be used by various users.

Creating Synonyms#

Synonyms can be created by using the CREATE SYNONYM statement.

Example#

Example: To create the synonym "my_dept" as an alias for the table "dept"

CREATE SYNONYM my_dept FOR dept;

Dropping Synonyms#

Synonyms can be removed by using the DROP SYNONYM statement.

Example#

Example: To remove the synonym "my_dept".

DROP SYNONYM my_dept;

The following SQL statements are provided for use with synonyms. For more detailed information, please refer to SQL Reference.

  • CREATE SYNONYM

  • DROP SYNONYM


Stored Procedures and Functions#

A stored procedure is a set of SQL statements, flow control statements, assignment statements, error handling routines, etc. that are programmed in a single module that corresponds to a complete business task. The module is permanently stored in the database as a database object so that the entire business task can be conducted merely via the single action of calling the module on the server by name. This chapter describes how to manage stored procedures.

Stored procedures and stored functions are different in that stored functions return a value to the caller, while stored procedures do not. Because they are identical in all other respects, explanations of stored procedures can also be understood to apply to stored functions unless otherwise noted.

This chapter provides simple examples of how to manage stored procedures. For a more detailed explanation of the terminology, concepts, and management of stored procedures and stored functions, please refer to the Stored Procedures Manual.

Categories#

Stored Procedure#

A stored procedure is a database object that executes multiple SQL statements at one time in consideration of input, output, and input/output parameters according to conditions defined in its body. It does not have a return value, and thus sends values to the client using output or input/output parameters. Because it does not have a single return value, it cannot be used as an operand within an expression in another SQL statement.

Stored Function#

A stored function is identical to a stored procedure except that it has a return value. Because it differs from a stored procedure in this way, it can be used as an operand in an expression within another SQL statement, just like the functions provided by the system.
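
For example, a simple stored function can be created and then called within a SELECT statement; this is a minimal sketch using hypothetical names:

CREATE OR REPLACE FUNCTION get_double (p1 IN INTEGER)
RETURN INTEGER
AS
BEGIN
  RETURN p1 * 2;
END;
/

SELECT get_double(21) FROM dual;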

Typeset#

A typeset is a set of user-defined types used within a stored procedure. Typesets are usually used when stored procedures exchange user-defined types, that is, parameters and return values, with each other.

The following table shows the kinds of SQL statements that can be used with stored procedures.

[Table 5-1] Types of Stored Procedure Statements.

| Task | Statement | Description |
|------|-----------|-------------|
| CREATE | CREATE [OR REPLACE] PROCEDURE | Creates a new stored procedure or redefines an existing stored procedure |
| CREATE | CREATE [OR REPLACE] FUNCTION | Creates a new stored function or redefines an existing stored function |
| CREATE | CREATE [OR REPLACE] TYPESET | Creates or modifies a typeset |
| ALTER | ALTER PROCEDURE | If the definitions of objects referred to in a stored procedure are changed after the stored procedure has been created, the current execution plan tree may no longer be optimized. In such cases, this statement recompiles the stored procedure to create an optimized execution plan tree. |
| ALTER | ALTER FUNCTION | If the definitions of objects referred to in a stored function are changed after the stored function has been created, the current execution plan tree may no longer be optimized. In such cases, this statement recompiles the stored function to create an optimized execution plan tree. |
| DROP | DROP PROCEDURE | Removes a previously created stored procedure |
| DROP | DROP FUNCTION | Removes a previously created stored function |
| DROP | DROP TYPESET | Removes a previously created typeset |
| EXECUTE | EXECUTE | Executes a stored procedure or stored function |
| EXECUTE | [function name] | Executes a stored function within a SQL statement, just like a built-in function |

Creating Stored Procedures#

A stored procedure can be created using the CREATE PROCEDURE statement.

Example#
CREATE PROCEDURE proc1
(p1 IN INTEGER, p2 IN INTEGER, p3 IN INTEGER)
AS
  v1 INTEGER;
  v2 t1.i2%type;
  v3 INTEGER;
BEGIN
  SELECT *
  INTO v1, v2, v3
  FROM t1
  WHERE i1 = p1 AND i2 = p2 AND i3 = p3;

  IF v1 = 1 AND v2 = 1 AND v3 = 1 THEN
    UPDATE t1 SET i2 = 7 WHERE i1 = v1;
  ELSIF v1 = 2 AND v2 = 2 AND v3 = 2 then
    UPDATE t1 SET i2 = 7 WHERE i1 = v1;
  ELSIF v1 = 3 AND v2 = 3 AND v3 = 3 then
    UPDATE t1 SET i2 = 7 WHERE i1 = v1;
  ELSIF v1 = 4 AND v2 = 4 AND v3 = 4 then
    UPDATE t1 SET i2 = 7 WHERE i1 = v1;
  ELSE    -- ELSIF v1 = 5 AND v2 = 5 AND v3 = 5 then
    DELETE FROM t1;
  END IF;

  INSERT INTO t1 VALUES (p1+10, p2+10, p3+10);
END;
/

Creating or Replacing Procedures#

If a user wants to change a stored procedure's parameters or body while retaining the name of the existing stored procedure, the CREATE OR REPLACE PROCEDURE statement can be used to recreate the stored procedure.

Example#
CREATE OR REPLACE PROCEDURE proc1
(p1 IN INTEGER, p2 IN INTEGER, p3 IN INTEGER)
AS
  v1 INTEGER;
  v2 t1.i2%type;
  v3 INTEGER;
BEGIN

END;
/

Recompiling Stored Procedures#

If the definition of a table, sequence, or other stored procedure or function referenced by a stored procedure changes after the stored procedure was created, the stored procedure may become unable to execute. In this case, the stored procedure is said to be invalid.

For example, if an index that existed when the stored procedure was created is subsequently deleted, the previous execution plan, which was designed to access the table through that index, can no longer be used to access the table.

The ALTER PROCEDURE statement is used to recompile an invalid stored procedure and recreate a valid execution plan.

Example#
ALTER PROCEDURE proc1 COMPILE;

Dropping Stored Procedures#

Stored procedures can be removed using the DROP PROCEDURE statement.

Example#
DROP PROCEDURE proc1;

The following SQL statements are supported for use with stored procedures and stored functions. For more detailed information, please refer to the SQL Reference.

  • CREATE PROCEDURE

  • CREATE FUNCTION

  • CREATE TYPESET

  • ALTER PROCEDURE

  • ALTER FUNCTION

  • DROP PROCEDURE

  • DROP FUNCTION

  • DROP TYPESET

  • EXECUTE

  • [FUNCTION NAME]


Triggers#

A trigger is a special kind of stored procedure that is automatically executed (or "fired") by the system in order to accomplish a particular task when data are inserted into a table, deleted from a table, or modified. This section describes how to manage triggers.

Trigger Components#

The following trigger components determine when a trigger fires, whether it fires, and what it executes.

Trigger Event#

This is the SQL statement that causes the trigger to fire when executed.

Trigger Condition (WHEN Clause)#

This is the SQL condition that must be satisfied to fire the trigger.

Trigger Action#

This is the body of the stored procedure that the trigger executes when the trigger condition is TRUE.

Trigger Event#

One of three DML statements can be specified as the event that causes the trigger to fire:

DELETE#

Specify DELETE to tell the trigger to fire whenever a row is removed from the table using a DELETE statement.

INSERT#

Specify INSERT to tell the trigger to fire whenever a row is added to the table using an INSERT statement.

UPDATE#

Specify UPDATE to tell the trigger to fire whenever data in the table are changed using an UPDATE statement. If an OF clause is present in the UPDATE trigger event, the trigger fires only if data in the columns explicitly named in the OF clause are changed.

Note: In order to maintain the integrity of the database, changes made to tables by replication will not be processed as trigger events.

Creating Triggers#

Triggers can be created using the CREATE TRIGGER statement.

Example#
CREATE TRIGGER del_trigger
AFTER DELETE ON orders
REFERENCING OLD ROW old_row
FOR EACH ROW
AS 
BEGIN
    INSERT INTO log_tbl VALUES(old_row.ono, old_row.cno, old_row.qty, old_row.arrival_date, sysdate);
END;
/
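
As a variation on the example above, the following sketch uses an OF clause so that the trigger fires only when the qty column of the orders table is updated. The log table qty_log and its column layout are assumed for illustration only:

CREATE TRIGGER upd_qty_trigger
AFTER UPDATE OF qty ON orders
REFERENCING OLD ROW old_row NEW ROW new_row
FOR EACH ROW
AS
BEGIN
    -- Record the order number plus the old and new quantities.
    INSERT INTO qty_log VALUES(old_row.ono, old_row.qty, new_row.qty, sysdate);
END;
/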

Modifying Triggers#

The ALTER TRIGGER statement can be used to disable the execution of an existing trigger or to recompile an invalid trigger. When a trigger is first created, it is enabled by default. It can subsequently be disabled and re-enabled using the DISABLE and ENABLE clauses of the ALTER TRIGGER statement.

Example#
ALTER TRIGGER del_trigger DISABLE;
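
To re-enable the trigger, or to recompile it if it has become invalid, statements along the following lines can be used; the COMPILE clause is shown here as a sketch, by analogy with the ALTER PROCEDURE ... COMPILE example above:

ALTER TRIGGER del_trigger ENABLE;
ALTER TRIGGER del_trigger COMPILE;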

Dropping Triggers#

The DROP TRIGGER statement can be used to remove a trigger from the database.

Example#
DROP TRIGGER del_trigger;

The following SQL statements are supported for use with triggers. For more detailed information, please refer to SQL Reference.

  • CREATE TRIGGER

  • ALTER TRIGGER

  • DROP TRIGGER

Additionally, because a trigger is a kind of stored procedure, for a detailed description of the trigger body, please refer to the Stored Procedures Manual.


JOB#

Altibase provides the JOB object, which adds an execution schedule to a stored procedure. The stored procedure to be executed, the point in time of execution, the execution interval, and so on can be set when creating the JOB object.

Only the SYS user can create, alter, or drop a JOB, and only one procedure can be registered per JOB.

For the created JOB to run according to schedule, the job scheduler must be enabled. The operations of the job scheduler can be controlled with the JOB_SCHEDULER_ENABLE property.

This section explains how to control the job scheduler and also how to create, alter, and drop the JOB object.

Starting and Ending the Job Scheduler#

Creating a JOB object merely registers an execution schedule for a stored procedure; for registered JOBs to actually run according to that schedule, the job scheduler must be started.

Starting the Job Scheduler#

To start the job scheduler, first change the values of the following properties. If the value of the JOB_SCHEDULER_ENABLE property is set to 1, but the value of the JOB_THREAD_COUNT property is 0, the job scheduler will not start.

  • JOB_SCHEDULER_ENABLE (0: inactive (default), 1: active)

  • JOB_THREAD_COUNT (default: 0)
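
Before starting the scheduler, the current settings of these properties can be checked, for example with a query along these lines against the V$PROPERTY performance view (the exact columns returned may differ by version; the LIKE pattern is only for illustration):

SELECT * FROM v$property WHERE name LIKE 'JOB%';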

Ending the Job Scheduler#

To end the job scheduler, change the value of the JOB_SCHEDULER_ENABLE property to 0.

Constraints#
  • The job scheduler can only be controlled by the SYS user.

  • After changing the value of the JOB_THREAD_COUNT property, the server must be restarted for the change to take effect.

Example#

Example: Start the job scheduler so that registered JOBs can run.

ALTER SYSTEM SET job_scheduler_enable = 1;

Example: Terminate the operation of the job scheduler.

ALTER SYSTEM SET job_scheduler_enable = 0;

Creating Jobs#

A JOB can be created using the CREATE JOB statement. The stored procedure to be executed, the start time, and the execution interval can be specified. If the user name of the procedure to be registered is omitted, the procedure is assumed to belong to the SYS user.

When a JOB is created, it is in the DISABLE state by default. For the JOB to run according to its schedule, it must be changed to the ENABLE state.

Please refer to the CREATE JOB statement described in SQL Reference for more detailed information.

Constraints#
  • The JOB_SCHEDULER_ENABLE and JOB_THREAD_COUNT properties must be set to values other than 0 before creating JOBs. Please refer to the General Reference > Chapter 2. Altibase Properties for more detailed information on properties.

  • Only one procedure can be registered per JOB.

Example#

Example: Create a JOB that runs the procedure proc1 for the first time at the current point in time, then every hour, and ends after three days.

CREATE JOB job1
EXEC proc1 START sysdate
END sysdate + 3
INTERVAL 1 HOUR;

Example#

Example: Change job1 to the ENABLE state so that it can be executed by the scheduler.

ALTER JOB job1 SET ENABLE;

Altering Jobs#

With the ALTER JOB statement, the definition of a JOB can be altered. For more detailed information, please refer to the ALTER JOB statement described in the SQL Reference.

Example#

Example: Alter the start time of the JOB with the name job1 to January 1st, 2013.

ALTER JOB job1 SET START to_date('20130101','YYYYMMDD');

Dropping Jobs#

With the DROP JOB statement, a specified JOB can be dropped.

Example#

Example: Drop the JOB object, job1.

DROP JOB job1;

Checking the Job Log#

If the procedure of the most recently run JOB fails, the error code is stored in the ERROR_CODE column of the SYS_JOBS_ meta table, and detailed information, such as error messages, is stored in the trace log file (default: $ALTIBASE_HOME/trc/altibase_qp.log) set by the QP_MSGLOG_FILE property. However, trace logs are only written if TRCLEVEL 2 of the JOB module is turned on; the FLAG of TRCLEVEL 2 should therefore be checked with the following query.

SELECT * from V$TRACELOG WHERE MODULE_NAME = 'JOB' AND DESCRIPTION != '---';
MODULE_NAME           TRCLEVEL    FLAG                  POWLEVEL             DESCRIPTION           
---------------------------------------------------------------------------------------------------------
JOB                   1           O                     1                    JOB Trace Log
...
JOB                   99          SUM                   1                    Total Sum of Trace Log Values

If the FLAG of TRCLEVEL 2 is 'X', the trace logging level can be altered with the following statement:

ALTER SYSTEM SET JOB_MSGLOG_FLAG = <previous value + 2>;

The previous value can be checked by querying the POWLEVEL column of the record whose TRCLEVEL column has the value 99.
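
The stored error code itself can then be read directly from the meta table; a minimal sketch, assuming the SYS_JOBS_ meta table is owned by SYSTEM_ and exposes a JOB_NAME column alongside the ERROR_CODE column mentioned above:

SELECT job_name, error_code FROM system_.sys_jobs_;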

The following properties are related to the job scheduler:

  • JOB_SCHEDULER_ENABLE

  • JOB_THREAD_COUNT

  • JOB_THREAD_QUEUE_SIZE

Information about created JOBs can be viewed in the SYS_JOBS_ meta table. For more detailed information on the SYS_JOBS_ meta table, please refer to General Reference > Chapter 3. Data Dictionary.

The following SQL statements are provided; for more detailed information, please refer to SQL Reference.

  • CREATE JOB

  • ALTER JOB

  • DROP JOB


Database Users#

After a database is created, only the system administrators, SYSTEM_ and SYS, exist in the database. Because these users are DBAs (database administrators), general users must be created to construct schemas and manage schema objects. This section describes how to create and manage users.

The SYSTEM_ and SYS Users#

Database users can be classified into system administrators (created by the system when creating a database) and general users.

The system administrators comprise the SYSTEM_ user, who is the owner of meta tables and thus has the right to execute DDL and DML statements on meta tables, and the SYS user, a DBA (database administrator) who possesses all rights for normal tables and the right to conduct all tasks at the system level.

These users cannot be modified or removed using DDL statements.

Creating Users#

Users can be created by using the CREATE USER statement. It is necessary to have the CREATE USER system privilege in order to execute this statement. When a user is created using the CREATE USER statement, a password must be specified. Additionally, the default tablespace for user-created objects can also be specified.

Example#
CREATE USER user1 IDENTIFIED BY user123
DEFAULT TABLESPACE user_tbs_data
TEMPORARY TABLESPACE user_tbs_temp
ACCESS SYS_TBS_MEM_DATA ON;

Modifying Users#

The ALTER USER statement can be used to change a user's password or modify tablespace settings.

Example#

Example: To change the user's password

ALTER USER user1 IDENTIFIED BY user12345;

Example: To change a user's default tablespace

ALTER USER user1 DEFAULT TABLESPACE user_tbs_data1;

Example: To change a user's temporary tablespace

ALTER USER user1 TEMPORARY TABLESPACE user_tbs_temp1;

Example: To change a user's access rights for a particular tablespace

ALTER USER user1 ACCESS user2_temp_data ON;

Dropping Users#

To remove a user, use the DROP USER statement. Additionally, to simultaneously delete all of the objects owned by the user, use the CASCADE option. Executing the DROP USER statement without the CASCADE option while there are objects remaining in the user's schema will result in an error.

Example#
DROP USER user1 CASCADE;

The following SQL statements are provided for use on users. For more detailed information, please refer to the SQL Reference.

  • CREATE USER

  • ALTER USER

  • DROP USER


Privileges and Roles#

In order to access objects and data in a database, a user must have appropriate privileges. This section describes system privileges, object privileges, roles and how to manage them.

Types of Privileges and Roles#

Altibase supports system privileges, object privileges and roles.

System Privileges#

System privileges are generally managed by the DBA. Users with system privileges can perform all database-level tasks and access objects in all schemas.

A complete list of the system access privileges supported in Altibase is provided in the following table. For more detailed information about each privilege, please refer to the SQL Reference.

| System Privilege | SQL Statement |
|------------------|---------------|
| DATABASE | ALTER SYSTEM, ALTER DATABASE, DROP DATABASE |
| INDEX | CREATE ANY INDEX, ALTER ANY INDEX, DROP ANY INDEX |
| LIBRARY | CREATE LIBRARY, CREATE ANY LIBRARY, ALTER ANY LIBRARY, DROP ANY LIBRARY |
| PROCEDURE | CREATE PROCEDURE, CREATE ANY PROCEDURE, ALTER ANY PROCEDURE, DROP ANY PROCEDURE, EXECUTE ANY PROCEDURE |
| SEQUENCE | CREATE SEQUENCE, CREATE ANY SEQUENCE, ALTER ANY SEQUENCE, DROP ANY SEQUENCE, SELECT ANY SEQUENCE |
| SESSION | CREATE SESSION, ALTER SESSION |
| TABLE | CREATE TABLE, CREATE ANY TABLE, ALTER ANY TABLE, DELETE ANY TABLE, DROP ANY TABLE, INSERT ANY TABLE, LOCK ANY TABLE, SELECT ANY TABLE, UPDATE ANY TABLE |
| TABLESPACE | CREATE TABLESPACE, ALTER TABLESPACE, DROP TABLESPACE, MANAGE TABLESPACE |
| USER | CREATE USER, ALTER USER, DROP USER |
| VIEW | CREATE VIEW, CREATE ANY VIEW, DROP ANY VIEW |
| MISCELLANEOUS | GRANT ANY PRIVILEGES |
| TRIGGER | CREATE TRIGGER, CREATE ANY TRIGGER, ALTER ANY TRIGGER, DROP ANY TRIGGER |
| MATERIALIZED VIEW | CREATE MATERIALIZED VIEW, CREATE ANY MATERIALIZED VIEW, ALTER ANY MATERIALIZED VIEW, DROP ANY MATERIALIZED VIEW |
| ROLE | CREATE ROLE, DROP ANY ROLE, GRANT ANY ROLE |
| SYNONYM | CREATE ANY SYNONYM, CREATE PUBLIC SYNONYM, CREATE SYNONYM, DROP ANY SYNONYM, DROP PUBLIC SYNONYM |
| JOB | ALTER ANY JOB, CREATE ANY JOB, DROP ANY JOB |
| DIRECTORY | CREATE ANY DIRECTORY, DROP ANY DIRECTORY |
| DATABASE LINK | CREATE DATABASE LINK, CREATE PUBLIC DATABASE LINK, DROP PUBLIC DATABASE LINK |

Object Privileges#

The privileges for access to objects are managed by the owner of the object. These privileges govern access to and manipulation of objects.

The object access privileges supported in Altibase are shown in the following table:

| Object Privilege | Table | Sequence | PSM/External Procedure | View | Directory | External Library |
|------------------|-------|----------|------------------------|------|-----------|------------------|
| ALTER | O | O | | | | |
| DELETE | O | | | | | |
| EXECUTE | | | O | | | O |
| INDEX | O | | | | | |
| INSERT | O | | | | | |
| REFERENCES | O | | | | | |
| SELECT | O | O | | O | O | |
| UPDATE | O | | | O | | |

Roles#

A role is a group of privileges; by using roles, you can easily grant multiple privileges to users. For further information on roles and their restrictions, please refer to the SQL Reference.
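
As a simple sketch, a role can be created, granted a set of privileges, and then granted to a user; role1 and user1 here are arbitrary example names:

CREATE ROLE role1;

-- Collect the privileges in the role, then grant the role to a user,
-- who thereby receives all of the role's privileges at once.
GRANT SELECT, DELETE ON sys.employees TO role1;
GRANT role1 TO user1;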

Granting Privileges#

The GRANT statement is used to explicitly grant privileges to specific users or roles.

The SYSTEM_ and SYS users have all privileges as database administrators and can grant any privilege to a general user or role.

If a general user is created using the CREATE USER statement, the system automatically grants the following minimum privileges:

  • CREATE SESSION

  • CREATE TABLE

  • CREATE SEQUENCE

  • CREATE PROCEDURE

  • CREATE VIEW

  • CREATE TRIGGER

  • CREATE SYNONYM

  • CREATE MATERIALIZED VIEW

  • CREATE LIBRARY

Example#

Example: Granting system privileges

GRANT ALTER ANY SEQUENCE, INSERT ANY TABLE, SELECT ANY SEQUENCE TO uare5;
GRANT ALTER ANY SEQUENCE, INSERT ANY TABLE, SELECT ANY SEQUENCE TO role1;

Example: Granting object privileges

GRANT SELECT, DELETE ON sys.employees TO uare8; 
GRANT SELECT, DELETE ON sys.employees TO role2;

Revoking Privileges#

Privileges or roles that have previously been granted to users can be removed using the REVOKE statement.

Example#

Example: Revoking system privileges

REVOKE ALTER ANY TABLE, INSERT ANY TABLE, SELECT ANY TABLE, 
DELETE ANY TABLE FROM uare10;
REVOKE ALTER ANY SEQUENCE, INSERT ANY TABLE FROM role1;

Example: Revoking object privileges

REVOKE SELECT, DELETE ON sys.employees FROM uare7, uare8;
REVOKE DELETE ON sys.employees FROM role2;

The following SQL statements are provided for use in managing privileges. For more detailed information, please refer to the SQL Reference.

  • CREATE ROLE

  • DROP ROLE

  • GRANT

  • REVOKE


  1. If an amount of memory corresponding to the size of data were allocated every time actual data were stored in a VARCHAR column of type VARIABLE, performance would be affected. Therefore, in Altibase, slots having internally determined sizes, such as 4kB, 8kB, and 16kB, are set aside in advance, and the server selects the optimal slot size for saving the data when entering data into a VARIABLE type VARCHAR column.