
Monday, April 9, 2012

Export and Import



Oracle's export (exp) and import (imp) utilities are used to perform logical database backup and recovery. When exporting (exp), database objects are dumped to a binary file which can then be imported (imp) into another Oracle database.
These utilities can be used to move data between different machines, databases, or schemas. However, because they use a proprietary binary file format, they can only be used between Oracle databases. One cannot export data and expect to import it into a non-Oracle database.
Various parameters are available to control which objects are exported or imported. To get a list of available parameters, run the exp or imp utility with the help=y parameter.
Note: From version 10g onward, Oracle recommends using the Data Pump Export and Import tools (expdp/impdp), which are enhanced versions of the original Export and Import tools.
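As a pointer toward those tools, here is a minimal Data Pump sketch. It assumes a directory object (named dp_dir here, an illustrative name) pointing at the backup path, created and granted by a privileged user; the dump file and log file names are likewise illustrative:

```
SQL> create directory dp_dir as '/Oracle/logicalbackup';
SQL> grant read, write on directory dp_dir to user1;
$ expdp user1/user1 directory=dp_dir dumpfile=user1.dmp logfile=user1exp.log
$ impdp user1/user1 directory=dp_dir dumpfile=user1.dmp logfile=user1imp.log
```

Unlike exp/imp, Data Pump always reads and writes its files on the database server through such a directory object.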
A logical backup can be taken at three levels:

  1. Full database level
  2. User level
  3. Table level (individual tables)
A logical backup provides the following advantages:

  • It requires very little disk space to export the data
  • It is useful for migrating data from one database to another
  • It is useful for improving database performance through database reorganization
  • It is useful for migrating data from one Oracle version to another
  • It is useful for migrating data from one operating system to another
  • It is useful for moving tables from one user to another
Logical backups can be performed by exp and imp tools, and expdp and impdp tools.
Exp & imp tools:

Note: the database instance must be started before performing a logical backup.
$ su - oracle
$ sqlplus '/ as sysdba'
SQL> startup
SQL> select * from all_users;
SQL> select * from dba_users;
SQL> create user user1 identified by user1;
SQL> grant connect, resource to user1;



How to export a full database backup:

$ exp file=/Oracle/logicalbackup/dbfull.dmp log=dbfull.log full=y
Username: system/manager
$ ls dbfull.dmp

How to export a user-level backup:

$ exp file=/Oracle/logicalbackup/user1.dmp log=user1.log
Username: user1/user1
$ ls user1.dmp

How to export individual tables:

$ exp file=/Oracle/logicalbackup/user1emp.dmp tables=emp log=emp.log
Username: user1/user1

How to export multiple tables:

$ exp file=/Oracle/logicalbackup/user1tables.dmp tables=emp,dept,person
Username: user1/user1
$ ls user1tables.dmp
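When several parameters are needed, exp can also read them from a parameter file instead of the command line. A sketch, where the file name export.par is illustrative and the paths match the example above:

```
# export.par -- parameters for a multi-table export
file=/Oracle/logicalbackup/user1tables.dmp
log=user1tables.log
tables=emp,dept,person
```

It is then invoked as: $ exp user1/user1 parfile=export.par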
How to import the full database:

$ sqlplus '/ as sysdba'
SQL> startup
SQL> drop user user1 cascade;
SQL> exit
$ imp file=/Oracle/logicalbackup/dbfull.dmp full=y ignore=y
Username: system/manager

Note: ignore=y tells import to ignore object-creation errors (for example, when a table already exists) and continue loading rows into the existing object.

$ sqlplus '/ as sysdba'
SQL> select * from all_users;
 

 
How to import user1 objects into user2:

$ imp file=/Oracle/logicalbackup/user1.dmp fromuser=user1 touser=user2
Username: system/manager

How to import user1 objects into user2 without rows:

$ imp file=/Oracle/logicalbackup/user1.dmp fromuser=user1 touser=user2 rows=n
Username: system/manager

 
How to import individual tables:

$ sqlplus user1/user1
SQL> drop table emp;
SQL> select * from tab;
SQL> host
$ imp file=/Oracle/logicalbackup/user1emp.dmp tables=emp
Username: user1/user1
$ sqlplus user1/user1
SQL> select * from emp;


















Monday, April 2, 2012

Oracle Database Cloning


Oracle Database Cloning

* A database clone is a procedure performed by every DBA on a regular basis, or when there is a request to do so from other departments, i.e. test/development teams.
* Cloning is creating a copy of the production system in a test or development environment, i.e. having an exact image of the production database in the test area.
* Cloning is a procedure for preparing and creating test or development servers with a copy of the Oracle production database, for example for testing upgrades or migrating an existing system to new hardware.
* A cloning process includes a backup of the Oracle Home (directories and binaries) and of the database (database-related files) to prepare the instance on another server.
* Although it is possible to clone a database on the same server, Oracle does not recommend cloning on the server where the production database is running.
A database can be cloned using two methods:
1. Cloning using cold backup
2. Cloning using hot backup
Cloning using cold backup
Clone an Oracle Database using Cold Physical Backup
Database Name: PROD

Source Database side: (PROD database)

Cold Backup Steps:
1. Get the file path information using the queries below:

SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;

2. Take a parameter file backup.
If the PROD database is running on an spfile:
SQL> create pfile='/u01/backup/initPROD.ora' from spfile;

If the database is running on a pfile, copy the pfile to the backup path using an OS command.
3. Take the control file backup:
SQL> alter database backup controlfile to trace as '/u01/backup/control01.ora';
4. Shut down the database (SQL> shutdown immediate).

5. Copy all the datafiles and logfiles to the backup path using OS commands.

6. Start up the database (SQL> startup).
Clone Database side: (Clone database)

Database Name: TEST

Clone Database Steps:
1. Create the appropriate folders (bdump, udump, create, pfile, cdump, oradata) in the corresponding paths and place the backup files in the corresponding folders.

2. Change the init.ora parameters such as the control file path, db name, instance name, etc.

3. Create the password file using the orapwd utility.

(On Windows, we also need to create the service using the oradim utility.)

4. Start up the database in NOMOUNT stage.

5. Create the control file for the clone database.

Use the backup controlfile trace to generate the CREATE CONTROLFILE script.
Change the database name and file paths, and change 'REUSE' to 'SET'.


CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS FORCE LOGGING NOARCHIVELOG
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 453
LOGFILE
GROUP 1 '/U01/oradata/clone/redo01.log' SIZE 200M,
GROUP 2 '/U01/oradata/clone/redo02.log' SIZE 200M,
GROUP 3 '/U01/oradata/clone/redo03.log' SIZE 200M
DATAFILE
'/U01/oradata/clone/system01.dbf',
'/U01/oradata/clone/undotbs01.dbf',
'/U01/oradata/clone/users01.dbf'
CHARACTER SET WE8ISO8859P1;
Note: run this script at the SQL prompt. The control file is now created.
6. Now open the database:

SQL> alter database open resetlogs;

Note: check the logfile, datafile, and instance status.
Cloning using Hot backup
Database Name: PROD

Database must be in Archive log mode.


Source Database side: (PROD database)

Hot Backup Steps:
1. Get the file path information using the query below:
SQL> select tablespace_name, file_name from dba_data_files order by 1;

2. Take a parameter file backup.

If the PROD database is running on an spfile:

SQL> create pfile='/u01/backup/initPROD.ora' from spfile;

If the database is running on a pfile, copy the pfile to the backup path using an OS command.

3. Put the database in begin backup mode, then use OS commands to copy the datafiles, logfiles, and archives to the backup path. (Refer to the example below.)
4. Once the datafiles, logfiles, and archives are copied, release the database from begin backup mode with end backup.
5. Take the control file backup:
SQL> alter database backup controlfile to trace as '/u01/backup/control01.ora';

6. Back up all archive log files generated between the previous backup and the new backup as well.

Example:
1. SQL> alter database begin backup;
2. Copy all datafiles, logfiles, and archives.
3. SQL> alter database end backup;
Clone Database side: (TEST database)

Database Name: TEST

Clone Database Steps:

1. Create the appropriate folders (bdump, udump, create, pfile, cdump, oradata) in the corresponding paths and place the backup files in the corresponding folders.

2. Change the init.ora parameters such as the control file path, db name, instance name, etc.

3. Create the password file using the orapwd utility.

(On Windows, we also need to create the service using the oradim utility.)

4. Start up the database in NOMOUNT stage.

5. Create the control file for the clone database.

Use the backup controlfile trace to generate the CREATE CONTROLFILE script.
Change the database name and file paths, and change 'REUSE' to 'SET'.

CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS FORCE LOGGING NOARCHIVELOG
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 453
LOGFILE
GROUP 1 '/U01/oradata/clone/redo01.log' SIZE 200M,
GROUP 2 '/U01/oradata/clone/redo02.log' SIZE 200M,
GROUP 3 '/U01/oradata/clone/redo03.log' SIZE 200M
DATAFILE
'/U01/oradata/clone/system01.dbf',
'/U01/oradata/clone/undotbs01.dbf',
'/U01/oradata/clone/users01.dbf'
CHARACTER SET WE8ISO8859P1;
Note: run this script at the SQL prompt. The control file is now created.
6. Recover the database using the backup controlfile:

SQL> recover database using backup controlfile until cancel;

7. Now open the database:

SQL> alter database open resetlogs;

Note: check the logfile and datafile status.

USER MANAGED BACKUP AND RECOVERY



User Managed Backup

A backup is a copy of our database that protects against database failures; with that backup we can recover the database whenever it crashes. There are two types of backups:
• Physical Backup
  1. Cold Backup (Consistent Backup)
  2. Hot Backup (Inconsistent Backup)
• Logical Backup
  1. Export / Import
  2. Expdp / Impdp (available from Oracle 10g onward)
First we look at physical backup; logical backup is discussed later.
Physical Backup

Cold Backup: (Consistent Backup)

A cold (or offline) backup is a backup performed while the database is offline and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. It is easier to restore from offline backups, as no recovery (from archived logs) is required to make the database consistent.
Why is a cold backup called a consistent backup?
The cold backup is taken after the database has been shut down, so at that point the SCNs in all datafile headers are the same. When we restore a cold backup, no redo is needed for recovery. The backup includes the datafiles, logfiles, controlfile, parameter file, and password file.

Cold Backup Steps:

1. Get the file path information using the queries below:
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;
2. Take a backup of the password file and parameter file (optional).
3. SQL> alter system switch logfile;
4. SQL> shutdown immediate;
5. Copy all the datafiles, control files, and logfiles to the backup path (any secure location) using OS commands.
6. Start up the database (SQL> startup).

Hot Backup: (Inconsistent Backup)
A hot (or online) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, one can only do online backups when the database is in ARCHIVELOG mode.
Why is a hot backup called an inconsistent backup?
The hot backup is taken while the database is running. During this time the datafile header SCNs of the tablespace being backed up do not match those of the other tablespaces' datafiles. When we restore a hot backup, redo is needed for recovery.
Hot Backup Steps:

1. First we have to check whether our database is in archive log mode or not:
SQL> archive log list;
If the database is in archive log mode, this shows archive log mode enabled and the location of the archive log files; if not, it shows archive log mode disabled.
If archive log mode is disabled, we have to shut down the database and enable archive log mode using the commands below:
SQL> shutdown immediate;
Open the parameter file of the database to edit the location of the archive logs:
$ vi initTEST.ora
log_archive_dest='/Oracle/product/10.2.0/db_1/admin/arch'
Save and exit (:wq)
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
Then check whether the database is in archive log mode:
SQL> archive log list;
2. Now we can take a complete database backup or a tablespace backup.
Suppose we want to take an online backup of the USERS tablespace. You can query the V$DATAFILE view to find the names of the datafiles associated with this tablespace. Let's suppose the file is '/u01/oradata/usr1.dbf'.

SQL> alter tablespace USERS begin backup;
SQL> host cp /u01/oradata/usr1.dbf /u02/backup
SQL> alter tablespace USERS end backup;
SQL> exit;

Similarly, for a complete database backup:
SQL> alter database begin backup;
Copy the datafiles, control files, redo logfiles, and archive log files to a secure location.
After the copy completes:
SQL> alter database end backup;
SQL> exit;

RECOVERING THE DATABASE IF IT IS RUNNING IN NOARCHIVELOG MODE

Option 1: When you don't have a backup

If you have lost one datafile, you don't have any backup, and the datafile does not contain important objects, you can drop the damaged datafile and open the database. You will lose all information contained in the damaged datafile.
The following are the steps to drop a damaged datafile and open the database.

STEP 1: First take a full backup of the database for safety.
STEP 2: Start sqlplus and give the following commands:

$ sqlplus
Enter User: / as sysdba
SQL> STARTUP MOUNT
SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE DROP;
SQL> alter database open;

Option 2: When you have the Backup

If the database is running in NOARCHIVELOG mode and you have a full backup, there are two options:
i. You can drop the damaged datafile, if it does not contain important information that you cannot afford to lose.
ii. Or you can restore from the full backup. You will lose all changes made to the database since the last full backup.
To drop the damaged datafile, follow the steps shown previously.
To restore from the full database backup, do the following.
STEP 1: Take a full backup of the current database.
STEP 2: Restore from the full database backup, i.e. copy all the files from the backup to their original locations.
Suppose the backup is in the "/u02/backup" directory. Then do the following:
$ cp /u02/backup/* /u01/ica
This copies all the files from the backup directory to their original destination. Also remember to copy the control files to all the mirrored locations.

RECOVERING FROM LOSS OF CONTROL FILE


If you have lost the control file and it is mirrored, simply copy the control file from a mirrored location to the damaged location and open the database. If you have lost all the mirrored control files but all the datafiles and logfiles are intact, you can recreate the control file.
If you have already taken a backup of the control file creation statement by giving the command "ALTER DATABASE BACKUP CONTROLFILE TO TRACE;" and you have not added any tablespace since then, just create the control file by executing that statement.
But if you have added any new tablespace after generating the create controlfile statement, you have to alter the script and include the filename and size of the new file.
Suppose your script file containing the control file creation statement is "CR.SQL".
Then do the following.
STEP 1: Start sqlplus
STEP 2: connect / as sysdba
STEP 3: Start and do not mount the database, like this.
SQL> STARTUP NOMOUNT
STEP 4: Run the "CR.SQL" script file.
STEP 5: Mount and Open the database.
SQL>alter database mount;
SQL>alter database open;
If you do not have a backup of the control file creation statement, you have to give the CREATE CONTROLFILE statement manually, writing out the file names and sizes of all the datafiles. You will lose any datafiles that you do not include.

Recovering Database when the database is running in ARCHIVELOG Mode

Recovering from the Loss of a Damaged Datafile

If you have lost one datafile, follow the steps shown below.


STEP 1: Shutdown the Database if it is running.
STEP 2: Restore the datafile from the most recent backup.
STEP 3: Then Start sqlplus and connect as SYSDBA.
$sqlplus
Enter User:/ as sysdba
SQL>Startup mount;
SQL>Set autorecovery on;
SQL>alter database recover;

If all archive log files are available, recovery should go smoothly. After you get the "Media recovery complete" message, go on to the next step.


STEP 4: Now open the database
SQL>alter database open;
Recovering from Lost Archived Files:
If you have lost the archived files, immediately shut down the database and take a full offline backup.

Time Based Recovery (INCOMPLETE RECOVERY)


Suppose a user has dropped a crucial table accidentally and you have to recover it.
You took a full backup of the database on Monday 13-Aug-2007; the table was created on Tuesday 14-Aug-2007 and thousands of rows were inserted into it. A user accidentally dropped the table on Thursday 16-Aug-2007, and nobody noticed until Saturday.
Now, to recover the table, follow these steps.
STEP 1: Shutdown the database and take a full offline backup.
STEP 2: Restore all the datafiles, logfiles and control file from the full offline backup which was taken on Monday.
STEP 3: Start SQLPLUS and start and mount the database.
STEP 4: Then give the following command to recover the database until the specified time:
SQL> recover database until time '2007-08-16:13:55:00'
         using backup controlfile;
STEP 5: Open the database and reset the logs, because you have performed an incomplete recovery:
SQL> alter database open resetlogs;
STEP 6: After the database is open, export the table to a dump file using the Export utility.
STEP 7: Restore from the full database backup which you took on Saturday (STEP 1).
STEP 8: Open the database and import the table.
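The UNTIL TIME clause expects its timestamp in Oracle's 'YYYY-MM-DD:HH24:MI:SS' layout. If you prefer to derive it from a readable timestamp, GNU date can produce that layout (a sketch assuming GNU coreutils is available):

```shell
# Format a human-readable timestamp the way
# RECOVER DATABASE UNTIL TIME expects it.
date -d "2007-08-16 13:55:00" +"%Y-%m-%d:%H:%M:%S"
# prints 2007-08-16:13:55:00
```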

Note: In Oracle 10g you can easily recover drop tables by using Flashback feature.

Managing the UNDO TABLESPACE


Managing the UNDO TABLESPACE
Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.
Undo records are used to:
  • Roll back transactions when a ROLLBACK statement is issued
  • Recover the database
  • Provide read consistency
  • Analyze data as of an earlier point in time by using Flashback Query
  • Recover from logical corruptions using Flashback features
Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.
Switching to Automatic Management of Undo Space
To switch to automatic management of undo space, set the following parameters.
Steps:
1. If you did not create an undo tablespace at database creation time, create one with the following command:
SQL> create undo tablespace myundo datafile
         '/u01/oracle/ica/undo_tbs.dbf' size 500M
         autoextend on next 5M;

When the system first runs in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for the datafiles of the undo tablespace so that they automatically grow when more space is needed.

2. Shut down the database and set the following parameters in the parameter file:
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo
3. Start the database.

Now Oracle Database will use Automatic Undo Space Management.
You can calculate space requirements manually using the following formula:
UndoSpace = UR * UPS + overhead
where:
  • UndoSpace is the number of undo blocks
  • UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.
  • UPS is the number of undo blocks generated per second
  • overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)
As an example, if UNDO_RETENTION is set to 3 hours and the transaction rate (UPS) is 100 undo blocks per second, with an 8K block size, the required undo space is computed as follows:
(3 * 3600 * 100 * 8K) = 8.24 GB
To get the values for UPS and overhead, query the V$UNDOSTAT view with the following statement:
SQL> select * from V$UNDOSTAT;
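The arithmetic in this example can be checked directly (a sketch; it assumes 8K means 8192 bytes and treats a GB as 2^30 bytes):

```shell
# Worked check of the undo sizing example:
# UNDO_RETENTION = 3 hours, 100 undo blocks/second, 8K block size.
UR=$((3 * 3600))     # retention in seconds
UPS=100              # undo blocks generated per second
BLOCK=8192           # block size in bytes
BYTES=$((UR * UPS * BLOCK))
echo "undo bytes: $BYTES"
awk -v b="$BYTES" 'BEGIN { printf "undo GB: %.2f\n", b / (1024*1024*1024) }'
# prints:
# undo bytes: 8847360000
# undo GB: 8.24
```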
Altering UNDO Tablespace
If the undo tablespace is full, you can resize its existing datafiles or add new datafiles to it.
The following example extends an existing datafile:
SQL> alter database datafile '/u01/oracle/ica/undo_tbs.dbf' resize 700M;
The following example adds a new datafile to the undo tablespace:

SQL> ALTER TABLESPACE myundo
     ADD DATAFILE '/u01/oracle/ica/undo02.dbf' SIZE 200M AUTOEXTEND ON
                   NEXT 1M MAXSIZE UNLIMITED;

Dropping an Undo Tablespace
Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace myundo:
SQL> DROP TABLESPACE myundo;
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails.
Switching Undo Tablespaces
You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.
The following statement switches to a new undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;

Assuming myundo is the current undo tablespace, after this command successfully executes, the instance uses myundo2 in place of myundo as its undo tablespace.
Viewing Information about Undo Tablespace
To view statistics for tuning the undo tablespace, query the following dictionary view:
SQL> select * from v$undostat;
To see how many active transactions there are, and to see undo segment information, give the following command:
SQL> select * from v$transaction;
To see the sizes of extents in the undo tablespace, give the following query:
SQL> select * from DBA_UNDO_EXTENTS;