Wednesday 26 November 2014

Difference between Conventional path Export & Direct path Export



Conventional path Export.

Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file.

 Direct path Export.


When using a direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file.

The DIRECT parameter specifies whether you use the direct path Export (DIRECT=Y) or the conventional path Export (DIRECT=N, the default).

 You may be able to improve performance by increasing the value of the RECORDLENGTH parameter when you invoke a direct path Export.  Your exact performance gain depends upon the following factors:
- DB_BLOCK_SIZE
- the types of columns in your table
- your I/O layout (the drive receiving the export file should be separate from the disk drive where the database files reside)

For example, invoking a direct path Export with a maximum I/O buffer of 64 KB can improve the performance of the Export by almost 50%. This is achieved by specifying the additional Export parameters DIRECT and RECORDLENGTH, as shown in the sketch below.
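A minimal sketch of such an invocation follows; the schema, table and file names are illustrative, and 65536 bytes corresponds to the 64 KB buffer mentioned above:

exp scott/tiger tables=EMP file=emp_direct.dmp log=emp_direct.log direct=y recordlength=65536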


LIMITATIONS

1)  A Direct path Export does not influence the time it takes to Import the data. That is, an export file created using direct path Export or Conventional path Export, will take the same amount of time to Import. 
2) You cannot use the DIRECT=Y parameter when exporting in transportable tablespace mode. You can use the DIRECT=Y parameter when exporting in full, user or table mode.
3) The QUERY and BUFFER parameters apply ONLY to conventional path Export. They cannot be specified in a direct path Export (DIRECT=Y).
4) A direct path Export can only export the data when the NLS_LANG environment variable of the session invoking the export matches the database character set. If NLS_LANG is not set (the default is AMERICAN_AMERICA.US7ASCII) or is set to a different character set, Export will display the warning EXP-41 and abort with EXP-0. See the example after this list.
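For example, you might check the database character set and set NLS_LANG to match before invoking the export (the character set shown here is only an assumption; use whatever the query returns for your database):

SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
exp scott/tiger tables=EMP file=emp_direct.dmp direct=y recordlength=65536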

I hope this article helped you. Your suggestions/feedback are most welcome.

Keep learning... Have a great day!!!

Monday 24 November 2014

Oracle Data Pump (expdp and impdp) in Oracle Database 10g and 11g




Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.

Getting Started.....

For the examples to work, we must first unlock the SCOTT account (you can use any schema, not just SCOTT) and create a directory object it can access. The directory object is only a pointer to a physical directory; creating it does not actually create the physical directory on the file system of the database server.

CONN / AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;


Existing directories can be queried using the ALL_DIRECTORIES view.
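For example:

SELECT directory_name, directory_path FROM all_directories;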

Note. Data Pump is a server-based technology, so it typically deals with directory objects pointing to physical directories on the database server. It does not write to the local file system on your client PC.

Table Exports/Imports

The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax.

expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
For example output files see expdpEMP_DEPT.log and impdpEMP_DEPT.log.

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
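For example, appending the rows from the dump file above into the existing EMP and DEPT tables:

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log table_exists_action=APPEND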

Schema Exports/Imports

The OWNER parameter of exp has been replaced by the SCHEMAS parameter, which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
For example output files see expdpSCOTT.log and impdpSCOTT.log.

Database Exports/Imports

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax.

expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
For an example output file see expdpDB10G.log.

INCLUDE and EXCLUDE

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export/import. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export/import. The two parameters are mutually exclusive, so use the parameter that requires the fewest entries to give you the result you require. The basic syntax for both parameters is the same.

INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]
The following code shows how they can be used as command line parameters.

expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

If the parameter is used from the command line, depending on your OS, the special characters in the clause may need to be escaped, as follows. Because of this, it is easier to use a parameter file.

include=TABLE:\"IN (\'EMP\', \'DEPT\')\"
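A parameter file avoids the escaping altogether. A minimal sketch, assuming a file called expdp_scott.par containing the options used above:

schemas=SCOTT
directory=TEST_DIR
dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
include=TABLE:"IN ('EMP', 'DEPT')"

expdp scott/tiger@db10g parfile=expdp_scott.par
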
A single import/export can include multiple references to the parameters, so to export tables, views and some packages we could use either of the following approaches.

INCLUDE=TABLE,VIEW,PACKAGE:"LIKE '%API'"

or

INCLUDE=TABLE
INCLUDE=VIEW
INCLUDE=PACKAGE:"LIKE '%API'"
Multiple objects can be targeted in one statement using the LIKE and IN operators.

EXCLUDE=SCHEMA:"LIKE 'SYS%'"

EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200','APEX_PUBLIC_USER','ANONYMOUS')"
The valid object type paths that can be included or excluded can be displayed using the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS views.
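For example, to browse the table-related paths that are valid at schema level:

SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE '%TABLE%';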

Network Exports/Imports (NETWORK_LINK)

The NETWORK_LINK parameter identifies a database link to be used as the source for a network export/import. The following database link will be used to demonstrate its use.

CONN / AS SYSDBA
GRANT CREATE DATABASE LINK TO test;

CONN test/test
CREATE DATABASE LINK remote_scott CONNECT TO scott IDENTIFIED BY tiger USING 'DEV';


In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to the source server. The objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. Both the local and remote users require the EXP_FULL_DATABASE role granted to them.

expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log

For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation. Both the local and remote users require the IMP_FULL_DATABASE role granted to them.

impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR logfile=impdpSCOTT.log remap_schema=SCOTT:TEST

Flashback Exports

The exp utility used the CONSISTENT=Y parameter to indicate the export should be consistent to a point in time. By default the expdp utility exports are only consistent on a per table basis. If you want all tables in the export to be consistent to the same point in time, you need to use the FLASHBACK_SCN or FLASHBACK_TIME parameter.

The FLASHBACK_TIME parameter value is converted to the approximate SCN for the specified time.

expdp ..... flashback_time=systimestamp

# In parameter file.
flashback_time="to_timestamp('09-05-2011 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"

# Escaped on command line.
expdp ..... flashback_time=\"to_timestamp\(\'09-05-2011 09:00:00\', \'DD-MM-YYYY HH24:MI:SS\'\)\"
Not surprisingly, you can make exports consistent to an earlier point in time by specifying an earlier time or SCN, provided you have enough UNDO space to keep a read consistent view of the data during the export operation.

If you prefer to use the SCN, you can retrieve the current SCN using one of the following queries.

SELECT current_scn FROM v$database;
SELECT DBMS_FLASHBACK.get_system_change_number FROM dual;
SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;


That SCN is then used with the FLASHBACK_SCN parameter.

expdp ..... flashback_scn=5474280
The following queries may prove useful for converting between timestamps and SCNs.

SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;
SELECT SCN_TO_TIMESTAMP(5474751) FROM dual;
In 11.2, the introduction of legacy mode means that you can use the CONSISTENT=Y parameter with the expdp utility if you wish.

Miscellaneous Information

Unlike the original exp and imp utilities, all Data Pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.

All Data Pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process, which uses Advanced Queuing. At runtime, an advanced queue table named after the job name is created and used by the master control process; the table is dropped on completion of the Data Pump job. The job and the advanced queue can be named using the JOB_NAME parameter. Cancelling the client process does not stop the associated Data Pump job. Issuing "ctrl+c" on the client during a job stops the client output and presents a command prompt. Typing "status" at this prompt allows you to monitor the current job.

Export> status

Job: SYS_EXPORT_FULL_01
  Operation: EXPORT
  Mode: FULL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: D:\TEMP\DB10G.DMP
    bytes written: 4,096

Worker 1 Status:
  State: EXECUTING
  Object Schema: SYSMAN
  Object Name: MGMT_CONTAINER_CRED_ARRAY
  Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
  Completed Objects: 261
  Total Objects: 261
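
If you have exited the client completely, you can reattach to a running job using the ATTACH parameter, giving the job name reported in the status output (SYS_EXPORT_FULL_01 in the example above):

expdp system/password@db10g attach=SYS_EXPORT_FULL_01

Once attached, the same interactive prompt commands ("status", "stop_job", "start_job", "continue_client", "kill_job") are available.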

Data pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dumpfiles to be created or read. The same wildcard can be used during the import to allow you to reference multiple files.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log

The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs.

system@db10g> select * from dba_datapump_jobs;

OWNER_NAME                     JOB_NAME                       OPERATION
------------------------------ ------------------------------ ------------------------------
JOB_MODE                       STATE                              DEGREE ATTACHED_SESSIONS
------------------------------ ------------------------------ ---------- -----------------
SYSTEM                         SYS_EXPORT_FULL_01             EXPORT
FULL                           EXECUTING                               1                 1

Data Pump API

As well as the expdp and impdp command-line utilities, Data Pump functionality is exposed through the DBMS_DATAPUMP PL/SQL package. The following is only a minimal sketch of a schema export submitted through the API (the job name, file names and schema are illustrative); once the job has started, the status can be checked using the DBA_DATAPUMP_JOBS query shown after it.
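
DECLARE
  l_handle NUMBER;
BEGIN
  -- Define a schema-mode export job (the job name is an arbitrary choice).
  l_handle := DBMS_DATAPUMP.open(
                operation => 'EXPORT',
                job_mode  => 'SCHEMA',
                job_name  => 'SCOTT_EXPORT_API');

  -- Write the dump file and log file to the TEST_DIR directory object.
  DBMS_DATAPUMP.add_file(
    handle    => l_handle,
    filename  => 'SCOTT_API.dmp',
    directory => 'TEST_DIR');

  DBMS_DATAPUMP.add_file(
    handle    => l_handle,
    filename  => 'expdpSCOTT_API.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);

  -- Restrict the job to the SCOTT schema.
  DBMS_DATAPUMP.metadata_filter(
    handle => l_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= ''SCOTT''');

  -- Start the job and detach; it keeps running as a server-side job.
  DBMS_DATAPUMP.start_job(l_handle);
  DBMS_DATAPUMP.detach(l_handle);
END;
/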

system@db10g> select * from dba_datapump_jobs;




I hope this article helped you. Your suggestions/feedback are most welcome.

Keep learning... Have a great day!!!

Monday 27 October 2014

Oracle Database 11g R2 Silent Installation (OUI)

In this article we will learn how to perform an Oracle Database 11g R2 silent installation (OUI) on Linux.

STEP 1: Hardware Requirements Check:
Note: Login to the system as a root user for performing the below activities.

Memory Requirements:

Minimum: 1 GB of RAM
Recommended: 2 GB of RAM
To determine the RAM size, enter the following command:

[root@oelinux ~]# grep MemTotal /proc/meminfo

Note: If the size of the RAM is less than the required size, then you must install more memory before continuing.

Available RAM               SWAP Space Requirement
Between 1 GB and 2 GB       1.5 times the size of the RAM
Between 2 GB and 16 GB      Equal to the size of the RAM
More than 16 GB             16 GB

To determine the size of the configured swap space, enter the following command:

[root@oelinux ~]# grep SwapTotal /proc/meminfo

Note: If you need to increase the swap space, see the operating system documentation.

Disk Space Requirements:

At least 1 GB of free disk space should be available in the /tmp directory.
To determine the amount of disk space available in the /tmp directory, enter the following command:

[root@oelinux ~]# df -kh /tmp

If there is less than 1 GB of free disk space available in the /tmp directory, then do one of the following:
  • Delete unnecessary files from the /tmp directory.
  • Increase the size of the /tmp directory (extend the file system that contains the /tmp directory).

Disk space requirements for software files and data files for each installation type on Linux x86-64:

Installation Type       Minimum for Software Files (GB)    Minimum for Data Files (GB)
Enterprise Edition      4.35                               1.7
Standard Edition        4.22                               1.5

To determine the amount of free disk space on the system, enter the following command:

[root@oelinux ~]# df -kh 

If the free disk space is not enough, contact your system administrator to increase it.


STEP 2: Package Requirements Check:

After the installation of OEL5_x64, the next task is to install all packages required by Oracle Database 11g R2 x64. During the installation process the Oracle Universal Installer performs prerequisite checks before the actual installation of the database. To ensure that these checks complete successfully, verify that all the packages listed below are installed properly. The following packages are required for Oracle Database 11g Release 2 (11.2) x64 on OEL5 x64:
  • binutils-2.17.50.0.6
  • compat-libstdc++-33-3.2.3
  • compat-libstdc++-33-3.2.3 (32 bit)
  • elfutils-libelf-0.125
  • elfutils-libelf-devel-0.125
  • gcc-4.1.2
  • gcc-c++-4.1.2
  • glibc-2.5-24
  • glibc-2.5-24 (32 bit)
  • glibc-common-2.5
  • glibc-devel-2.5
  • glibc-devel-2.5 (32 bit)
  • glibc-headers-2.5
  • ksh-20060214
  • libaio-0.3.106
  • libaio-0.3.106 (32 bit)
  • libaio-devel-0.3.106
  • libaio-devel-0.3.106 (32 bit)
  • libgcc-4.1.2
  • libgcc-4.1.2 (32 bit)
  • libstdc++-4.1.2
  • libstdc++-4.1.2 (32 bit)
  • libstdc++-devel 4.1.2
  • make-3.81
  • sysstat-7.0.2
To determine whether the required packages are installed, enter commands similar to the following:
[root@oelinux ~]# rpm -q package_name
e.g: [root@oelinux ~]# rpm -q binutils-2.17.50.0.6
If a package is not installed, then install it as mentioned below:
[root@oelinux ~]# rpm -ivh package_name
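If you prefer to check them all in one pass, a small shell loop such as the following can help (a sketch only; it checks package names without versions, so compare the reported versions against the list above):

for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
           libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat; do
  rpm -q "$pkg" || echo "*** $pkg is NOT installed"
done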

N.B: To use ODBC, you must also install the following additional ODBC RPMs:
  • unixODBC-2.2.11 (32-bit)
  • unixODBC-devel-2.2.11 (64-bit)
  • unixODBC-2.2.11 (64-bit)
The ODBC packages can be downloaded from http://www.unixodbc.org and installed.


STEP 3: Network Configuration:
Edit the /etc/hosts file:

Ensure that the hostname (oelinux) is not mapped to the loopback address in the /etc/hosts file.
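For example, an /etc/hosts file similar to the following keeps the hostname on its own address rather than on the loopback line (the IP address shown is only an illustration; use the server's real address):

127.0.0.1      localhost localhost.localdomain
192.168.1.50   oelinux.localdomain   oelinux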

Disable SELinux and Firewall:

[root@oelinux ~]# system-config-selinux
[root@oelinux ~]# service iptables stop
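Note that "service iptables stop" only disables the firewall until the next reboot. To make both changes persistent (standard OEL5 commands and file locations; adjust for your release):

[root@oelinux ~]# chkconfig iptables off
[root@oelinux ~]# vim /etc/selinux/config     (set SELINUX=permissive or SELINUX=disabled)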

STEP 4: Creation of Required O/S Users and Groups:
The following local operating system user and groups are required for oracle database installation:
  • Oracle Inventory Group -- oinstall
  • OSDBA Group -- dba
  • OSOPER Group -- oper
  • Oracle Software Owner User -- oracle
Check whether the groups and user exist; if they do not, create them.
Follow the below mentioned steps...
  1. To determine whether "oinstall" group exists, type the following command:
    [root@oelinux ~]# more /etc/oraInst.loc
    If the output shows the oinstall group, then the group exists and need not be created.
  2. To determine whether "dba" group exists, type the following command:
    [root@oelinux ~]# grep dba /etc/group
    If the output shows the dba group, then the group exists and need not be created.
  3. To determine whether "oracle" user exists, type the following command:
    [root@oelinux ~]# id oracle
    If the output shows the oracle user, then the user exists and need not be created.
    The output should be similar to following, indicating "oinstall" as primary and "dba" as secondary group.
    uid=1001(oracle) gid=1001(oinstall) groups=1002(dba),1003(oper).
    If the primary group is not "oinstall" or the user "oracle" is not a member of the "dba" group, then modify the user "oracle" as mentioned below:
    [root@oelinux ~]# usermod -g oinstall -G dba,oper oracle
  4. If the user or the required groups do not exist, then create them as mentioned below:
    [root@oelinux ~]# groupadd -g 1001 oinstall
    [root@oelinux ~]# groupadd -g 1002 dba
    [root@oelinux ~]# groupadd -g 1003 oper

    [root@oelinux ~]# useradd -m -u 1001 -g oinstall -G dba,oper oracle
    Now check the status of "oracle" user.
    [root@oelinux ~]# id oracle
    The output should be similar to following, indicating "oinstall" as primary and "dba" as secondary group.
    uid=1001(oracle) gid=1001(oinstall) groups=1002(dba),1003(oper).
    Now set the password of "oracle" user.
    [root@oelinux ~]# passwd oracle
    and input a password for "oracle" user.
  5. Verify That the User nobody Exists:
    Before installation of oracle software, complete the following steps to verify that the user "nobody" exists on the system:
    To determine if the user exists, enter the following command:
    [root@oelinux ~]# id nobody
    If this command displays information about the "nobody" user, then you do not have to create that user.
    If the user "nobody" does not exist, then enter the following command to create it:
    [root@oelinux ~]# useradd nobody
STEP 5: Configuration of Kernel Parameters:
Verify that the kernel parameters shown below are set to values greater than or equal to the minimum values shown.
Using any text editor, edit the /etc/sysctl.conf file, and add or edit lines similar to the following:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

[root@oelinux ~]# vim /etc/sysctl.conf

Values added to /etc/sysctl.conf persist across reboots. To apply these kernel parameter changes to the running system without a reboot, enter the following command:
[root@oelinux ~]# sysctl -p

If the output of the above command shows any error, rectify it before proceeding to the next step.
To improve performance on Linux systems, Oracle recommends increasing the shell limits for the Oracle software owner, the "oracle" user.
Update the resource limits in the /etc/security/limits.conf configuration file for the installation owner (here the "oracle" user):
----------------------------
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
----------------------------
[root@oelinux ~]# vim /etc/security/limits.conf

STEP 6: Creation of Required Directories:

The next step is to create a directory that will be used to store the Oracle Database software, the Oracle Database datafiles and the Flash Recovery Area.
After the directory is created, we must then specify the correct owner, group, and permissions for it.

[root@oelinux ~]# mkdir -p /data/oracle/app/
[root@oelinux ~]# chown -R oracle:oinstall /data/oracle/app/
[root@oelinux ~]# chmod -R 775 /data/oracle/app/

STEP 7: Configuring the oracle User's Environment:

Set the default file mode creation mask (umask) to 022 for "oracle" user. For setting so, do the following:
[root@oelinux ~]# su - oracle 

[oracle@oelinux ~]$ vim ~/.bash_profile

Add a line as written below and save the file.

umask 022

N.B: You do not have to set the DISPLAY environment variable if you are completing a silent mode installation.

STEP 8: Installation of Oracle Database using silent mode: 

After configuring the oracle user's environment, create a response file for the silent installation and configuration of the database.
A template response file is available in the response directory of the installation media.
Copy that file and edit it for your environment; the installation and configuration of the database are then driven entirely by the response file.
Specify the installation options in the response file; in response-file (key=value) form the settings used here are:

oracle.install.option=INSTALL_DB_AND_CONFIG
ORACLE_HOSTNAME=oelinux
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/data/oracle/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/data/oracle/app/oracle/product/11.2.0/db_1
ORACLE_BASE=/data/oracle/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.EEOptionsSelection=false
oracle.install.db.DBA_GROUP=oinstall
oracle.install.db.OPER_GROUP=oinstall
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName=TEST
oracle.install.db.config.starterdb.SID=TEST
oracle.install.db.config.starterdb.characterSet=AL32UTF8
oracle.install.db.config.starterdb.memoryOption=true
oracle.install.db.config.starterdb.memoryLimit=700
oracle.install.db.config.starterdb.installExampleSchemas=false
oracle.install.db.config.starterdb.enableSecuritySettings=true
oracle.install.db.config.starterdb.password.ALL=Amit123
oracle.install.db.config.starterdb.control=DB_CONTROL
oracle.install.db.config.starterdb.automatedBackup.enable=false
oracle.install.db.config.starterdb.storageType=FILE_SYSTEM_STORAGE
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=/data/oracle/app/oracle/oradata
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=/data/oracle/app/oracle/flash_recovery_area
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES


Open a terminal, go to the software directory, and run the installer with the response file option as the "oracle" user.
[oracle@oelinux ~]$ ./runInstaller -silent -responseFile /data/oracle/db.rsp -ignorePrereq -ignoreSysPrereqs -printtime -showProgress -force

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 183254 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 16383 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-10-27_01-01-43PM. Please wait ...[oracle@NVMBD1BZY150D00 database]$ [WARNING] [INS-30011] The ADMIN password entered does not conform to the Oracle recommended standards.
   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
   ACTION: Provide a password that conforms to the Oracle recommended standards.
You can find the log of this install session at:
 /data/oracle/app/oraInventory/logs/installActions2014-10-27_01-01-43PM.log

Prepare in progress.
..................................................   8% Done.

Prepare successful.

Copy files in progress.
..................................................   13% Done.
..................................................   19% Done.
..................................................   24% Done.
..................................................   29% Done.
..................................................   36% Done.
..................................................   41% Done.
..................................................   46% Done.
..................................................   51% Done.
..................................................   56% Done.
..................................................   61% Done.
..................................................   66% Done.
........................................
Copy files successful.

Link binaries in progress.
..........
Link binaries successful.

Setup files in progress.
..................................................   71% Done.
..................................................   76% Done.

Setup files successful.
The installation of Oracle Database 11g was successful.
Please check '/data/oracle/app/oraInventory/logs/silentInstall2014-10-27_01-01-43PM.log' for more details.

Oracle Net Configuration Assistant in progress.
..................................................   86% Done.

Oracle Net Configuration Assistant successful.

Oracle Database Configuration Assistant in progress.
..................................................   95% Done.

Oracle Database Configuration Assistant successful.

Execute Root Scripts in progress.

As a root user, execute the following script(s):
        1. /data/oracle/app/oracle/product/11.2.0/db_1/root.sh


..................................................   100% Done.

Execute Root Scripts successful.
Successfully Setup Software.

Wait for the installation to prompt for executing the root scripts.
Open a terminal and execute the scripts mentioned in the above output as the "root" user.
[root@oelinux ~]# /data/oracle/app/oraInventory/orainstRoot.sh

After this, run the second script, also as the "root" user.

[root@oelinux ~]# /data/oracle/app/oracle/product/11.2.0/db_1/root.sh

This finishes the Oracle database installation and configuration.

View the setup log files:

[root@oelinux ~]# cat /data/oracle/app/oraInventory/logs/silentinstallxxxxx.log

[root@oelinux ~]# cat /data/oracle/app/oraInventory/logs/installactionsxxxxx.log

Set Oracle User Profile :

Open the profile file and add the below mentioned lines:

export ORACLE_HOME=/data/oracle/app/oracle/product/11.2.0/db_1
export ORACLE_SID=TEST
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin

[oracle@oelinux ~]$ vim ~/.bash_profile

Use vim or any editor of your choice to edit the profile of the "oracle" user.
After editing the profile, source it to check for errors and load it.

[oracle@oelinux ~]$ . ~/.bash_profile
If there is any syntax error in the profile, it will be reported. If there is no error, the prompt will return silently.

Connect to the Database:
 
[oracle@oelinux ~]$ ps -eaf | grep pmon
[oracle@oelinux ~]$ export ORACLE_SID=TEST
[oracle@oelinux ~]$ sqlplus sys@TEST as sysdba
And you will be connected to the database.
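Optionally, confirm that the instance is open with a quick query, for example:

SQL> SELECT instance_name, status FROM v$instance;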


I hope this article helped you. Your suggestions/feedback are most welcome.

Keep learning... Have a great day!!!