    Friday, April 3, 2009
Oracle10g: Data Pump

    OVERVIEW
    Oracle Data Pump is a feature of Oracle Database 10g that enables very fast bulk data and metadata movement between Oracle databases. Oracle Data Pump provides new high-speed, parallel Export and Import utilities (expdp and impdp) as well as a Web-based Oracle Enterprise Manager interface.

The following are the major features of the Oracle Data Pump utility:

    • The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. (PARALLEL)
    • The ability to restart Data Pump jobs. (START_JOB)
• A Data Pump Export or Import client can be attached to only one job at a time; however, multiple clients and multiple jobs can be running at the same time. (ATTACH)
    • Support for export and import operations over the network, in which the source of each operation is a remote instance. (NETWORK_LINK)
    • The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced. (REMAP_DATAFILE)
    • Enhanced support for remapping tablespaces during an import operation. (REMAP_TABLESPACE)
    • Support for filtering the metadata that is exported and imported, based upon objects and object types. (INCLUDE & EXCLUDE)
    • Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.
    • The ability to estimate how much space an export job would consume, without actually performing the export. (ESTIMATE_ONLY)
    • Most Data Pump export and import operations occur on the Oracle database server.

Oracle Data Pump is made up of three distinct parts:
• The command-line clients, expdp and impdp
• The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
• The DBMS_METADATA PL/SQL package (also known as the Metadata API)


    PROCESS STRUCTURE
    There are various processes that comprise a Data Pump job. They are described in the order of their creation.
    Client Process – This is the process that makes calls to the Data Pump API.
    Shadow Process – This is the standard Oracle shadow (or foreground) process created when a client logs in to Oracle Database.
Master Control Process (MCP) – As the name implies, the MCP controls the execution and sequencing of a Data Pump job. An MCP has a process name of the form <instance>_DMnn_<pid>.
Worker Process – Upon receipt of a START_JOB request, the MCP creates a number of worker processes according to the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP (primarily unloading and loading of metadata and data), and maintain the object rows that make up the bulk of the master table. A worker process has a name of the form <instance>_DWnn_<pid>.
    Parallel Query (PQ) Process – If the External Tables data access method is chosen for loading or unloading a table or partition, some parallel query processes are created by the worker process that was given the load or unload assignment, and the worker process then acts as the query coordinator.
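
A quick way to see the sessions attached to a running job is to join DBA_DATAPUMP_SESSIONS to V$SESSION (a minimal sketch; the columns selected are only illustrative):

SQL> SELECT s.sid, s.serial#, s.program, d.owner_name, d.job_name
       FROM v$session s, dba_datapump_sessions d
      WHERE s.saddr = d.saddr;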

    DATA MOVEMENT
    Data Pump supports two access methods to load and unload table row data: direct path and external tables.

    The Direct Path access method is the faster of the two, but does not support intrapartition parallelism. The External Tables access method does support this function, and therefore may be chosen to load or unload a very large table or partition.


    METADATA MOVEMENT
The Metadata API (DBMS_METADATA) is used by worker processes for all metadata unloading and loading. Unlike the original export utility exp (which stored object definitions as SQL DDL), the Metadata API extracts object definitions from the database and writes them to the dumpfile set as XML documents.
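
The Metadata API can also be called directly from SQL. For example, assuming the HR sample schema is installed, the following call returns the DDL for one of its tables:

SQL> SET LONG 10000
SQL> SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMPLOYEES', 'HR') FROM dual;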


    MONITORING JOB STATUS
DATA PUMP VIEWS – Data Pump maintains a number of user- and DBA-accessible views to monitor the progress of jobs:

DBA_DATAPUMP_JOBS: This shows a summary of all active Data Pump jobs on the system.
USER_DATAPUMP_JOBS: This shows a summary of the current user's active Data Pump jobs.
DBA_DATAPUMP_SESSIONS: This shows all sessions currently attached to Data Pump jobs.
V$SESSION_LONGOPS: A row is maintained in this view for each active Data Pump job, showing its progress. The OPNAME column displays the Data Pump job name.
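
For example, the following queries (a minimal sketch) list the active jobs and then show the work completed for every operation still in progress; the OPNAME column can be used to pick out the Data Pump job of interest:

SQL> SELECT owner_name, job_name, operation, job_mode, state
       FROM dba_datapump_jobs;

SQL> SELECT opname, sofar, totalwork,
            ROUND(sofar/totalwork*100, 2) pct_done
       FROM v$session_longops
      WHERE totalwork > 0 AND sofar <> totalwork;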

    DIRECTORY MANAGEMENT
    Because Data Pump is server-based, rather than client-based, dump files, log files,
    and SQL files are accessed relative to server-based directory paths. Data Pump
    requires you to specify directory paths as directory objects. A directory object maps
    a name to a directory path on the file system. The directory objects enforce a security model that can be used by DBAs to control access to these files.

For example, the following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

    SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

    After a directory is created, the user creating the directory object needs to grant
    READ or WRITE permission on the directory to other users. For example, to allow
    the Oracle database to read and write files on behalf of user hr in the directory
    named by dpump_dir1, the DBA must execute the following command:

    SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
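
To check which directory objects already exist and where they point, the DBA_DIRECTORIES view can be queried:

SQL> SELECT directory_name, directory_path FROM dba_directories;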

    Data Pump Export

    Data Pump Export is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility.

    DATA PUMP EXPORT MODES

    Full Export Mode
    A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded.

    Schema Mode
    A schema export is specified using the SCHEMAS parameter. This is the default export mode.

    Table Mode
    A table export is specified using the TABLES parameter. In table mode, only a specified set of tables, partitions, and their dependent objects are unloaded.

    Tablespace Mode
    A tablespace export is specified using the TABLESPACES parameter. In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded.

    Transportable Tablespace Mode
A transportable tablespace export is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is unloaded.
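
The examples later in this post use table, schema, and full mode; for the two tablespace-based modes an invocation looks roughly like the following (the directory, dump file, and tablespace names are placeholders only):

$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6

$ expdp system/password DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp LOGFILE=tts.log
  TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y

Note that for a transportable tablespace export the tablespaces must first be placed in read-only mode, and the datafiles must be copied to the target system along with the dump file.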


expdp syntax:

    $ expdp help=y

    Export: Release 10.2.0.1.0 - Production on Wednesday, 30 May, 2007 16:24:26

    Copyright (c) 2003, 2005, Oracle. All rights reserved.


    The Data Pump export utility provides a mechanism for transferring data objects
    between Oracle databases. The utility is invoked with the following command:

    Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

    You can control how Export runs by entering the 'expdp' command followed
    by various parameters. To specify parameters, you use keywords:

    Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
    Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
    or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

    USERID must be the first parameter on the command line.

Keyword                Description (Default)
------------------------------------------------------------------------------
ATTACH                 Attach to existing job, e.g. ATTACH [=job name].
COMPRESSION            Reduce size of dumpfile contents where valid
                       keyword values are: (METADATA_ONLY) and NONE.
CONTENT                Specifies data to unload where the valid keywords are:
                       (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY              Directory object to be used for dumpfiles and logfiles.
DUMPFILE               List of destination dump files (expdat.dmp),
                       e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD    Password key for creating encrypted column data.
ESTIMATE               Calculate job estimates where the valid keywords are:
                       (BLOCKS) and STATISTICS.
ESTIMATE_ONLY          Calculate job estimates without performing the export.
EXCLUDE                Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE               Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN          SCN used to set session snapshot back to.
FLASHBACK_TIME         Time used to get the SCN closest to the specified time.
FULL                   Export entire database (N).
HELP                   Display Help messages (N).
INCLUDE                Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME               Name of export job to create.
LOGFILE                Log file name (export.log).
NETWORK_LINK           Name of remote database link to the source system.
NOLOGFILE              Do not write logfile (N).
PARALLEL               Change the number of active workers for current job.
PARFILE                Specify parameter file.
QUERY                  Predicate clause used to export a subset of a table.
SAMPLE                 Percentage of data to be exported.
SCHEMAS                List of schemas to export (login schema).
STATUS                 Frequency (secs) job status is to be monitored where
                       the default (0) will show new status when available.
TABLES                 Identifies a list of tables to export - one schema only.
TABLESPACES            Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK   Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES  List of tablespaces from which metadata will be unloaded.
VERSION                Version of objects to export where valid keywords are:
                       (COMPATIBLE), LATEST, or any valid database version.

    The following commands are valid while in interactive mode.
    Note: abbreviations are allowed

Command                Description
------------------------------------------------------------------------------
ADD_FILE               Add dumpfile to dumpfile set.
CONTINUE_CLIENT        Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT            Quit client session and leave job running.
FILESIZE               Default filesize (bytes) for subsequent ADD_FILE commands.
HELP                   Summarize interactive commands.
KILL_JOB               Detach and delete job.
PARALLEL               Change the number of active workers for current job.
                       PARALLEL=<number of workers>.
START_JOB              Start/resume current job.
STATUS                 Frequency (secs) job status is to be monitored where
                       the default (0) will show new status when available.
                       STATUS[=interval]
STOP_JOB               Orderly shutdown of job execution and exits the client.
                       STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                       Data Pump job.

    EXAMPLES OF USING DATA PUMP EXPORT

    1. Performing a Table-Mode Export

    Example 1
    $ expdp hr/hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y

    Because user hr is exporting tables in his own schema, it is not necessary to specify
    the schema name for the tables. The NOLOGFILE=y parameter indicates that an
    Export log file of the operation will not be generated.

    2. Data-Only Unload of Selected Tables and Rows

    Example 2
    Contents of exp.par parameter file,

    DIRECTORY=dpump_dir1
    DUMPFILE=dataonly.dmp
    CONTENT=DATA_ONLY
    EXCLUDE=TABLE:"IN ('COUNTRIES', 'LOCATIONS', 'REGIONS')"
    QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"

    You can issue the following command to execute the exp.par parameter file:

    $ expdp hr/hr PARFILE=exp.par

    A schema-mode export (the default mode) is performed, but the CONTENT parameter effectively limits the export to an unload of just the tables.

    3. Estimating Disk Space Needed in a Table-Mode Export

    Example 3
    $ expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y TABLES=employees,
    departments, locations LOGFILE=estimate.log

    The estimate is printed in the log file and displayed on the client's standard output
    device. The estimate is for table row data only; it does not include metadata.

    4. Performing a Schema-Mode Export

    Example 4
    $ expdp hr/hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log

    In a schema-mode export, only objects belonging to the corresponding schemas are unloaded. Because schema mode is the default mode, it is not necessary to specify the SCHEMAS parameter on the command line, unless you are specifying more than one schema or a schema other than your own.

    5. Performing a Parallel Full Database Export

    Example 5
    $ expdp hr/hr FULL=y DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp
    FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull

    Because this is a full database export, all data and metadata in the database will be exported. Dump files full101.dmp, full201.dmp, full102.dmp, and so on will be created in a round-robin fashion in the directories pointed to by the dpump_dir1 and dpump_dir2 directory objects.

    6. Using Interactive Mode to Stop and Reattach to a Job

    To start this example, reexecute the parallel full export in Example 5. While the
    export is running, press Ctrl+C. This will start the interactive-command interface of
    Data Pump Export. In the interactive interface, logging to the terminal stops and the
    Export prompt is displayed.
    Issue the following command to stop the job:

    Export> STOP_JOB=IMMEDIATE
    Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and exits the client. The following example shows how to reattach to the job.

    Example
    Enter the following command to reattach to the job you just stopped:

    $ expdp hr/hr ATTACH=EXPFULL

    After the job status is displayed, you can issue the CONTINUE_CLIENT command to
    resume logging mode and restart the expfull job.

    Export> CONTINUE_CLIENT

    A message is displayed that the job has been reopened, and processing status is output to the client.
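
Other interactive commands can be issued at the same prompt; for instance, after reattaching you could check the job status, add workers, and then return to logging mode (an illustrative session, not verbatim output):

Export> STATUS
Export> PARALLEL=5
Export> CONTINUE_CLIENT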


    Data Pump Import

    Data Pump Import is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.


    DATA PUMP IMPORT MODES

    Full Import Mode
    A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports.

    Schema Mode
A schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the specified schemas are loaded. The source can be a full or schema-mode export dump file set or another database.

    Table Mode
    A table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database.

    Tablespace Mode
    A tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database.

    Transportable Tablespace Mode
    A transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The datafiles specified by the TRANSPORT_DATAFILES parameter must be made available from the source system for use in the target database, typically by copying them over to the target system.
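
The import examples later in this post cover table, schema, and network mode; a transportable tablespace import from a dump file would look roughly like this (the datafile path and file names are placeholders only):

$ impdp system/password DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp LOGFILE=tts_imp.log
  TRANSPORT_DATAFILES='/u01/oradata/orcl/tbs_1.dbf'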

impdp syntax:
    $ impdp help=y

    Import: Release 10.2.0.1.0 - Production on Wednesday, 30 May, 2007 16:59:40

    Copyright (c) 2003, 2005, Oracle. All rights reserved.


    The Data Pump Import utility provides a mechanism for transferring data objects
    between Oracle databases. The utility is invoked with the following command:

    Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

    You can control how Import runs by entering the 'impdp' command followed
    by various parameters. To specify parameters, you use keywords:

    Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
    Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

    USERID must be the first parameter on the command line.

Keyword                Description (Default)
------------------------------------------------------------------------------
ATTACH                 Attach to existing job, e.g. ATTACH [=job name].
CONTENT                Specifies data to load where the valid keywords are:
                       (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY              Directory object to be used for dump, log, and sql files.
DUMPFILE               List of dumpfiles to import from (expdat.dmp),
                       e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD    Password key for accessing encrypted column data.
                       This parameter is not valid for network import jobs.
ESTIMATE               Calculate job estimates where the valid keywords are:
                       (BLOCKS) and STATISTICS.
EXCLUDE                Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN          SCN used to set session snapshot back to.
FLASHBACK_TIME         Time used to get the SCN closest to the specified time.
FULL                   Import everything from source (Y).
HELP                   Display help messages (N).
INCLUDE                Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME               Name of import job to create.
LOGFILE                Log file name (import.log).
NETWORK_LINK           Name of remote database link to the source system.
NOLOGFILE              Do not write logfile.
PARALLEL               Change the number of active workers for current job.
PARFILE                Specify parameter file.
QUERY                  Predicate clause used to import a subset of a table.
REMAP_DATAFILE         Redefine datafile references in all DDL statements.
REMAP_SCHEMA           Objects from one schema are loaded into another schema.
REMAP_TABLESPACE       Tablespace objects are remapped to another tablespace.
REUSE_DATAFILES        Tablespace will be initialized if it already exists (N).
SCHEMAS                List of schemas to import.
SKIP_UNUSABLE_INDEXES  Skip indexes that were set to the Index Unusable state.
SQLFILE                Write all the SQL DDL to a specified file.
STATUS                 Frequency (secs) job status is to be monitored where
                       the default (0) will show new status when available.
STREAMS_CONFIGURATION  Enable the loading of Streams metadata.
TABLE_EXISTS_ACTION    Action to take if imported object already exists.
                       Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES                 Identifies a list of tables to import.
TABLESPACES            Identifies a list of tablespaces to import.
TRANSFORM              Metadata transform to apply to applicable objects.
                       Valid transform keywords: SEGMENT_ATTRIBUTES, STORAGE,
                       OID, and PCTSPACE.
TRANSPORT_DATAFILES    List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK   Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES  List of tablespaces from which metadata will be loaded.
                       Only valid in NETWORK_LINK mode import operations.
VERSION                Version of objects to import where valid keywords are:
                       (COMPATIBLE), LATEST, or any valid database version.
                       Only valid for NETWORK_LINK and SQLFILE.

    The following commands are valid while in interactive mode.
    Note: abbreviations are allowed

Command                Description (Default)
------------------------------------------------------------------------------
CONTINUE_CLIENT        Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT            Quit client session and leave job running.
HELP                   Summarize interactive commands.
KILL_JOB               Detach and delete job.
PARALLEL               Change the number of active workers for current job.
                       PARALLEL=<number of workers>.
START_JOB              Start/resume current job.
                       START_JOB=SKIP_CURRENT will start the job after skipping
                       any action which was in progress when job was stopped.
STATUS                 Frequency (secs) job status is to be monitored where
                       the default (0) will show new status when available.
                       STATUS[=interval]
STOP_JOB               Orderly shutdown of job execution and exits the client.
                       STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                       Data Pump job.

    EXAMPLES OF USING DATA PUMP IMPORT

    1. Performing a Data-Only Table-Mode Import

    Example 1
    $ impdp hr/hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp
    NOLOGFILE=y

    The CONTENT=DATA_ONLY parameter filters out any database object definitions (metadata). Only table row data is loaded.

    2. Performing a Schema-Mode Import
    Example 2

$ impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
EXCLUDE=CONSTRAINT,REF_CONSTRAINT,INDEX TABLE_EXISTS_ACTION=REPLACE

    The EXCLUDE parameter filters the metadata that is imported. For the given mode
    of import, all the objects contained within the source, and all their dependent
    objects, are included except those specified in an EXCLUDE statement. If an object is
    excluded, all of its dependent objects are also excluded.

    The TABLE_EXISTS_ACTION=REPLACE parameter tells Import to drop the table if
    it already exists and to then re-create and load it using the dump file contents.

    3. Performing a Network-Mode Import
    Example 3

    $ impdp hr/hr TABLES=employees REMAP_SCHEMA=hr:scott DIRECTORY=dpump_dir1
    NETWORK_LINK=dblink

This example imports the employees table from the hr schema into the scott schema. The dblink references a source database that is different from the target database.
    REMAP_SCHEMA loads all the objects from the source schema into the target schema.

    The Data Pump API

    The Data Pump API, DBMS_DATAPUMP, provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.

    How Does the Client Interface to the Data Pump API Work?
    The main structure used in the client interface is a job handle, which appears to the caller as an integer. Handles are created using the DBMS_DATAPUMP.OPEN or DBMS_DATAPUMP.ATTACH function. Other sessions can attach to a job to monitor and control its progress. This allows a DBA to start up a job before departing from work and then watch the progress of the job from home. Handles are session specific. The same job can create different handles in different sessions.

    What Are the Basic Steps in Using the Data Pump API?
To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP package. The following steps list the basic activities involved in using the Data Pump API, presented in the order in which they would generally be performed; a short PL/SQL sketch of these steps follows the list:

    1. Execute the DBMS_DATAPUMP.OPEN procedure to create a Data Pump job and its infrastructure.
    2. Define any parameters for the job.
    3. Start the job.
    4. Optionally, monitor the job until it completes.
    5. Optionally, detach from the job and reattach at a later time.
    6. Optionally, stop the job.
    7. Optionally, restart the job, if desired.
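
As a minimal sketch of these steps, the following anonymous PL/SQL block runs a schema-mode export of HR through DBMS_DATAPUMP. The job name, dump file name, and directory object are only illustrative, and the block must be run by a user with the appropriate export privileges:

SET SERVEROUTPUT ON

DECLARE
  h1        NUMBER;           -- job handle returned by OPEN
  job_state VARCHAR2(30);
BEGIN
  -- Step 1: create the job and its infrastructure
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                           job_mode  => 'SCHEMA',
                           job_name  => 'HR_EXP_JOB');

  -- Step 2: define parameters - a dump file and a schema filter
  DBMS_DATAPUMP.ADD_FILE(handle    => h1,
                         filename  => 'hr_exp.dmp',
                         directory => 'DPUMP_DIR1');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h1,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''HR'')');

  -- Step 3: start the job
  DBMS_DATAPUMP.START_JOB(h1);

  -- Step 4: monitor it until it completes (steps 5-7, using DETACH, ATTACH,
  -- STOP_JOB and START_JOB, would be used instead for a long-running job)
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h1, job_state => job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/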


    posted by Srinivasan .R @ 12:58 PM  