Monday, 31 August 2009

Performance Tuning - Question 053.

What happens when the DBA issues the ALTER SYSTEM SET UNDO_TABLESPACE= command? (Choose two.)

A. Existing transactions will be forced to roll back; however, they will automatically restart in the newly activated tablespace, transparent to the end user.

B. All new transactions will begin using the newly activated undo tablespace.

C. Existing transactions will continue to use the previous tablespace until they are complete.

D. Existing transactions will continue to use the previous tablespace for read consistency, but use the new tablespace for undo segment updates.

Explanation:

Answer B and C: The DBA can change the current undo tablespace to a new one, but existing transactions will continue to use the tablespace that they were using when the transaction started. So although only one tablespace can be the "active" undo tablespace, it is possible to have active undo segments in multiple tablespaces. This is true while the old undo tablespace has a status of PENDING OFFLINE. Once it goes OFFLINE, access to the old undo tablespace is no longer permitted.
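A minimal sketch of the switch described above (tablespace name, datafile path, and size are illustrative):

```sql
-- Create a new undo tablespace and make it the active one.
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u02/oradata/or9i/undotbs02.dbf' SIZE 200M;

ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2;

-- The old tablespace may show PENDING OFFLINE while existing
-- transactions in it complete.
SELECT tablespace_name, status
FROM   dba_rollback_segs;
```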

Sunday, 30 August 2009

Performance Tuning - Question 052.

The DBA reduces the likelihood of application failure due to ORA-01555 errors by setting which parameters?

A. UNDO_RETENTION and UNDO_SUPPRESS_MESSAGES

B. UNDO_RETENTION and UNDO_SUPPRESS_ERRORS

C. UNDO_RETENTION and UNDO_SPACE_MANAGEMENT

D. REDO_RETENTION

E. UNDO_RETENTION and REDO_RETENTION

Explanation:

Answer B: When using automatic undo management, the initialization parameter UNDO_RETENTION specifies how long to keep a read-consistent block in the undo segment after the transaction completes. REDO_RETENTION, UNDO_SPACE_MANAGEMENT, and UNDO_SUPPRESS_MESSAGES are not valid parameters, which eliminates answers A, C, D, and E. That leaves UNDO_RETENTION and UNDO_SUPPRESS_ERRORS as the only pair of valid parameters.
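As a sketch, assuming automatic undo management is already enabled (the retention value is illustrative):

```sql
-- Retain committed undo for 15 minutes (900 seconds) to reduce
-- the likelihood of ORA-01555 "snapshot too old" errors.
ALTER SYSTEM SET UNDO_RETENTION = 900;
```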

Saturday, 29 August 2009

Performance Tuning - Question 051.

On large DSS systems where the transactions are relatively few and large, which approach should you use to define the number and size of rollback segments?

A. Smaller and more rollback segments.

B. Fewer and smaller rollback segments.

C. Larger and fewer rollback segments.

D. Larger and more rollback segments.

Explanation:

Answer C: Since there are few concurrent transactions, there is less demand on the rollback segment header blocks, so we should create fewer rollback segments. However, transactions will probably be much larger and run longer, so we should increase the size of the rollback segments appropriately. Smaller and more numerous rollback segments suit standard OLTP operations. Of course, hybrid systems will need a mix.

Friday, 28 August 2009

Performance Tuning - Question 050

Which of the following are not tuning goals for undo segment management?

A. No transaction should ever run out of undo segment space.

B. Undo segments should never wrap during normal operation.

C. Readers should always see the read-consistent images they need.

D. Transactions should never wait for access to undo segments.

E. Undo segments should never extend during normal operation.



Explanation:

Answer B: Wrapping is expected and is a normal operation. Each of the other answers is a valid undo tuning goal: transactions should never wait for access to undo segments and they should never run out of undo space; undo segments should not dynamically extend; and readers should always see the read-consistent view that they require for their application. Unnecessary waits can be remedied with more undo segments. Space issues can usually be resolved by adding more space to the undo tablespace. Dynamic extension, in a manually configured undo environment, is controlled by properly setting the extent size and the optimal number of extents. Read-consistency issues can usually be resolved by adding more space to rollback segments, adding more rollback segments, or increasing the UNDO_RETENTION parameter.

Thursday, 27 August 2009

Performance Tuning - Question 049

V$ROLLSTAT can be used to determine?

A. The performance characteristics of all undo segments defined in the database.

B. The performance characteristics of all undo segments currently online in the database.

C. Only whether the STATUS of an undo segment is ONLINE or OFFLINE.

D. The performance characteristics of all manually configured rollback segments defined in the database only.

E. The performance characteristics of all manually managed redo segments defined in the database only.



Explanation:

Answer B: The V$ROLLSTAT view is used for either manually or automatically configured undo segments, but not both at the same time, because the database will only support one or the other. V$ROLLSTAT is limited to one row for each undo segment that is currently online.
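A typical way to read the view, sketched here, is to join it to V$ROLLNAME for the segment names:

```sql
-- One row per currently online undo segment.
SELECT n.name, s.gets, s.waits, s.writes, s.wraps, s.extends
FROM   v$rollstat s, v$rollname n
WHERE  s.usn = n.usn;
```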

Wednesday, 26 August 2009

Performance Tuning - Question 048

In most cases, the DBA can tune latches and minimize latch contention by configuring which latch parameters?

A. DB_LATCH_COUNT

B. Setting DB_LATCH_CONTENTION=SAFE

C. Setting DB_LATCH_MONITORING=AUTO

D. The DBA doesn't tune latches; instead, the DBA tunes the processes that contribute to latch contention.

E. By setting the values for individual problem latches in the init.ora file.



Explanation:

Answer D: In Oracle9i, there are no parameters that directly tune latches. Latch contention is evidence that some other application code or database parameter needs to be tuned. Hidden parameters (those beginning with an underscore) can be used to configure some latches, but they are unsupported unless you have been specifically instructed to use them by Oracle Support.

Tuesday, 25 August 2009

Performance Tuning - Question 047

Which of the following steps should you take to diagnose contention for latches? (Choose two.)

A. Query V$SYSSTAT to see if the latch free event has a high value for the TIME_WAITED column.

B. Query V$SYSTEM_EVENT to see if the latch free event has a high value for the TIME_WAITED column.

C. Query V$LATCH and V$LATCH_CHILDREN based on the results of V$SYSTEM_EVENT.

D. Query V$LATCH and V$LATCH_CHILDREN based on the results of V$SYSSTAT.



Explanation:

Answer B and C: The first place to start is the V$SYSTEM_EVENT view, looking for latch free waits. If the TIME_WAITED column has a high value, then query V$LATCH, and V$LATCH_CHILDREN if needed, to investigate the specific latch or child latch that is contributing the most to the time waited. Answers A and D specify an incorrect dynamic performance view for wait events.
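The two diagnostic steps above can be sketched as (column choices are illustrative):

```sql
-- Step 1: look for significant 'latch free' waits system-wide.
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'latch free';

-- Step 2: identify which latches contribute the most contention.
SELECT name, gets, misses, sleeps
FROM   v$latch
ORDER  BY sleeps DESC;
```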

Monday, 24 August 2009

Performance Tuning - Question 046

Which of these statements is true about latches?

A. While one process holds a latch on a memory structure, no other process can access or scan the memory structure.

B. All latches are in one of two modes: scan or lock.

C. Scan latches are used on the LRU lists in the buffer cache.

D. One process can scan a memory structure while another process holds a latch on the structure.



Explanation:

Answer A: Latches, with only a few exceptions, are exclusive. The structure of a latch is simple; it does not support queueing or sharing. The idea is that a latch will only be held briefly. Scanning a memory structure requires a latch, because otherwise the memory structure could be changed by another process, invalidating the scan.

Sunday, 23 August 2009

Performance Tuning - Question 045

Which of the following will reduce the number of disk sorts?

A. Increasing SORT_AREA_SIZE.

B. Increasing the SORT_MEMORY_ALLOCATED parameter.

C. Decreasing SORT_AREA_RETAINED_SIZE.

D. Altering SORT_AREA_SIZE dynamically, based on available memory within SGA_MAX_SIZE.



Explanation:

Answer A: The value of SORT_AREA_SIZE determines the amount of memory that can be used for sorting operations; a lower value increases the likelihood that sorting will overflow the area and require disk extents. Increasing the size of the sort area will reduce free memory available to the remaining processes on the database server, but will also reduce the likelihood that sort operations will require disk space allocated from the temporary tablespace. SORT_AREA_SIZE, if too large, can contribute to paging and swapping.
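A sketch of how you might measure the effect (the 1MB value is illustrative, not a recommendation):

```sql
-- Compare memory sorts to disk sorts before and after the change.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');

-- Raise the per-sort memory allowance for this session.
ALTER SESSION SET SORT_AREA_SIZE = 1048576;
```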

Saturday, 22 August 2009

Performance Tuning - Question 044

When a process needs to use the defined temporary tablespace for sorting operations, which of the following does it do?

A. It looks in the PGA sort extent pool (SEP) to determine if there are free extents in the system tablespace.

B. It looks in the SGA sort extent pool (SEP) to determine if there are free extents in the sort segment.

C. It looks in the PGA sort extent pool (SEP) to determine if there are free extents in the sort segment.



Explanation:

Answer B: A sort segment is created when the first sort operation uses the tablespace, and is dropped when the database is closed. The sort segment grows as needed, and is made up of extents that can be used by different sort operations. When a process needs sort space, it looks in the SGA sort extent pool (SEP) to determine if there are free extents in the sort segment.

Friday, 21 August 2009

Performance Tuning - Question 043

Which of the following SQL operations require sort operations? (Choose two.)

A. Creating a function-based index

B. Index creation on ordered data using the NOSORT option

C. Creating a B-Tree index on ordered data

D. ANALYZE with the DELETE STATISTICS option



Explanation:

Answer A and C: Creating a B-Tree index on sorted data will still cause a sort operation, because the NOSORT parameter was not specified. The sort is required to get the key values into the correct order in the B-Tree. Function-based indexes also require a sort, because the computed key values must be placed in the correct order in the B-Tree.

Thursday, 20 August 2009

Performance Tuning - Question 042

Which of the following would indicate a performance problem with the redo logs?

A. The EVENT log file parallel write in the V$SYSTEM_EVENT view

B. The EVENT redo log file parallel write in the V$SYSSTAT view

C. The EVENT log file parallel write in the V$SYSSTAT view

D. The EVENT redo log file parallel write in the V$SYSTEM view



Explanation:

Answer A: The wait event log file parallel write indicates that there may be an I/O performance issue. Each of the other answers uses an incorrect wait event, or an incorrectly specified dynamic performance view.

Wednesday, 19 August 2009

Performance Tuning - Question 041

Checkpoints occur too frequently, causing performance problems. Which of these are valid solutions? (Choose two.)

A. Decrease LOG_CHECKPOINT_TIMEOUT.

B. Increase LOG_CHECKPOINT_TIMEOUT.

C. Increase the size of the redo logs.

D. Decrease the size of the redo logs.



Explanation:

Answer B and C: Increasing the size of the redo logs will reduce the frequency of checkpoints caused by a log switch, called complete checkpoints. Increasing the LOG_CHECKPOINT_TIMEOUT will also reduce the number of incremental checkpoints.
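Both remedies can be sketched as follows (group number, path, sizes, and timeout value are illustrative):

```sql
-- Larger redo logs mean fewer log switches, hence fewer checkpoints.
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u03/oradata/or9i/redo04.log') SIZE 100M;

-- Force a checkpoint at most every 30 minutes (1800 seconds).
ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 1800;
```

In practice you would add enough new, larger groups and then drop the old small ones once they are inactive.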

Tuesday, 18 August 2009

Performance Tuning - Question 040.

Which of these events does not trigger a checkpoint?

A. A redo log file switch.

B. Issuing the ALTER SYSTEM SWITCH LOGFILE command.

C. When LOG_CHECKPOINT_TIMEOUT is reached.

D. When (LOG_CHECKPOINT_INTERVAL * size of LOG_BUFFER) bytes of data is written to the current redo log file.

E. Issuing the ALTER SYSTEM CHECKPOINT command.



Explanation:

Answer D: A checkpoint is triggered when there is a redo log file switch, when the ALTER SYSTEM CHECKPOINT or ALTER SYSTEM SWITCH LOGFILE command is entered, or when LOG_CHECKPOINT_TIMEOUT is reached. The remaining checkpoint trigger is when the bytes of data written to the current redo log file are equal to the product of LOG_CHECKPOINT_INTERVAL and the OS block size.

Monday, 17 August 2009

Performance Tuning - Question 039

Which of these views would be used to diagnose tablespace performance problems?

A. V$FILE_IO

B. V$FILESTAT

C. V$TABLESPACE

D. V$TABLESPACE_USAGE

E. DBA_TABLESPACES



Explanation:

Answer B: The V$FILESTAT view contains accumulated data about the number of physical reads and writes, blocks read and written, and the total read and write time. These statistics can be used to determine hot spots (disks that are much more active than others) and to evaluate the average read and write time for the files in a tablespace. The information can then be used to determine if files need to be moved to different disk drives.
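A sketch of the kind of per-file I/O query this describes, joining to V$DATAFILE for the file names:

```sql
-- I/O activity per datafile; consistently high values on files
-- sharing a device may indicate a hot disk.
SELECT d.name, f.phyrds, f.phywrts, f.readtim, f.writetim
FROM   v$filestat f, v$datafile d
WHERE  f.file# = d.file#
ORDER  BY f.readtim DESC;
```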

Sunday, 16 August 2009

Performance Tuning - Question 038

Which of these is an accurate implementation of manual striping?

A. CREATE TABLESPACE APP_DATA DATAFILE 'D:\ORACLE\ORADATA\OR9I\APP_DATA01.DBF' SIZE 10M,
'E:\ORACLE\ORADATA\OR9I\APP_DATA02.DBF' SIZE 10M,
'F:\ORACLE\ORADATA\OR9I\APP_DATA03.DBF' SIZE 10M,
'G:\ORACLE\ORADATA\OR9I\APP_DATA04.DBF' SIZE 10M;

B. CREATE TABLESPACE APP_DATA DATAFILE 'E:\ORACLE\ORADATA\OR9I\APP_DATA01.DBF' SIZE 10M,
'E:\ORACLE\ORADATA\OR9I\APP_DATA02.DBF' SIZE 10M,
'E:\ORACLE\ORADATA\OR9I\APP_DATA03.DBF' SIZE 10M,
'E:\ORACLE\ORADATA\OR9I\APP_DATA04.DBF' SIZE 10M;

C. CREATE TABLESPACE APP_DATA DATAFILE 'D:\ORACLE\ORADATA\OR9I\APP_DATA01.DBF' SIZE 10M,
'E:\ORACLE\ORADATA\OR9I\APP_DATA02.DBF' SIZE 10M,
'D:\ORACLE\ORADATA\OR9I\APP_DATA03.DBF' SIZE 10M,
'E:\ORACLE\ORADATA\OR9I\APP_DATA04.DBF' SIZE 10M
USING MANUAL STRIPING;



Explanation:

Answer A: The CREATE TABLESPACE command in answer A creates datafiles of equal size on four separate disk devices (D:, E:, F:, and G:). This is a good example of manual striping. Answer B is incorrect because all four datafiles are created on the same device (E:). Answer C is incorrect because the "MANUAL STRIPING" clause is invalid.

Saturday, 15 August 2009

Performance Tuning - Question 037

Which of the following is true about multiple DBWn processes? (Choose three.)

A. Configured by setting the DB_WRITER_PROCESSES initialization parameter to a value greater than 1 and less than or equal to 10.

B. The processes are named DBW0 through DBW9.

C. They can improve dirty buffer write performance on multiprocessor systems.

D. The processes are named DBW1 through DBW10.

E. The number of processes is dynamically configurable by the DBA.



Explanation:

Answer A, B and C: The DB_WRITER_PROCESSES parameter, with a default of 1 and valid values of 1 through 10, sets the number of DBWn processes. If it is 1, only the DBW0 process is used. The valid process names are DBW0 through DBW9. Multiple DBWn processes can reduce write contention caused by an overworked DBW0 process. The number of processes is set in the init.ora file and is not dynamically configurable or automatically adjusted.

Friday, 14 August 2009

Performance Tuning - Question 036

What does the result of this query indicate? SELECT * FROM V$SGASTAT WHERE POOL LIKE '%java%'

A . The number of Java session memory structures allocated

B. The amount of Java pool free memory and memory in use

C . The size of each Java session memory structure



Explanation:

Answer B: V$SGASTAT has two entries for the Java pool: 'free memory' and 'memory in use'. The sum of these two values is equal to the java_pool_size.

Thursday, 13 August 2009

Performance Tuning - Question 035

Which events trigger the log writer process, LGWR, to begin writing from the redo log buffer to the online redo log file or to members of the active redo log group? (Choose three.)

A. A server process performs a COMMIT or ROLLBACK.

B. The redo log file becomes one-third full.

C. A DBWn process tells LGWR to write.

D. The log buffer becomes one-third full.

E. The server process tells ARCH to write.



Explanation:

Answer A, C and D: By definition, these are the events that trigger LGWR to write all the log buffer entries to the redo log file or files. The server processes do not tell ARCH to write to the redo logs; ARCH is responsible for writing to the archived redo logs.

Wednesday, 12 August 2009

Performance Tuning - Question 034

Contention can occur on the cache buffer LRU chain latch on very busy systems because?

A. A latch must be obtained only when adding a buffer to one of the LRU lists.

B. A latch must be obtained when moving, adding, or removing a buffer from one of the LRU lists.

C. A latch must be obtained only when removing a buffer from one of the LRU lists.

D. A latch must be obtained only when moving or removing a buffer from one of the LRU lists.



Explanation:

Answer B: Any activity that affects the status of a block of data in the DB buffer cache requires obtaining an LRU latch. The cache buffer LRU chain latch is responsible for protecting the LRU list.

Tuesday, 11 August 2009

Performance Tuning - Question 033

To support tablespaces that use a block size other than the default, the DBA can configure separate buffer caches. Which of the following accurately describes configuring these separate buffer caches?

A. Set the init.ora DB_nK_CACHE_SIZE parameters to the required cache size, where n is some block size in the series 2, 4, 8, 16, and 32, including the default block size.

B. The init.ora parameter DB_CACHE_BLOCK_SIZE can be set to any one of the values 2, 4, 8, 16, or 32.

C. Set the init.ora DB_nK_CACHE_SIZE parameters to the required cache size, where n is some block size in the series 2, 4, 8, 16, and 32, excluding the default block size.

D. Set the init.ora parameters DB_nK_CACHE_SIZE to the required cache size, where n is some block size in the range of 2 to 32, excluding the default block size.

E. Set the init.ora parameters DB_nK_CACHE_SIZE to the required cache size, where n is some block size in the range of 1 to 32, including the default block size.



Explanation:

Answer C: The DBA can create separate buffer caches, based on the non-default block sizes. Acceptable block size values are 2, 4, 8, 16, and 32, excluding the value for DB_BLOCK_SIZE, which is the default database block size and the block size for the system tablespace.

Monday, 10 August 2009

Performance Tuning - Question 032

You want to allocate the memory structures in the shared pool so that you can begin collecting cache advice information at a later time. What is the correct setting for the DB_CACHE_ADVICE parameter?

A. OFF

B. READY

C. FALSE

D. ALLOCATE



Explanation:

Answer B: Setting DB_CACHE_ADVICE to READY allocates the memory structures in the shared pool to collect the statistics needed to populate the V$DB_CACHE_ADVICE view. The memory structures, and the view, are not populated with data until the DB_CACHE_ADVICE is set to ON, which signals the collection of buffer cache advice statistics. Setting the parameter to OFF stops collecting data and deallocates the shared pool memory structures.
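The workflow in the explanation above can be sketched as:

```sql
-- Allocate the advisory memory structures now; collect data later.
ALTER SYSTEM SET DB_CACHE_ADVICE = READY;

-- Later, start collecting statistics...
ALTER SYSTEM SET DB_CACHE_ADVICE = ON;

-- ...then review the projected physical reads at candidate sizes.
SELECT size_for_estimate, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT';
```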

Sunday, 9 August 2009

Performance Tuning - Question 031

The default buffer cache can be expanded dynamically if needed, but only in granule-size increments of? (Choose two.)

A. 2MB if the SGA is less than 128MB

B. 4MB if the SGA is less than 128MB

C. 8MB if the SGA is larger than 128MB

D. 8MB if the SGA is less than 128MB

E. 16MB if the SGA is larger than 128MB



Explanation:

Answer B and E: The granule is defined as a unit of allocation in the dynamic SGA. When an SGA component (DB buffer cache, shared pool) is expanded or shrunk, it must be in granule increments. The minimum number of granules allocated is 1 for each of the SGA structures (buffer cache, shared pool, and fixed SGA). The maximum number of granules is determined by SGA_MAX_SIZE, which is measured in bytes.

Saturday, 8 August 2009

Backup and Recovery Q011

What is management's role in the development and execution of your backup and recovery strategy?

a. To provide the necessary corporate resources and support for implementation.
b. To provide recommendations on how to minimise downtime.
c. To provide recommendations on how to minimise data loss.
d. To implement the strategy.



Explanation:

Answer A: Management is responsible for approving the appropriate resources and support for implementation. Answers B, C and D are DBA roles.

Performance Tuning - Question 030

Buffer cache performance is measured by selecting and comparing what information from which dynamic performance views?

A. Physical and logical reads from V$SYSSTAT

B. Logical and consistent reads from V$SYSSTAT

C. Physical and logical reads from V$BUFFER_POOL

D. Consistent gets and logical reads from V$BUFFER_POOL



Explanation:

Answer A: The formula is: 1 - ((physical reads - physical reads direct - physical reads direct (lob)) / session logical reads). Each of these values is obtained from the VALUE column of V$SYSSTAT. We are measuring the hit ratio as the ratio of logical to physical reads. Logical reads are reads of a buffer cache block. Physical reads, of course, result in an I/O operation against a disk device. The V$BUFFER_POOL view is legitimate, but consistent gets, logical reads, and physical reads are not accumulated in this view.
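The formula above can be expressed directly against V$SYSSTAT, as a sketch:

```sql
-- Buffer cache hit ratio, per the formula in the explanation.
SELECT 1 - ((phy.value - dir.value - lob.value) / log.value) AS hit_ratio
FROM   v$sysstat phy, v$sysstat dir, v$sysstat lob, v$sysstat log
WHERE  phy.name = 'physical reads'
AND    dir.name = 'physical reads direct'
AND    lob.name = 'physical reads direct (lob)'
AND    log.name = 'session logical reads';
```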

Friday, 7 August 2009

Performance Tuning - Question 029

When LGWR signals a checkpoint, what happens in the buffer cache?

A. The server process copies dirty data blocks from the LRU list to the write list, and DBWn writes the data blocks to datafiles.

B. The reading, batching, and writing of dirty data blocks are parallelized.

C. DBWn copies dirty data blocks from the LRU list to the write list and writes the data blocks to datafiles.

D. LGWR copies dirty data blocks from the LRU list to the write list, and the server process writes the data blocks to datafiles.



Explanation:

Answer C: One of the primary functions of the checkpoint is to synchronize dirty blocks in memory with blocks on disk. LGWR signals the checkpoint, and DBWn copies the dirty blocks from the LRU list to the write list and then writes them to disk. The reading, batching, and writing of dirty blocks is parallelized only when multiple DBWn processes are used. Answers A and D incorrectly assign these responsibilities to the server process and to LGWR.

Thursday, 6 August 2009

Performance Tuning - Question 028

Which of these statements about processes and the buffer cache is accurate?

A. The ARCH process reads dirty data blocks from the buffer cache and writes them to the archive log destination.

B. The server processes are responsible for reading dirty data blocks from disk.

C. The LGWR process writes dirty data blocks to the redo log.

D. The server processes are responsible for reading blocks of data into the buffer cache.

E. The server processes are responsible for writing dirty data blocks back to disk.



Explanation:

Answer D: Server processes look for a data block in the buffer cache. If it is found, we have a hit. If not, the server process will scan for dirty blocks that can be moved out of the cache and be replaced by the block that needs to be read from disk. Once the free block is made available, the server process reads the required block from disk and places it in the free block in the buffer cache. DBWn writes blocks to disk; LGWR writes log buffer entries into the redo log files, and the ARCH process reads the redo log files and writes them to the archived redo log file destinations.

Wednesday, 5 August 2009

Performance Tuning - Question 027

Which of the following is true when CURSOR_SPACE_FOR_TIME=TRUE?

A. The cursor cannot be deallocated from memory while an application cursor that is associated with it is open.

B. The cursor will be deallocated from memory as soon as all application cursors are closed.

C. The cursor must be deallocated from memory as soon as an application cursor that is associated with it is open.

D. The cursor management algorithm will attempt to deallocate used cursors from memory while an application cursor that is associated with it is open, if the library cache hit ratio is less than the specified threshold.



Explanation:

Answer A: Setting CURSOR_SPACE_FOR_TIME=TRUE indicates that the cursor will not be deallocated from memory as long as the application cursor is open. Only set this to TRUE if you have virtually no library cache misses, that is, if the library cache hit ratio is almost 100 percent. Each of the other statements is an incorrect assessment of how the setting influences cursor retention.

Tuesday, 4 August 2009

Performance Tuning - Question 026

Which of these accurately describes how the UGA is configured?

A. In both shared server and dedicated server environments, the session stack space is kept in the SGA.

B. The sort areas and private SQL areas are included in the SGA in the dedicated server environment.

C. In both shared server and dedicated server environments, the session stack space is kept in the PGA.

D. The sort areas and private SQL areas are included in the PGA, regardless of the server configuration.



Explanation:

Answer C: By definition, the session stack space is kept in the PGA, whether the process memory is assigned to a shared server or a dedicated server. The sort areas and private SQL areas are stored in the PGA for dedicated servers, and in the SGA for shared servers.

Monday, 3 August 2009

Performance Tuning - Question 025

You query the V$SHARED_POOL_RESERVED dynamic performance view and notice that REQUEST_FAILURES is greater than zero and increasing. What should you do?

A. Reserved pool is too small; increase the SHARED_POOL_RESERVED_SIZE init.ora parameter.

B. No action is required as long as REQUEST_MISSES is not increasing.

C. Reserved pool is too small; increase the SHARED_POOL_RESERVED_SPACE init.ora parameter.

D. REQUEST_FAILURES will always be zero if the shared pool reserved space is configured.



Explanation:

Answer A: The REQUEST_FAILURES column indicates the number of times the reserved list did not have a free piece of memory to satisfy the request, and started flushing objects from the LRU list. To remedy this, increase the size of the shared pool reserved space by increasing the SHARED_POOL_RESERVED_SIZE parameter.
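A sketch of the check and remedy (the size shown is illustrative; in Oracle9i this parameter is set in init.ora and takes effect on restart):

```sql
-- Check for reserved-list allocation failures.
SELECT request_failures, request_misses
FROM   v$shared_pool_reserved;
```

If REQUEST_FAILURES is climbing, raise the reserved area in init.ora, for example: SHARED_POOL_RESERVED_SIZE = 10M.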

Sunday, 2 August 2009

Performance Tuning - Question 024

When is the best time to pin large, frequently used objects in the shared pool?

A. Immediately after startup.

B. On startup, by specifying the PIN_LARGE_OBJECTS parameter.

C. Immediately prior to their use.

D. On startup, by specifying the FAST_LOAD_OBJECTS parameter.

E. Immediately after shutdown.



Explanation:

Answer A: We can eliminate answers B, D, and E: we cannot pin objects when the database is down, and the two parameters mentioned are not valid init.ora parameters. That leaves pinning immediately after startup or immediately prior to use. If we choose to pin large objects immediately prior to their use, the first process that accesses the object takes a performance hit. To avoid this hit, and possible shared pool fragmentation, load large, frequently used objects immediately after startup.
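Pinning is done with the DBMS_SHARED_POOL package (which may need to be installed via dbmspool.sql); the object name below is illustrative:

```sql
-- In a startup script, pin a large, frequently used package
-- so it cannot be aged out of the shared pool.
EXECUTE DBMS_SHARED_POOL.KEEP('APP_OWNER.BIG_PACKAGE');
```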

Saturday, 1 August 2009

Performance Tuning - Question 023

Which of these are true statements about tuning the dictionary cache? (Choose two.)

A. The DBA should perform DDL during peak activity hours.

B. The algorithm that manages data in the shared pool prefers to keep dictionary cache data over library cache data.

C. The algorithm that manages data in the shared pool prefers to keep library cache data over dictionary cache data.

D. When using sequences, the CACHE option improves dictionary cache performance.



Explanation:

Answer B and D: It is important to remember that dictionary cache misses are more expensive than library cache misses, so the algorithm that manages the shared pool prefers to keep objects in the dictionary cache. If the library cache statistics look good, the library cache is sized appropriately, which implies that the dictionary cache is also sized appropriately. If you do not use the CACHE option on a sequence, each call to NEXTVAL causes a get on the dc_sequences dictionary cache item. Also, the DBA should perform DDL during off-peak hours, not peak hours.
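The sequence point above can be sketched as (sequence name and cache size are illustrative):

```sql
-- Pre-allocate 20 sequence numbers per dictionary-cache get,
-- rather than one get per NEXTVAL call.
CREATE SEQUENCE app_owner.order_seq CACHE 20;
```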