
Oracle Background Processes


The Oracle background processes are there to maximize performance. The idea is that common tasks which each user process would otherwise have to complete itself can be done by one Oracle background process with greater efficiency.

There are some core background processes and others which only run when certain optional features are configured in your database. In this post I will cover some of the core Oracle background processes and the performance issues and considerations around each of them within your database configuration.

Database Writer DBWn

The database writer, or writers (there can be up to 36 of them), writes modified (also known as dirty) data blocks from the buffer cache to the datafiles on disk. The buffer cache could be the default pool, the keep pool, the recycle pool or an nK block size buffer cache. It doesn’t matter; the database writer manages them all in the same way.
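As a quick illustration, here is a minimal sketch, assuming a hypothetical hot table called ORDERS, of how you could size the keep pool and direct a segment's blocks into it:

    -- Sketch only: the 256M size and the ORDERS table are hypothetical.
    -- Allocate memory to the keep pool.
    ALTER SYSTEM SET db_keep_cache_size = 256M;

    -- Ask Oracle to cache this table's blocks in the keep pool from now on.
    ALTER TABLE orders STORAGE (BUFFER_POOL KEEP);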

You can change the number of database writer processes with the DB_WRITER_PROCESSES initialisation parameter; if you don't set it, the database derives a default from the number of CPUs and processor groups. More is not necessarily better.
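For reference, a minimal sketch of checking and changing the parameter. It is static, so the change needs an spfile and an instance restart:

    -- Check the current number of database writers.
    SHOW PARAMETER db_writer_processes

    -- DB_WRITER_PROCESSES is static: set it in the spfile and restart.
    ALTER SYSTEM SET db_writer_processes = 4 SCOPE=SPFILE;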

The database will also write more heavily if you have a more aggressive FAST_START_MTTR_TARGET parameter specified. Also, if there is a lot of space pressure in the buffer cache (requests for free buffers), the DBWn processes have to make room by writing dirty (modified) blocks out to the datafiles.
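To see how your current FAST_START_MTTR_TARGET setting is playing out, a minimal sketch against V$INSTANCE_RECOVERY:

    -- Compare the target recovery time with Oracle's current estimate.
    -- CKPT_BLOCK_WRITES counts writes done purely to meet the target.
    SELECT target_mttr,
           estimated_mttr,
           ckpt_block_writes
    FROM   v$instance_recovery;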

Log Writer LGWR

The Log Writer Oracle background process is responsible for ensuring that REDO entries, which are records of every change made in the database, get written to the online REDO logs. This process is absolutely critical for ensuring that the database is able to recover in the event of an instance failure.

The log writer process writes REDO records from the log buffer, which is relatively small (usually 2-32MB), to the online REDO logs in a sequential manner. This means that your REDO log files should be on separate disks from your datafiles and on storage configured for fast sequential write times.
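A minimal sketch for checking where your online REDO log members actually live, and how big they are:

    -- List every online REDO log member and its location on disk.
    SELECT l.group#, l.bytes/1024/1024 AS size_mb, f.member
    FROM   v$log l
           JOIN v$logfile f ON f.group# = l.group#
    ORDER  BY l.group#;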

If you see large log file sync wait events you should look at the following:

  • What is the response time in milliseconds? It’s available in an AWR report (or live from V$SYSTEM_EVENT, as in the sketch after this list)
  • Are there disk queues on the disks where your REDO logs are?
  • Is there any other form of contention on these disks, such as other database files sharing with them?
  • Are you committing too frequently in your code?
  • Do you have large and short transactions going through?
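As a starting point for the first question, a minimal sketch that pulls the average log file sync time straight from V$SYSTEM_EVENT:

    -- Average log file sync wait in milliseconds since instance startup.
    SELECT event,
           total_waits,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
    FROM   v$system_event
    WHERE  event = 'log file sync';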

You can only have a single log writer process, so this is definitely an area which can cause contention if it is not configured properly for the demands of your system.

If you can afford to lose some transactions then it might be acceptable to look at the BATCH and NOWAIT options of the COMMIT command. One example could be transactions which exist purely for logging purposes, where a COMMIT WRITE BATCH NOWAIT statement may be a reasonable trade-off.
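A minimal sketch, assuming a hypothetical APP_LOG table, of an asynchronous commit for low-value logging rows:

    -- Hypothetical logging table; the row is not worth waiting for LGWR.
    INSERT INTO app_log (logged_at, message)
    VALUES (SYSTIMESTAMP, 'batch step 3 complete');

    -- Buffer the redo and return immediately; a crash can lose this commit.
    COMMIT WRITE BATCH NOWAIT;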

Checkpoint CKPT

The idea behind the checkpoint process is for Oracle to be able to say “I can confirm that all changes up to point in time X have been written to disk”. There are several different types of checkpoint, as has been discussed on the Oracle forums.

One key thing to remember is that each time there is a log switch a checkpoint operation happens, which increases I/O, so your online REDO logs shouldn’t switch too frequently; aim for no more than one switch every 10-15 minutes.
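A minimal sketch for checking your switch rate, counting log switches per hour from V$LOG_HISTORY:

    -- Log switches per hour over the retained history.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY hour;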

Also, it’s worth noting that during a checkpoint operation all of the datafile headers have to be updated, so instead of creating thousands of small datafiles, create fewer, larger datafiles to avoid the unnecessary overhead of updating so many headers.
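You can see this header bookkeeping for yourself; a minimal sketch showing the checkpoint SCN recorded in every datafile header:

    -- Every datafile header carries the SCN of the last completed checkpoint.
    SELECT file#,
           checkpoint_change#,
           checkpoint_time
    FROM   v$datafile_header
    ORDER  BY file#;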

System Monitor SMON

This process is responsible for cleaning up the mess when your instance restarts after a crash (instance recovery). It also cleans up temporary segments that are no longer in use.

Process Monitor PMON

The Process Monitor Oracle background process cleans up failed user and server processes, releasing the locks on cached objects and the memory in the PGA allocated to that user. In general, it ensures that all dead/failed processes are not holding resources in the database.

Archiver ARCn

The archiver process is responsible for copying the online REDO logs once they are full, creating archived REDO logs. There can be many archiver processes. Usually you would only need one for local archiving, with additional ones if you are shipping logs. Log shipping might be because you have Data Guard configured or Change Data Capture (CDC).

The more archiver processes you have, the more CPU you will use. Also, remember that the archiver processes read from the online REDO logs, which sit on the drives you want very fast response times from, so as not to slow down database activity, as mentioned above. Hopefully you have a good SAN with a decent sized cache, so the data that has just been written to the online REDO logs is still available in cache when the archiver processes come to read it, saving them a physical disk read.

You can control the maximum number of archiver processes using the LOG_ARCHIVE_MAX_PROCESSES parameter in the database.
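This parameter is dynamic, so a minimal sketch of checking and adjusting it without a restart:

    -- Check the current maximum number of archiver processes.
    SHOW PARAMETER log_archive_max_processes

    -- Dynamic parameter: takes effect immediately.
    ALTER SYSTEM SET log_archive_max_processes = 4;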

That covers some of the core background processes, but not all of them. More information is available from the very good Oracle documentation online.

If you have any questions, please leave a message below or send me a message and I will do my best to help you.
