Normally when you run backups through RMAN you delete the archivelogs from the file system, because once they are backed up there is no point having them hanging around filling up your disk space.
In my Dataguard environment the primary databases get backed up, but the standby databases at the remote site don't, so I have to monitor the disk space at the remote site and delete the files manually when they become a problem.
Python is available in my environment, so I set about writing a script that would delete the logfiles that are no longer required, run on a daily basis. To make a connection to the Oracle database we need to download and install the cx_Oracle module so we can import it into the script.
```python
import cx_Oracle
import os

## ARC10816_794572399_1 : archivelog name format, yours may be different
## The location of the logs on disk
archive = "E:\\Oracle\\Oradata\\myDB\\archive"

## The Oracle database connection. I use OS authentication and connect as SYSDBA;
## don't put passwords in scripts if you can help it.
conn = cx_Oracle.connect('/@myDB', mode=cx_Oracle.SYSDBA)
curs = conn.cursor()
curs.arraysize = 50

## Execute a query to determine the last log applied to the database
curs.execute("select max(sequence#) from v$archived_log where applied = 'YES'")
maxlog = curs.fetchone()[0]

## For each file in the archive location, make sure it's a logfile, determine the
## sequence number and, if it's lower than the last log applied, delete it.
## You will have to adjust this if your file format is different from mine.
for dirpath, dnames, fnames in os.walk(archive):
    for f in fnames:
        if f[0:3] == "ARC":
            sequence = f[3:f.find("_")]
            if int(sequence) < maxlog:
                os.remove(os.path.join(dirpath, f))
```
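If you want to sanity-check the filename parsing before letting the script loose on real files, the slicing logic can be pulled out into a small helper. This is a hypothetical function name and just a sketch of the same parsing the script uses; it assumes the `ARC<sequence>_<resetlogs-id>_<thread>` format shown above, so adjust the slicing if yours differs.

```python
def log_sequence(filename):
    """Return the sequence number embedded in an archivelog filename,
    or None if the file doesn't look like an archivelog.
    Assumes names like ARC10816_794572399_1."""
    if not filename.startswith("ARC"):
        return None
    ## Everything between "ARC" and the first underscore is the sequence number
    return int(filename[3:filename.find("_")])

print(log_sequence("ARC10816_794572399_1"))  # -> 10816
print(log_sequence("sqlnet.log"))            # -> None
```

Running a few known filenames through it is a cheap way to confirm nothing unexpected will get deleted.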
Once the script has been tested you can schedule it to run every night. If you are using OS authentication, make sure the user that runs the scheduled task has the correct permissions to connect to the database.
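Since my archive location is on a Windows drive, scheduling would be done with schtasks. The task name, script path, time, and account below are only examples of what such a command might look like, not values from my environment:

```shell
## Hypothetical example: register the cleanup script as a nightly task.
## Adjust the name, path, time and account for your environment; the account
## must be able to connect / as sysdba for OS authentication to work.
schtasks /create /tn "PurgeAppliedArchivelogs" ^
    /tr "python E:\Oracle\scripts\purge_archivelogs.py" ^
    /sc daily /st 02:00 /ru MYDOMAIN\oraclesvc
```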