Sarah Ansell

7 Day Rolling Artifact Snapshot Backups in the Cloud (Groovy Script)

Updated: Nov 8, 2023

Each Oracle Planning Cloud environment has a Daily Maintenance window in which Oracle performs routine maintenance. During this automated process, the system creates a system backup called 'Artifact Snapshot'.


This is very handy for administrators, as there is always an option to restore an application to its state as of the previous maintenance snapshot. However, if an issue has taken more than 24 hours to be spotted, an administrator may wish to roll the system back further, or load the backup onto a test environment to unpick the issue.


In this blog I demonstrate how a business rule can be used to retain a rolling seven days' worth of these Artifact Snapshots in the Planning 'Migration' area, keeping them in the cloud for ease of access and quick restoration of artifacts.


Screenshot from Migration - Naming convention can be adjusted.

Step 1: Create a Connection (or use existing Connection)

This method works by using Groovy and the REST API to rename the Artifact Snapshot, so we need to call an external Web Services Connection within the business rule.

Create the connection as follows, or if you already have one that can be reused, jump to Step 2.

  • Navigate to Connections

  • Create an Other Web Service Provider Connection

  • Enter a Connection Name and Description

  • Enter the URL

  • Enter User and Password (in some cases, e.g. Classic environments rather than OCI Gen 2, you may need the format User = IdentityDomainName.UserName)

  • Save and Close

Screenshot of example 'Other Web Service Provider' Connection settings.
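
If you want to confirm the new connection works before building the full rule, a quick test from within a Groovy rule might look like the sketch below (assuming the connection was saved with the name 'Local_EPM', which is the name used throughout the script in Step 2):

// Quick connection check (a sketch) - retrieves the connection saved in Step 1 //
// and calls the base REST endpoint, printing the HTTP status to the Jobs log.  //
Connection conn = operation.application.getConnection("Local_EPM")
HttpResponse<String> ping = conn.get("/interop/rest/")
	.header("Content-Type", "application/json")
	.asString();
println "Connection test returned HTTP status ${ping.status}"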

Step 2: Create the Business Rule

  • Navigate to Calculation Manager / Rules

  • Create a new rule

  • Convert the rule to a Groovy Script

Below are the key sections of my script.


Set your global variables:

// Global - Get Connection and App Name //
Connection conn = operation.application.getConnection("Local_EPM")
Application application = operation.getApplication()
String AppName = application.getName()

I usually distinguish Test vs Prod environments when naming a backup, so here is how you can automatically check whether the connection points at an Oracle test environment or a production environment. Make sure to include the .toLowerCase() call, because EPM sometimes throws out some wacky combinations of uppercase and lowercase in the URL output.

// Decide if TEST or PROD Application //
import groovy.json.JsonSlurper

HttpResponse<String> jsonResponseR1 = conn.get("/interop/rest/")
	.header("Content-Type", "application/json")
	.asString();
if(!(200..299).contains(jsonResponseR1.status)){
	throwVetoException("Error occurred: $jsonResponseR1.statusText")
}

def object = new JsonSlurper().parseText(jsonResponseR1.body) as Map
//println "$object.links"
if("$object.links".toLowerCase().contains('-test')){
	AppName = AppName + '- Test'
}
println "Application is $AppName"

Get the date for renaming, then print to the Jobs log what the rename will be:

// Retrieve today's date. //
import java.text.SimpleDateFormat
def date = new Date()
def sdf = new SimpleDateFormat("yyyy-MM-dd")

// Configuring the new backup name. //
String SnapshotName = sdf.format(date) + " - " + AppName + " - Artifact Snapshot"
println "Renaming Artifact Snapshot to " + SnapshotName

Now the fun part - the actual rename.

Example: Renaming 'Artifact Snapshot' to '2023-07-25 - AppName - Artifact Snapshot'

// Submits the rename job - this sends the rename request (a PUT) to the server. //
HttpResponse<String> jsonResponseR2 = conn.put("/interop/rest/v2/snapshots/rename")
	.header("Content-Type", "application/json")
	.body(json(["snapshotName": "Artifact Snapshot", "newSnapshotName":SnapshotName]))
	.asString();

def object2 = new JsonSlurper().parseText(jsonResponseR2.body) as Map
println "Rename ${object2.status == 0 ? "successful" : "failed"}.\nDetails: $object2.details"

You could simply leave your script there and keep renaming the backups daily; Oracle will automatically delete backups that are older than 60 days, or when the storage reaches 150 GB.

However, I like to keep the folder nice and tidy and only retain 7 automatic backups. This leaves me room for my manual backups which I name differently to ensure they are retained for longer.

Below, I use the REST API and Groovy to:

  • retrieve a list of my backups,

  • create a new list of only those with the automatic naming convention,

  • then delete the automated backup if the date in the name is older than 7 days.

// Create a list of existing backup names //
HttpResponse<String> jsonResponseR3 = conn.get("/interop/rest/11.1.2.3.600/applicationsnapshots")
	.header("Content-Type", "application/json")
	.asString();

def snapshotList = (List) []
def jsonMap = (Map) new JsonSlurper().parseText(jsonResponseR3.body)
def tempList = ((List) jsonMap.get("items")).each{ item ->
	if( (((Map) (item)).get("type")) == "LCM"){
		snapshotList.add((((Map) (item)).get("name")))
	}
}
println "Automated Snapshots Available:"
def autoSnapshotList = (List) []
def pattern = /....-..-.. - $AppName - Artifact Snapshot\.zip/
def tempList2 = ((List) snapshotList).each{ entry -> 
	if(((String) entry) ==~ pattern) {
 		autoSnapshotList.add(entry)
 		println entry
	}
}

// Check dates, if backup older than deleteBeforeDate (today.minus(x)) then delete backup //
def deleteBeforeDate = date.minus(7)
def autoSnapshotDeleteList = (List) []
def tempList3 = ((List) autoSnapshotList).each{ entry -> 
	if(Date.parse("yyyy-MM-dd",((String)entry).substring(0,10)).before(deleteBeforeDate)) {
		//println sdf.format(Date.parse("yyyy-MM-dd",((String)entry).substring(0,10)))
		autoSnapshotDeleteList.add(entry)
	}
}

// Delete Old Auto Snapshots (Snapshot Name Must be Encoded) //
String snapshotName = ""
if(autoSnapshotDeleteList.size() > 0){
    println "Deleting Older Snapshots"
    def tempList4 = autoSnapshotDeleteList.each{ entry ->
        // Encode spaces in the snapshot name for the URL //
        snapshotName = entry.toString().replaceAll(" ","%20")
        HttpResponse<String> jsonResponseR4 = conn.delete("/interop/rest/11.1.2.3.600/applicationsnapshots/$snapshotName")
            .header("Content-Type", "application/json")
            .asString();
        object2 = new JsonSlurper().parseText(jsonResponseR4.body) as Map
        println "Delete ${object2.status == 0 ? "successful" : "failed"}.\nDetails: $object2.details"
    }
}

This segment of code could probably be tidier, but it gets the job done!
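
If you would like something more compact, here is a rough sketch of the same list-filter-delete logic using Groovy's findAll and collect. It is a sketch only (not tested against a live environment) and reuses conn, AppName, date and the JsonSlurper import from the sections above:

// Compact sketch of the same logic: list snapshots, keep LCM items whose names //
// match the automated convention and are older than 7 days, then delete them.  //
HttpResponse<String> listResponse = conn.get("/interop/rest/11.1.2.3.600/applicationsnapshots")
	.header("Content-Type", "application/json")
	.asString();

def items = (List) ((Map) new JsonSlurper().parseText(listResponse.body)).get("items")
def cutOff = date.minus(7)
def namePattern = /\d{4}-\d{2}-\d{2} - $AppName - Artifact Snapshot\.zip/

items.findAll{ ((Map) it).get("type") == "LCM" }
	.collect{ (String) ((Map) it).get("name") }
	.findAll{ it ==~ namePattern }
	.findAll{ Date.parse("yyyy-MM-dd", it.substring(0,10)).before(cutOff) }
	.each{ name ->
		HttpResponse<String> deleteResponse = conn.delete("/interop/rest/11.1.2.3.600/applicationsnapshots/" + name.replaceAll(" ","%20"))
			.header("Content-Type", "application/json")
			.asString();
		def result = new JsonSlurper().parseText(deleteResponse.body) as Map
		println "Delete of $name ${result.status == 0 ? "successful" : "failed"}.\nDetails: $result.details"
	}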


Step 3: Schedule the Business Rule

  • Navigate to Jobs

  • Schedule Job

Here is an example of the Log output when this rule is run in Jobs.

Log messages :
Application is AppName
Renaming Artifact Snapshot to 2023-07-25 - AppName - Artifact Snapshot
Rename successful.
Details: null
Automated Snapshots Available:
2023-07-18 - AppName - Artifact Snapshot.zip
2023-07-19 - AppName - Artifact Snapshot.zip
2023-07-20 - AppName - Artifact Snapshot.zip
2023-07-21 - AppName - Artifact Snapshot.zip
2023-07-22 - AppName - Artifact Snapshot.zip
2023-07-23 - AppName - Artifact Snapshot.zip
2023-07-24 - AppName - Artifact Snapshot.zip
2023-07-25 - AppName - Artifact Snapshot.zip
Deleting Older Snapshots
Delete successful.
Details: null

Please note: the 'Details' line would contain error status details if any of the processes were unsuccessful.
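
If you want the job itself to fail, rather than just log a message, when a rename or delete does not succeed, you could also veto the rule based on the parsed status, along the lines of this sketch:

// Optional: veto the rule if the REST response reports a failure, so the job //
// errors instead of completing. Uses the same parsed response map (object2)  //
// as the rename/delete sections above. //
if(object2.status != 0){
	throwVetoException("Snapshot operation failed: $object2.details")
}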

 

I hope this gives some peace of mind, knowing that backups are tidy, automated, and accessible.

Please comment if you have any improvements / suggestions / questions.

Enjoy!

