Groovy Lab – Groovy Rules Ideas – part 1

March 30, 2026

Introduction

Welcome back to the Groovy Lab. In previous blog posts, we covered Groovy fundamentals, managing dates, the Groovy engine upgrade and validator deep dive, ASO data clear techniques, and more. This time, we are taking it further with a curated collection of Groovy rule ideas that address real-world use cases in Oracle EPM Planning.

This is a two-part series. Part 1 focuses on orchestration and automation – the patterns you need when your Groovy rules are coordinating processes, calling APIs, running jobs, and waiting for results. Part 2 will cover data interaction and user-facing techniques such as rolling forecasts, reading form data, exporting to CSV, and more.

In this post, we will walk through six topics:

  1. Assign Values from Substitution Variables
  2. Run Business Rules / Rulesets via REST API
  3. Execute Essbase Calc Scripts from Groovy
  4. Run Data Management (DM) Load Rules
  5. Check Job Status and Wait for Completion
  6. Execute Data Maps Programmatically

Each section includes code snippets you can reference and adapt, along with key takeaways and tips. Let’s get started.

 

1. Assign Values from Substitution Variables

Why This Matters

Substitution variables are a core feature of EPM Planning. They are commonly used to control the current forecast year, scenario, version, and other dynamic values across the application. However, Groovy does not provide a built-in API to read substitution variables directly; instead, you need to use the Planning REST API.

This pattern is essential for any Groovy rule that needs to dynamically determine the current forecast period, active scenario, or any other application-level setting stored as a substitution variable.

How It Works

  • Establish a connection to the Planning REST API using a pre-configured Connection object.
  • Call the substitution variables REST endpoint for your application and plan type.
  • Parse the JSON response and extract the specific variable value you need.

Code Example

import groovy.json.JsonSlurper

/* Use Connection to access substitution variables */
Connection conFin = operation.application.getConnection("Connection")

/* Create variables to hold values */
String curFcstYear
String nextFcstYear

/* Call REST API to retrieve substitution variables */
HttpResponse<String> jsonResponse = conFin.get(
    "/rest/v3/applications/xxx/plantypes/OEP_FS/substitutionvariables").asString()

Map<String, Collection<?>> response =
    new JsonSlurper().parseText(jsonResponse.body) as Map

/* Find specific variable by name */
curFcstYear = response.items.find {
    it["name"] == 'OEP_FCSTStartYr' }["value"].toString()

/* Derive next year dynamically */
nextFcstYear = "FY" + (curFcstYear.substring(2, 4).toInteger() + 1).toString()

println "Start fcst year: " + curFcstYear

 

Key Takeaways

  • You must configure a Connection object in the application before referencing it in Groovy (Navigate > Administration > Manage > Connections). The connection name must match exactly.
  • The REST endpoint returns all substitution variables for the specified plan type. Use the find method to locate the one you need by name.
  • You can derive additional values (like next year) using string manipulation and integer arithmetic.
  • Always log key values with println for debugging in the job console.
💡 Tip

Replace the application name ("xxx") and plan type ("OEP_FS") with your own application and plan type values. These must match exactly what is configured in your environment.

2. Run Business Rules / Rulesets via REST API

Why This Matters

When building admin-level Groovy rules, you often need to orchestrate multiple business rules or rulesets as part of a larger automated process. Instead of relying on the job console, task manager, or EPM Automate, you can trigger rules and rulesets directly from within a Groovy script using the REST API.

This is particularly useful for multi-step processes: load data, run calculations, push to reporting, all driven from a single Groovy rule.

Code Example

/* Run a Ruleset via REST API */
try {
    HttpResponse<String> jsonResponse1 = conFin.post(
        "/rest/v3/applications/xxx/jobs")
        .header("Content-Type", "application/json")
        .body("""
        {
            "jobType": "Ruleset",
            "jobName": "Ruleset Name"
        }
        """).asString()

    println "Response: " + jsonResponse1.body
} catch (Exception e) {
    throwVetoException("Error: ${e}")
}

 

Key Takeaways

  • The jobType can be "Ruleset" to run a ruleset, or "Rules" to run an individual business rule.
  • The jobName must match the exact name as defined in the application – it is case-sensitive.
  • Always wrap REST calls in a try-catch block for graceful error handling.
  • The response body contains the jobId and status. Use these to poll for completion (covered in Topic 5 below).

 

⚠ Note

When running a rule or ruleset via REST API, it launches as an asynchronous job. If your downstream logic depends on the result, you must poll for job completion before proceeding. See Topic 5 for the polling pattern.
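For an individual business rule, the same jobs endpoint applies with jobType "Rules". A minimal sketch, reusing the conFin connection from Topic 1 – the rule name and the runtime-prompt names in parameters below are hypothetical placeholders, not values from any real application:

```groovy
/* Sketch: run a single business rule with runtime prompts.
   "Calc Revenue" and the prompt names/values are hypothetical. */
HttpResponse<String> ruleResponse = conFin.post(
    "/rest/v3/applications/xxx/jobs")
    .header("Content-Type", "application/json")
    .body("""
    {
        "jobType": "Rules",
        "jobName": "Calc Revenue",
        "parameters": {
            "Scenario": "OEP_Forecast",
            "Year": "FY26"
        }
    }
    """).asString()

println "Response: " + ruleResponse.body
```

As with rulesets, the response body carries the jobId you will poll in Topic 5.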

 

 

3. Execute Essbase Calc Scripts from Groovy

Why This Matters

While Groovy is powerful for dynamic logic, there are scenarios where traditional Essbase calc scripts remain the right tool – particularly for dense-dimension calculations, complex FIX statements, or legacy logic that is already well-tested and validated.

Groovy’s StringBuilder allows you to dynamically construct Essbase calc scripts at runtime. This gives you the best of both worlds: Groovy’s flexibility for conditional logic and variable interpolation, combined with Essbase’s optimized calculation engine for the heavy lifting.

Code Example

StringBuilder scriptBldr = StringBuilder.newInstance()
scriptBldr << """

set aggmissg on;
set updatecalc off;

/* Enables parallel calculation */
/*SET CALCPARALLEL 4;*/

Fix (
    /* Add your FIX criteria here */
)
endfix;

"""

/* Log the script for debugging */
println scriptBldr

/* Execute the calc script against a BSO cube */
Cube cube = operation.application.getCube("FIN")
cube.executeCalcScript(scriptBldr.toString())

/* Log execution details */
String curruser = operation.user.getFullName()
String curruserID = operation.user.getName()
println("Script was executed by: $curruser ($curruserID)")

println("Process ended at " +
    (new Date()).format("yyyy-MM-dd,HH:mm:ss",
    TimeZone.getTimeZone("PST")) + " PST")

 

Key Takeaways

  • Use StringBuilder to dynamically construct calc scripts with variable substitution. This is extremely useful when FIX criteria change based on user input or substitution variables.
  • Always print the generated script to the job log before executing. This makes debugging significantly easier when things go wrong.
  • The getCube method retrieves a reference to the BSO cube – the cube name must match exactly.
  • Logging the user name and timestamp provides an audit trail, which is valuable for admin and compliance purposes.
💡 Tip

You can inject Groovy variables directly into the calc script string using Groovy string interpolation (${variable}). This is powerful for building dynamic FIX statements based on RTPs or substitution variables.
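As a minimal sketch of that interpolation pattern – assuming currScenario and curFcstYear were populated earlier in the rule (for example via the substitution-variable lookup in Topic 1):

```groovy
/* Sketch: FIX members injected via Groovy string interpolation.
   currScenario and curFcstYear are assumed to be set earlier. */
StringBuilder scriptBldr = StringBuilder.newInstance()
scriptBldr << """
Fix ("${currScenario}", "${curFcstYear}", "OEP_Working")
    /* calc logic here */
endfix;
"""

/* Verify the generated members in the job log before executing */
println scriptBldr
operation.application.getCube("FIN").executeCalcScript(scriptBldr.toString())
```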

 

4. Run Data Management (DM) Load Rules

Why This Matters

Data Management (formerly FDMEE) load rules are the standard mechanism for importing data from external sources into Planning. Triggering these loads from Groovy enables end-to-end automated workflows – load data, run calculations, push to reporting – all orchestrated from a single Groovy business rule.

This pattern is especially valuable for rolling forecast processes, daily data refreshes, and any scenario where manual intervention should be minimized.

Code Example

/* Execute a DM data load rule.
   The "/jobs" path assumes conDM's base URL points at the
   Data Management REST resource (e.g. .../aif/rest/V1). */
HttpResponse<String> jsonResponseDM = conDM.post("/jobs")
    .header("Content-Type", "application/json")
    .body("""
    {"jobType": "DATARULE",
     "jobName": "${currScenario} REV",
     "startPeriod": "${curStartPeriod}",
     "endPeriod": "${curEndFcstPeriod}",
     "importMode": "REPLACE",
     "exportMode": "STORE_DATA"}
    """).asString()

/* Wait for the DM job to complete */
boolean success = awaitCompletion(jsonResponseDM, "DM", "Push data")

 

awaitCompletion – Polling Function

def awaitCompletion(HttpResponse<String> resp,
        String connectionName, String opName) {
    final int IN_PROGRESS = -1
    if (!(200..299).contains(resp.status))
        throwVetoException("Error: $resp.statusText")

    ReadContext ctx = JsonPath.parse(resp.body)
    int status = ctx.read('$.status')

    /* Exponential backoff: 50ms doubling up to a 1-second cap */
    for (long delay = 50; status == IN_PROGRESS;
            delay = Math.min(1000, delay * 2)) {
        sleep(delay)
        status = getJobStatus(connectionName,
            ctx.read('$.jobId').toString())
    }

    println("$opName ${status == 0 ? 'successful' : 'failed'}.")
    return status == 0
}

int getJobStatus(String connName, String jobId) {
    HttpResponse<String> pingResponse =
        operation.application.getConnection(connName)
            .get("/" + jobId).asString()
    return JsonPath.parse(pingResponse.body).read('$.status')
}

 

Key Takeaways

  • The conDM connection must be configured to point to your Data Management environment.
  • importMode controls data loading behavior: REPLACE removes existing data first; MERGE adds to existing data.
  • The awaitCompletion function uses exponential backoff (starting at 50ms, capping at 1 second), which is a best practice to avoid overloading the server with rapid polling.
  • Always check the return status before proceeding to the next step in your workflow.

 

💡 Tip

Use Groovy string interpolation to inject dynamic period values (${curStartPeriod}, ${curEndFcstPeriod}) into the DM request body. Combine this with substitution variable retrieval from Topic 1 for a fully dynamic data load process.

 

5. Check Job Status and Wait for Completion

Why This Matters

When you launch jobs via the REST API, whether business rules, rulesets, DM loads, or other operations, they run asynchronously. Your Groovy script needs to wait for the job to finish and check whether it succeeded or failed before moving on to the next step.

This topic covers a robust job-status polling pattern with error handling and informative logging.

Code Example

import org.json.JSONObject

/* Parse the initial response to get jobId */
def json = new JSONObject(jsonResponse.body)
String jobId = json.getInt("jobId").toString()

/* Poll until job completes */
int currentStatus = json.getInt("status")
while (currentStatus == -1) {
    sleep(1000)  // 1-second interval
    jsonResponse = conFin.get(
        "/rest/v3/applications/xxx/jobs/${jobId}")
        .header("Content-Type", "application/json")
        .asString()
    json = new JSONObject(jsonResponse.body)
    currentStatus = json.getInt("status")
}

/* Check final status and handle results */
String msg = ""
if (currentStatus != 0) {
    msg = "Error returned with status ${currentStatus}. " +
        json.optString("details")
    println msg
    throwVetoException(msg)
}
else {
    msg = "Process completed successfully"
    println msg
}

println "Process ended at " + (new Date()).format(
    "yyyy-MM-dd,HH:mm:ss:SSS z",
    TimeZone.getTimeZone("PST"))

 

Understanding Job Status Codes

Status   Meaning       Action
  -1     In Progress   Continue polling
   0     Success       Proceed to next step
   1     Error         Log and throw exception

 

Key Takeaways

  • Always wrap the polling logic in a try-catch block to handle unexpected errors gracefully.
  • A status of -1 means the job is still in progress. A status of 0 means success. Any other value indicates an error.
  • Use throwVetoException to stop the Groovy rule and display an error to the user when a job fails.
  • Include timestamps and user details in your log output for auditing and troubleshooting.

 

⚠ Note

Consider adding a maximum retry count or timeout to your polling loop to prevent infinite loops in case the job never completes. A reasonable timeout might be 10–15 minutes depending on your process.
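A sketch of such a guard, layered onto the polling variables used above (same jobId, conFin, and currentStatus; the 15-minute limit is an arbitrary example, not a recommendation for every process):

```groovy
/* Sketch: polling loop with a hard timeout to avoid spinning forever. */
final long TIMEOUT_MS = 15 * 60 * 1000  // 15 minutes
long startTime = System.currentTimeMillis()

while (currentStatus == -1) {
    if (System.currentTimeMillis() - startTime > TIMEOUT_MS)
        throwVetoException("Job ${jobId} did not finish within 15 minutes.")
    sleep(1000)
    jsonResponse = conFin.get(
        "/rest/v3/applications/xxx/jobs/${jobId}").asString()
    currentStatus = new JSONObject(jsonResponse.body).getInt("status")
}
```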

 

6. Execute Data Maps Programmatically

Why This Matters

Data Maps in Planning allow you to push data between cubes – for example, from a forecast cube to a reporting cube. Running Data Maps from Groovy gives you programmatic control over when and how data is pushed, including the ability to pass dynamic member filters based on the current forecast period, scenario, or any other dimension.

This is a key building block for automated reporting workflows.

Code Example

/* Get application reference */
def app = operation.application

/* Dynamically expand time period members */
def parentTimePeriod = app.getDimension("Time Period", cube)
def sparentTimePeriod = parentTimePeriod
    .getEvaluatedMembers(
        "Lvl0Descendants(${sCurrTimePeriodP})" as String, cube)
    .collect { fixValues(it) }

/* Join as comma-separated list */
def expanded = sparentTimePeriod.join(", ")
def expandedFinal = expanded + ", No Time Period"

/* Execute Data Maps with dynamic filters */
if (app.hasDataMap("FCST to RPT"))
    app.getDataMap("FCST to RPT")
        .execute(["Years": curActYear], true)

if (app.hasDataMap("FCST to RPT Fcst curr Year"))
    app.getDataMap("FCST to RPT Fcst curr Year")
        .execute(["Years": curFcstYear,
                  "Scenario": currScenario], true)

 

Key Takeaways

  • Always use hasDataMap to verify the Data Map exists before executing. This prevents runtime errors if the map has been renamed or removed.
  • The execute method accepts a Map of dimension-to-member overrides. This lets you dynamically control which data slice is pushed.
  • The second parameter (true) runs the Data Map synchronously. Use false for asynchronous execution if you do not need to wait for the result.
  • You can expand member lists dynamically using getEvaluatedMembers with functions like Lvl0Descendants, useful for pushing all level-0 time periods.

 

💡 Tip

Combine Data Map execution with rolling forecast logic (covered in Part 2). After calculating the rolling forecast window, push only the relevant time periods to the reporting cube using dynamic filters.

 

Summary

In Part 1 of this series, we covered six essential Groovy patterns for orchestration and automation in Oracle EPM Planning:

  • Reading substitution variables via REST API to drive dynamic logic.
  • Launching business rules and rulesets programmatically from Groovy.
  • Building and executing Essbase calc scripts dynamically using StringBuilder.
  • Triggering Data Management load rules with configurable parameters.
  • Polling for job completion with proper error handling and status checking.
  • Running Data Maps with dynamic member filters for automated reporting workflows.

 

These patterns form the foundation for building sophisticated, automated EPM processes that go far beyond what traditional calc scripts can achieve.

 

Coming Up in Part 2

In Part 2, we will shift focus to data interaction and user-facing techniques:

  •   Rolling Forecast Logic – dynamically calculating forecast windows.
  •   Reading Information from Planning Forms – iterating through grid data.
  •   Writing Data to CSV Files – exporting data for downstream systems.
  •   Presetting User Variables Daily – automating user variable assignments.
  •   Running Reports and Bursting Definitions – triggering report generation.
  •   Sending Emails via Bursting Reports – automated notifications.

Stay tuned!

 

 
