Adventures in Groovy – Part 54: Collections Are Your Best Friend

I received a number of questions when I attended KScope. A lot of people had a tough time wrapping their heads around the fact that using Groovy could actually reduce cost and development time, and significantly decrease the effort to maintain an application. The feedback I get on my lightning lessons has been overwhelmingly positive. The idea that Groovy makes things simpler stems from understanding, using, and becoming confident with collections. I thought I would share some examples, use cases you all run into, and some samples to get you started. If you doubt Groovy can mean simpler and faster, I HOPE this will get you to at least think about coming over to the dark side and provide some perspective that will help.
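To give a flavor of what I mean, here is a tiny, generic Groovy sketch, not tied to any EPM object model, of the kind of one-liners that collections give you in place of loops and string building:

List<String> months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun']
List<String> q1 = months.findAll { it in ['Jan', 'Feb', 'Mar'] }   // filter a list in one line
String fixMembers = q1.collect { '"' + it + '"' }.join(', ')       // quote and join the survivors
println fixMembers                                                  // "Jan", "Feb", "Mar"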

I would like to create more of these and potentially make my training free. Before I can do that, I need everybody to watch what I post AND subscribe! If I can generate enough of that, I can create enough ad revenue to fund my effort and time. So subscribe, watch, and we will see what happens.

In the meantime, let me know what you think of having videos like this rather than traditional written content. If you would like to learn more about dynamic scripts for data maps, forms, and improving performance with Essbase scripts by removing serial mode and threading the right way, head over to in2hyperion.podia.com. I have more lightning lessons ready for you to take advantage of!




Improve the Planner Experience with Groovy (ODTUG)

Due to the overwhelmingly positive feedback from my last KScope presentation, I was asked to repeat the presentation for ODTUG. First of all, I can’t thank you all enough for the generous and kind feedback! If you would like to attend, I would love to have you there.



Event Information

You can register here. The event is on Wednesday, Dec 1, 2021, from 12:00 PM to 1:00 PM EST.

Presentation Summary

You have heard of Groovy, and you probably have heard of how drastically it can improve performance and functionality. The functionality talked about largely misses the focus of planning and forecasting, which is the user experience.

This session will walk through real-world examples implemented at organizations, focused specifically on improving the user experience. It will highlight how to improve data input and new functionality that can be added which focuses specifically on the planning process. The topics will include everything from improving data entry to identifying issues before they become issues, and giving users functionality that was previously available only to administrators. Allowing users to manage metadata effectively, performing multiple transfers at one time in workforce planning, and allowing users to fix data load errors on the fly will all be covered.

Attendees will leave this session with the knowledge and examples that will allow them to go back to their workplaces and use what they learned to improve their applications. They will leave with ideas, specific use cases, and the Groovy code to implement them. If you think the cloud limits your ability to be creative, think again!




Announcing the Essbase Ribbon, Part 2 – Power Adhoc Ribbon

Many of you used the Essbase ribbon I built 10+ years ago. I have been asked by many to build a similar ribbon for Smart View. Today I am introducing the Power Adhoc Ribbon. It has all the bells and whistles of the old ribbon completely re-engineered for Smart View. There is a nominal fee, but you can try it for free. Some of the awesome features are summarized below.

The Power Adhoc Ribbon has its own website where you can get updates, download the most recent version, and read more about the options and future enhancements. Head over to the Power Adhoc website for all the details.

  • Refreshing worksheets is the cornerstone of any adhoc reporting. We put refresh on steroids. You can refresh the worksheet or all the worksheets in the workbook, but we added even more. The ribbon has two other options that let you refresh with suppression on or off, without changing your Smart View settings.
  • We know you change your aliases all the time. Now a toggle sits on the ribbon so you can switch between no aliases and the default alias table on the fly, quickly and efficiently.
  • It doesn’t stop there. You can also toggle your formatting between Excel formatting and Cell Styles.
  • Now, all your most commonly edited worksheet settings are on the ribbon, including zoom, suppression, and retention. When you change worksheets, the ribbon automatically updates to reflect the current worksheet settings.
  • What do you do when you want to distribute those workbooks with Smart View functions? Zap will replace all the functions in your worksheet or workbook with the values they return and leave any Excel formulas intact.
  • How many times have you wanted to duplicate a worksheet and been frustrated because it doesn’t copy your settings and connection? That is now a thing of the past. The worksheet duplicator will copy any worksheet and include the connection and worksheet settings.
  • The Power Adhoc ribbon adds a complete template management vehicle. Commonly used worksheets can be saved in groupings and reused at a later time by opening them right from the ribbon. Group them by application, purpose, or however you want. Templates can be starter adhocs or standard reports. You manage your templates and how they are organized. Your templates can be renamed, reorganized, or removed at any time.
  • Power Options allows you to show/hide the options you want. You no longer need to see the options that aren’t relevant to you.
  • Because you maintain your subscription, we can easily let you know when updates are released.  As soon as any improvements are available, you will be the first to know.
  • Not everything on the ribbon will be new to you. To reduce the need to toggle between ribbons, the most commonly used options from Smart View are included.
  • To reduce the learning curve, every option has a power tooltip that explains its purpose.
  • By having a subscription, you get all the updates and you decide what enhancements are prioritized. If there is an enhancement you need, ask for it.
  • Many operations, like refreshing and submitting, can sometimes take longer than you expect. Just to make sure you can monitor those requests, we notify you how long it took by updating Excel Status Bar.



Adventures in Groovy – Part 52: And You Thought Essbase Only Stored Numbers

For my 20+ years of using Essbase, I was told, and had no reason not to believe, that it only stored numbers. We obviously have smart lists, and with Planning it appears we have text. If you aren’t familiar with how this works, the strings are stored in the Planning repository and the index is stored in Essbase. If you thought the same thing, you were as wrong as I was.



What is NaN

I have been learning and implementing Groovy solutions for 2-3 years now and came across something I not only have never seen, but didn’t think was possible. Java, and therefore Groovy, has a concept of NaN. NaN stands for Not A Number. NaN is the result of mathematical operations that don’t produce a real number. Taking the log or square root of a negative number, dividing by zero, and I am sure plenty of other formulas that I learned before I was 15 and long forgot, can result in what Java interprets as non-numeric values. The two that I have found are NaN and Infinity. For example, when the math is done with doubles, 0/0 results in NaN, while 4/0 results in Infinity.

NaN in Groovy

Prior to about 2 months ago, I accounted for these scenarios in my logic and never had an issue. Recently, in writing some basic math, like revenue / units, I didn’t account for the possibility that revenue or units could be zero. If you test this in a Groovy console with whole numbers, an error like the one below is thrown, and honestly, I assumed my logic in a business rule would produce the same divide-by-zero error when the denominator was a zero.

java.lang.ArithmeticException: Division by zero
at ConsoleScript2.run(ConsoleScript2:1)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

I thought, like in Essbase, 4/0 would result in a 0. I found out the hard way that is not the case! When the math is done with doubles, which is what cell data is, no error is thrown at all; the result is quietly stored as Infinity or NaN.
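If you want to see the difference for yourself, here is a minimal sketch you can paste into a Groovy console; the only assumption is that your business rule math is done with doubles, like the data property of a cell.

// Integer division by zero throws the ArithmeticException shown above
try {
    println(4 / 0)
} catch (ArithmeticException e) {
    println(e.message)      // Division by zero
}

// Double division by zero does not throw – it quietly produces Infinity or NaN
println(4 / 0d)             // Infinity
println(0 / 0d)             // NaN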

Types Of NaNs and Infinities

In my case, I didn’t care which subtype of NaN or Infinity the result was, just that it happened and I needed to account for it. These can be checked very simply.

double simpleSample = 4 / 0d   // dividing by a zero double yields Infinity rather than throwing an exception
if( simpleSample.isNaN() || simpleSample.isInfinite() ) {
  println 'ERROR'
}

Your situation might be different. If it is, these are the types I am aware of that you can check for:

// NaN variances
isNaN(nan)               // Double.NaN itself
isNaN(zero_div_zero)     // 0d / 0d
isNaN(sqrt_negative)     // Math.sqrt(-1)
isNaN(inf_minus_inf)     // Infinity - Infinity
isNaN(inf_times_zero)    // Infinity * 0
isNaN(quiet_nan1)        // quiet NaN bit patterns
isNaN(quiet_nan2)
isNaN(signaling_nan1)    // signaling NaN bit patterns
isNaN(signaling_nan2)
isNaN(nan_minus)         // -NaN
isNaN(log_negative)      // Math.log(-1)
isNaN(positive_inf)      // 1d / 0d  - this is Infinity, not NaN, so isNaN() returns false
isNaN(negative_inf)      // -1d / 0d - same; use isInfinite() for these
// Infinite variances
isInfinite(positive_inf)
isInfinite(negative_inf)

What Do You See In Planning and Essbase

So here is where I was really confused! Everything I thought I knew was wrong.

Surprise Number One

If either of these conditions occurs, the cell that was calculated in Groovy and stored in Planning/Essbase is actually stored differently. I can’t say for sure what happens on the back end, but when the data is exported, rather than a numeric value, it will export NaN. Yes, you will see something like 10,20,20,NaN,40….

Surprise Number Two

If either of these conditions occurs, the cell that was calculated in Groovy and stored in Planning/Essbase shows a number that makes no sense in a data form when opened in Smart View. A value of 65535 will be displayed. This value can be edited/changed. If it is the source of another member formula or calculation, it will also show a value of 65535.

Surprise Number Three

The same thing is NOT what you see in a data form opened in the UI. In the UI (web version), NaN or Infinity will actually be displayed in the affected cell. This would almost make sense if I hadn’t seen 65535 in Smart View.

Stop NaNs From Happening

There are probably a million ways to handle this. For what it is worth, I want to share how I handled it and why. First, I created a function in my calculation that accepted one parameter, the value being evaluated for NaN or Infinity. Inside it, I used a ternary operator to return 0 if the value was NaN or Infinity, and the value passed to the function if it was numeric. The reason I created a function was that I had more than 30 formulas that needed this check, and it was easier to write the code once.

double nanCheck(double input){ (input.isNaN() || input.isInfinite()) ? 0 : input }

// Use Case Example
DataCell rate
DataCell units
operation.grid.dataCellIterator({DataCell cell -> cell.edited}).each{cell->
    rate = cell
    if(rate.accountName == 'Small_Unit_Cost'){
        units = cell.crossDimCell('Small_Units')
        units.data = nanCheck(cell.crossDimCell('Revenue').data / rate.data)
    }
    else if(...)
        {...}
}

That’s A Wrap

It is really important to account for this, for obvious reasons. If you aren’t already testing for NaN and Infinity and there is any possibility of it occurring, save yourself some trouble and start doing it now. It is a pain to strip it out afterwards if it gets into a UAT situation, or even Production. One last thing. If you are looking at this and thinking it should really return #Missing, you surely can do that. There are a few changes that have to be made. First, the function can’t return a double. Since #Missing is a string, it needs to return a string. The second issue is that you can’t set data, which is a double, to a string. You have to use formattedValue instead. The changes would look something like this.

String nanCheck(double input){ (input.isNaN() || input.isInfinite()) ? '#Missing' : input }

// Use Case Example
DataCell rate
DataCell units
operation.grid.dataCellIterator({DataCell cell -> cell.edited}).each{cell->
    rate = cell
    if(rate.accountName == 'Small_Unit_Cost'){
        units = cell.crossDimCell('Small_Units')
        units.formattedValue = nanCheck(cell.crossDimCell('Revenue').data / rate.data)
    }
    else if(...)
        {...}
}



Upgrade Or Downgrade To Or From Hybrid In The Cloud

There are benefits to moving to Hybrid, but there are also some challenges.  The content of this post is not around the pros and cons, but the fact that you can upgrade your environment to use it.  If you find it isn’t for you, you can “downgrade” back to BSO.  The flexibility provides everybody the ability to try it.



Recreate Introduction

EPMAutomate comes with a command that can restore an environment to a clean slate.  I don’t think this is new to anybody that has used EPMAutomate or EPM Cloud Planning.  What might be a surprise is that it does more than just reset an environment so you can start over.  It can also:

  1. Change the type of Essbase database to Hybrid or a standard BSO.
  2. Temporarily convert a Planning, Enterprise Planning, Tax Reporting, or Financial Consolidation and Close environment to an Account Reconciliation, Oracle Enterprise Data Management Cloud, or Profitability and Cost Management environment.

Using Recreate

The usage of the Recreate command, with all its options, is as follows.

 epmautomate recreate [-f] [removeAll=true|false] [EssbaseChange=Upgrade|Downgrade] [TempServiceType=Service_type]
  • -f forces the re-create process to start without user confirmation. If you do not use the -f option, EPM Automate prompts you to confirm your action. Be careful using this option.  If you have a long day and aren’t focused, this can make the day a whole lot worse!
  • removeAll removes all of the existing snapshots, as well as the content of the inbox and outbox.  The default is false, meaning the snapshots and the content of the inbox and outbox are retained.
  • EssbaseChange upgrades or downgrades the current Essbase version in legacy Oracle Financial Consolidation and Close Cloud, Oracle Enterprise Planning and Budgeting Cloud or Planning and Budgeting Cloud Plus 1 environments.
  • TempServiceType temporarily converts an environment to a different service environment.

Changing Your Essbase Version

To change your environment to BSO from Hybrid:

epmautomate recreate EssbaseChange=downgrade

To change your environment to Hybrid from BSO:

epmautomate recreate EssbaseChange=upgrade

Trying A Different Service

There are some details that must be understood to use this option, and they depend on the version of the cloud service you have.  For subscriptions other than EPM Standard Cloud Service and EPM Enterprise Cloud Service, meaning PBCS and EPBCS, you can use this option to temporarily convert to

  • Account Reconciliation
  • Oracle Enterprise Data Management Cloud
  • Profitability and Cost Management environment

To use this option to convert your environment to something it wasn’t originally intended for:

epmautomate recreate -f removeAll=true TempServiceType=ARCS

To change your environment back to its original service:

epmautomate recreate

For EPM Standard Cloud Service and EPM Enterprise Cloud Service subscriptions, you can use this option to convert to any supported EPM Cloud service.  EPM Enterprise Cloud Service subscriptions use a common EPM Cloud platform. Initially, you can deploy any supported EPM Cloud business process. 

To switch from a deployed business process to another, you must re-create the environment to delete the current deployment and to bring it back to the original EPM Cloud platform. You then re-create it again as the new service type.

For example, if you created an Account Reconciliation business process but now want to create an Oracle Enterprise Data Management Cloud environment, you must run the re-create command twice.

First, reset the service.

epmautomate recreate -f removeAll=true

Second, change the service type.

epmautomate recreate -f TempServiceType=EDMCS

The acceptable service types, currently, are:

  • ARCS (Account Reconciliation)
  • EDMCS (Oracle Enterprise Data Management Cloud)
  • EPRCS (Narrative Reporting)
  • PCMCS (Profitability and Cost Management)

That’s A Wrap

It is great that Oracle allows us to do these things. We have a ton of flexibility, not normally afforded to us in the cloud, to test and use different core database types.  It also allows those using the old SKU to try the new services, or business processes, to see if they might be something worth purchasing.

If you want to give Hybrid a try, use your test environment and give it a spin.  If you want to get exposed to one of the other business processes, you now have the ability to see it without jumping through hoops.




Working With Planning Formula Expressions

Most of us know that there is a button in the calc rule editor that allows us to select a smart list and the smart list entry.  It will add something like [[smartlist name.smartlist entry]].  If this is new to you, what it does is reference the smart list entry and replace it with the numeric value that exists in Essbase.  The beauty of this is that it is dynamic, so if the smart list is changed in any way, you don’t have to go into your rules and change the index values for the smart list entries to match.  Guess what, there are more!



Well, don’t I feel like the F*@$& idiot, to pull a quote from A Few Good Men.

What Is A Planning Formula Expression

As described above, it is an expression that allows you to get valuable information dynamically about artifact properties in a Planning application.  The following formula expressions currently exist.

  • SmartLists
  • Dimensions
  • Planning User Variables
  • Periods
  • Scenarios
  • Cross-References
  • Workforce Cube Year to Date
  • Get ID for String

SmartLists

I already discussed the Smart List, but here is an example if this is new to you.  The calculation manager syntax is [[SLName.entryname]].

FIX (Mar, Actual, Working, FY15, P_000, "111") 
   "Product Channel" = [[Channel.Retail]] ;
ENDFIX

Which would return something like this.

FIX (Mar, Actual, Working, FY15, P_000, "111") 
   "Product Channel" = 2 ;
ENDFIX

Dimensions

The dimension expressions are not all that useful unless you are building calculations that might go across applications that have different names for the 6 required dimensions, plus currency.  Using the following dimension tags, the customized name will be returned when they are added to the dimension expression. The syntax used for this function is [[Dimension("DIM_NAME_ENTITY")]].

  • DIM_NAME_PERIOD
  • DIM_NAME_YEAR
  • DIM_NAME_ACCOUNT
  • DIM_NAME_ENTITY
  • DIM_NAME_SCENARIO
  • DIM_NAME_VERSION
  • DIM_NAME_CURRENCY

An example would look like this. This runs a CALC DIM on whatever your entity dimension is named.

CALC DIM([[Dimension("DIM_NAME_ENTITY")]]);

In this application, Entity is named Entity, so the above script returns:

CALC DIM ("Entity");

If the entity dimension was named Cost Center, it would return:

CALC DIM ("Cost Center");

Planning User Variables

Planning user variables return the user variable’s member.  This can be pretty useful if you have variables that are used to do things like fix on a user’s area of a hierarchy.  These can be gathered through run time prompts if they exist in the POV, but that isn’t always the case.  You might, for example, show the products under a user variable in the rows, in which case the value can’t be passed as an RTP without Groovy. The calculation manager syntax is [[PlanningFunctions.getUserVarValue("xyz")]].

An example where the user variable is used to run a calculation might look like this.

FIX (Feb, Actual, Working, E_000, @RELATIVE([[PlanningFunctions.getUserVarValue("Product View")]],0) )
   Revenue = Units * Cost;
ENDFIX

Period Functions

Period(periodName) returns the specified period. The options for this function are listed below, and the calculation manager syntax is [[Period("FIRST_QTR_PERIOD")]].

  • FIRST_QTR_PERIOD
  • SECOND_QTR_PERIOD
  • THIRD_QTR_PERIOD
  • FOURTH_QTR_PERIOD
  • FIRST_PERIOD
  • LAST_PERIOD

This example:

FIX ( Mar, Actual, Working, P_000, "6100", FY15 )
   "120" =[[Period("FIRST_QTR_PERIOD")]];  
ENDFIX

would return a script like this

FIX (Mar, Actual, Working, P_000, "6100", FY15) 
   "120" = "Mar";
ENDFIX

The NumberofPeriodsInYear returns the number of periods in the year and NumberofYears returns the number of years in the application. The calculation manager syntax for this is

  • [[NumberOfPeriodsInYear]]
  • [[NumberOfYears]]

The following example

FIX (Mar, Actual, Working, P_000, "6100", FY15)
   "120"=[[NumberOfPeriodsInYear]];
   "120"=[[NumberOfYears]]; 
ENDFIX

would produce this.

FIX (Mar, Actual, Working, P_000, "6100", FY15)
   "120"=12;
   "120"=9;
ENDFIX

Scenarios

This one is my favorite.  I have been using Groovy to get these, not knowing they existed.  These allow the reduction of if statements and improve performance.  We can get the open periods.  If the range spans many years, then my Groovy solution probably comes back into play, but for ranges within a single year, or even two, this offers great functionality.  The options available are

  • Start Year
  • End Year
  • Start Month
  • End Month

The calculation manager syntax is as follows.

  • [[getStartYear("ScenarioName")]]
  • [[getEndYear("ScenarioName")]]
  • [[getStartMonth("ScenarioName")]]
  • [[getEndMonth("ScenarioName")]]

A use case would look something like this. Assume {rtpScenario} is a run-time prompt variable of type member with a default value of "Actual":

FIX({rtpScenario}, [[getStartYear({rtpScenario})]]:[[getEndYear({rtpScenario})]],
   [[getStartMonth({rtpScenario})]]:[[getEndMonth({rtpScenario})]])
      FIX ( Working, P_000, "111")
        "5800" = 5500;
      ENDFIX
ENDFIX  

This would build out the following calculation

FIX ("Actual", "FY10" : "FY18", "Jan" : "Dec") 
    FIX (Working, P_000, "111") 
      "5800" = 5500;
    ENDFIX 
ENDFIX

If your open range consisted of two years, you could do something like this

FIX({rtpScenario},[[getStartYear({rtpScenario})]],[[getStartMonth({rtpScenario})]]:"Dec")
      FIX ( Working, P_000, "111")
         "5800" = 5500;
      ENDFIX
ENDFIX
FIX({rtpScenario},[[getEndYear({rtpScenario})]],"Jan":[[getEndMonth({rtpScenario})]])
      FIX ( Working, P_000, "111")
        "5800" = 5500;
      ENDFIX
ENDFIX

Cross-References

This function comes in a few flavors but does something pretty awesome. How it works might change your naming convention a little, or at least make it more consistent. What it does is generate a cross-dimensional reference to your default members, like No Product. The syntax is CrossRef(accountName, prefix, true), but the last two parameters are optional. If you use CrossRef("Revenue"), it would produce the following, assuming your 6 required dimensions and a product dimension.

"BegBalance"->"No Scenario"->"No Version"->"No Entity"->"No Product"->"Revneue";

I can change my prefix by adding the second parameter. I don’t like having spaces in my member names, so I would do the above with CrossRef(“Revenue”,”No_”) which would produce

"BegBalance"->"No_Scenario"->"No_Version"->"No_Entity"->"No_Product"->"Revneue";

If I change my syntax to CrossRef("Revenue","No_",true), I get a cross dim operator for all dimensions except Period (which uses BegBalance) and Currency, but it now includes Year.

"BegBalance"->"No_Year"->"No_Scenario"->"No_Version"->"No_Entity"->"No_Product"->"Revneue"; 

Used in a FIX statement, the following example

FIX (Aug, Actual, Working, FY15, P_000, "112")
   "111" = [[CrossRef("5800", "No_", true)]];
ENDFIX

would produce the following script.

FIX (Aug, Actual, Working, FY15, P_000, "112") 
   "111" = "BegBalance"->"No_Year"->"No_Scenario"->"No_Version"->"No_Entity"->"No_Product"->"5800";
ENDFIX

Workforce Cube Year to Date

If you use Workforce, you have probably noticed the members it generates to get the month index for both the calendar and fiscal periods. These can be used with this function to build a calendar-to-date value. The syntax is [[CYTD(memberName)]]. If you have renamed "Cal TP-Index" and "Fiscal TPIndex", then you have to specifically name the members in two additional parameters, and the syntax is [[CYTD(memberName, calTpIndexName, fiscalTPIndexName)]]. This method is really easy to use and looks like this.

Fix (NOV, Actual, Working, FY15, P_000, "112")
   "5800" = [[CYTD("6100")]];
ENDFIX

If the default names are changed, it would look a little different.

Fix (Dec, Actual, Working, FY15, P_000, "112")
   "5800" = [[CYTD("6100", "Cal TP-Index", "Fiscal TPIndex")]];
ENDFIX

Get ID for String

This doesn’t solve all the problems around Smart Lists and text accounts, but it is a step in the right direction. If you don’t know, both of these are held in the Planning repository. Essbase ONLY stores numbers. Look at the number as the index to the value you see in Planning; the repository has the map from index to value. In an Essbase calculation, you can’t set a text account to a text value. Well, actually, you can. The syntax for this function, which assigns a text value, is [[PlanningFunctions.getIdForString("text")]]. This allows you to set the value of a text account to a string.

Assume that in Planning you have an account named “acct1 text” that is of type text. If you want to copy your values from FY16 Dec to FY17 Mar and change the text account to “Not Budgeted,” it would look like this.

FIX (Actual, Working, P_000, "210")
   DATACOPY FY16->Dec TO FY17->Mar;
   Mar(
      "acct1 text"->FY17 = [[PlanningFunctions.getIdForString("Not Budgeted")]];
   )
ENDFIX

That’s A Wrap

One last thing. If you use any of these in a member formula, for some reason you have to remove a bracket on each side. So, instead of two, you just need one. I will say I have not tested all of these, but the ones I have tested/used do follow this pattern. Hopefully Oracle keeps expanding these. Although they aren’t as helpful as they were prior to Groovy, they are simpler to use than implementing a Groovy solution for some of these needs. For you lifers, it is things like this that a newb tells you. Don’t ever think you can’t learn from somebody that “knows nothing.”
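To illustrate that note, here is the CYTD expression from the earlier example written both ways; this is just a restatement of the bracket rule above, not something I have tested beyond what is described.

/* Business rule / calc script syntax */
"5800" = [[CYTD("6100")]];

/* The same expression in a member formula – one bracket on each side */
[CYTD("6100")];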




Adventures in Groovy – Part 51: Dynamically Changing Storage Properties When Using Hybrid

With Hybrid being used more and more, there is a need to manage the storage methods of different levels of sparse dimensions.  Whether it is a staggered hierarchy or not, getting the storage method from the source can sometimes be challenging.  More often than not, you may want to own it on the Planning side so you can change it at will and not have to go through the typical IT change order process that may take weeks, or even months, to go through the full development cycle.



Managing this manually would not be fun, especially if the hierarchy is loaded more often than monthly.  Yes, you could use the Smart View admin option, but it is manual and let’s face it, you have a ton going on and you will make mistakes. 

In Comes Groovy

With Groovy, a calculation can be written to update metadata.  I have talked about this in several other posts, but I am going to walk through a couple of specific examples for specific situations.  I think this will spark some interest in taking this further for situations that are similar, or completely different.

Reusable Concepts

Before I jump into the situations and examples, there are a couple techniques that will be reused in all the examples.  Rather than repeatedly explain them, let me first introduce them.

First, this situation assumes that the storage methods are different for the plan types.  This might be a more unusual situation, but it is easy to deal with.  If this isn’t the case, the properties in the example can be changed to “Data Storage”.
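For illustration, assuming memberProps came from a member’s toMap() call as shown later in this post, the only thing that changes is the property key:

// Storage differs by plan type – the property name includes the cube
memberProps["Data Storage (${rule.cube})".toString()] = 'dynamic calc'
// Storage is the same for every plan type – use the generic property instead
memberProps["Data Storage"] = 'dynamic calc'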

It is always a good idea to start every Groovy script off with the RTPS tag.  To understand more about why this is important, read Part 49.  This will be used in each example.

/*RTPS: */

Each example uses methods that need the database passed to them.  The easiest way to get the cube the rule runs on is to use rule.cube.  There are other ways to accomplish it, but this is the shortest and most dynamic.

List<Member> products = operation.application.getDimension("Product",rule.cube).getEvaluatedMembers("Descendants(Product)", rule.cube)

Each example gets the dimension and holds it in a variable.  The method requires a pointer to a cube, or cubes.  Often it is easier to pass all the cubes in the application, rather than one cube, to make sure all artifacts are available and nothing is hard coded.  operation.application.cubes as Cube[] returns all the plan types as an array of Cube objects.

A note about the parameters that can be used: it is much faster to use the same parameters that are used in Planning, like the options in a data map.  You CAN use most of the Essbase functions, but Oracle doesn’t recommend them.  They are slower, but if you are not iterating and running the request numerous times, I haven’t noticed a difference.  In this example, the request is executed once, so the performance degradation is minimal.

Dimension objDim = operation.application.getDimension('Product',operation.application.cubes as Cube[])

To get and set the properties of a member, the toMap method is used.  This will return all the properties of the member.  I wrote a summary of the use of this method in a prior post – Part 11 – Accessing Metadata.

Map<String,Object> memberProps = it.toMap()

Lastly, if you aren’t familiar with regular expressions, they can be of great use.  I have a module dedicated to this in xxxxx.  I struggled understanding regular expressions for years.  But I promise you, if you take 4 hours and focus on learning them, it will click.  To use them in Groovy, the matches method is the way to go.  Briefly, here are some basic concepts.  A ^ means starts with.  A $ means ends with.  A dot means any character, and following it with an asterisk means repeated any number of times.  So .* means zero or more characters of any type.

.matches("^.*Region$") || it.name.matches("^District.*$")

Setting All Parents To Dynamic

If you have a smaller hierarchy, one with maybe only a few levels, it might be advantageous to just set all the parents to dynamic.  The following script iterates through all the product members and sets every parent to dynamic.

/*RTPS: */
List<Member> products = operation.application.getDimension("Product", rule.cube).getEvaluatedMembers("Descendants(Product)", rule.cube)
List<Member> lev0Products = operation.application.getDimension("Product", rule.cube).getEvaluatedMembers("ILvl0Descendants(Product)", rule.cube)
Dimension objDim = operation.application.getDimension('Product', operation.application.cubes as Cube[])
products.each {
    try {
        Map<String,Object> memberProps = it.toMap()
        if (lev0Products.contains(it)) {
            memberProps["Data Storage (${rule.cube})".toString()] = 'never share'
        }
        else {
            memberProps["Data Storage (${rule.cube})".toString()] = 'dynamic calc'
        }
        objDim.saveMember(memberProps)
    }
    catch (Exception e) {
        println("Exception: ${e}")
        println it.name
    }
}

Use Patterns To Set Parent Storage Property

In some situations, there are patterns to the levels of your hierarchy.  Maybe you have regional levels that are definable and unique that can be used to set different levels to dynamic.  Assume the following naming convention for this example

  • Total Products
    • West Region (everything ends in Region)
      • District 1 (everything starts with District)
/*RTPS: */
// Get every product in the hierarchy
List<Member> products = operation.application.getDimension("Product", rule.cube).getEvaluatedMembers("IDescendants(Product)", rule.cube)
// Assign the product dimension to a variable
Dimension objDim = operation.application.getDimension('Product', operation.application.cubes as Cube[])
// Loop through each product
products.each {
    // if the product matches these expressions, change the storage setting to dynamic
    if (it.name.matches('^.*Region$') || it.name.matches('^District.*$') || it.name == 'Total Products') {
        Map<String,Object> memberProps = it.toMap()
        memberProps["Data Storage (${rule.cube})".toString()] = 'dynamic calc'
        objDim.saveMember(memberProps)
    }
    // otherwise change it to never share
    else {
        Map<String,Object> memberProps = it.toMap()
        memberProps["Data Storage (${rule.cube})".toString()] = 'never share'
        objDim.saveMember(memberProps)
    }
}

More Complicated Possibilities

There are a bunch of other possible needs.  Let’s say you need to make everything above level 3 dynamic.  First, if the hierarchy is staggered, the same member can be a level 1 in one branch and a level 5 in another.  You would have to decide how to handle that.  I would lean toward making it dynamic if it was both a level 1 and a level 5, but keep in mind that in the deeper portion of the hierarchy, where levels run up to 9, that could mean a lot of dynamic levels stacked on top of each other.  Every situation is different, and performance would have to be evaluated, but the complexity of identifying how to set the storage in these situations is what I am trying to explain.

If you want to use patterns, you may also want to ensure that the pattern isn’t replicated at a parent and level 0, so there may be a need to check for both a pattern and the level of the member.
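Here is a minimal sketch of that combined check, reusing only the techniques already shown above; the pattern and the dimension names are just the ones from the earlier examples.

/*RTPS: */
List<Member> products = operation.application.getDimension("Product", rule.cube).getEvaluatedMembers("Descendants(Product)", rule.cube)
List<Member> lev0Products = operation.application.getDimension("Product", rule.cube).getEvaluatedMembers("ILvl0Descendants(Product)", rule.cube)
Dimension objDim = operation.application.getDimension('Product', operation.application.cubes as Cube[])
products.each {
    // dynamic only when the name matches the pattern AND the member is not level 0
    if (it.name.matches('^.*Region$') && !lev0Products.contains(it)) {
        Map<String,Object> memberProps = it.toMap()
        memberProps["Data Storage (${rule.cube})".toString()] = 'dynamic calc'
        objDim.saveMember(memberProps)
    }
}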

Obviously, there are an infinite number of possibilities, and each one could introduce complexity.  Just understand that almost anything that can be defined by patterns and levels can be accomplished, but the level of complexity of your logic or your regular expression may increase.

That’s A Wrap

The bottom line is that we now have the ability to do a lot of things we relied on the source system to do.  Or, maybe external scripts were run using Perl, or VBScript, or PowerShell.  We can use metadata properties, dynamic levels, or any other repeatable pattern that might come up.  It is fast and reliable, and completely in our control, so we don’t rely on others when it is broken or needs to be changed.




Convert a Planning Load File to an Essbase Load File

There are a ton of reasons to convert a Planning load file to an Essbase load file.  Maybe you are migrating a file from one environment to another, or simply want to load the file faster; either way, there are reasons to use the Essbase format.

Oracle is working on an enhancement that should be released in the next month or two that will use a load rule to load data to the app using the Essbase load format, which means the logging will be much improved, it won’t stop at the first failed line, and it will log all the errors, just like the Planning load format.  That is great news for those of us that use the planning format purely for the logging.

Performance

Before I get into the script, I want to touch on the speed of this method.  The file I used, based on a real situation, was over 89 million lines (yes, that is correct, million), and took over 5 hours to load as a Planning file.  It had to be split into three files to be under the 2GB limit, but it successfully loaded.  The file was received late in the morning and had to be loaded before the start of the day, so a 5 to 6-hour processing time was unacceptable.  By the way, yes, the file was sorted appropriately based on the sparse and dense settings.

I was able to build a unix/linux script using awk to convert this file to an Essbase load format and it only took about 9 minutes to convert.  The improved load time was pretty drastic.  It finished in under 15 minutes.

For testing, it was great, and it was perfect to improve the processing until the source system could rebuild the export in the Essbase format.  Just to reiterate, I added less than 10 minutes to convert the file, and reduced the load time by 4.5 hours, so it was worth the effort.

The Catch

Before I continue, if you are unfamiliar as to why the two load formats, here is the difference.  Essbase loads the data directly to Essbase.  The Planning load will bounce the file off the Planning repository to convert any smart list string account to the appropriate number, which is what is stored in Essbase.  This process creates a new file on the server, in an Essbase load format, with the numeric representation of each smart list account.  If you have no smart list conversions, this entire process is done for no reason, which was the case in this situation.  So, this isn’t the answer in every situation.

The Script

Before I get into the script, if you know me you know I love my Mac.  One of the reasons is that I have the performance of a Mac, I can obviously run Windows whenever I want, and I have the ability to run Bash scripts through the terminal.  I am not a Bash scripting expert, but it is extremely powerful for things like this, and I am learning more as I need to build out functionality.

If you are a Windows user, you can install and use Linux Bash scripting in Windows 10.  You can read about it here.

There are several languages that can be used, but I chose AWK, which is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. It is a standard feature of most Unix-like operating systems.

First the script.  Here it is.  I put the awk on multiple lines so it is a little more readable, but this is one command.

SOURCEFILE="Data.csv";
LOADFILE="DataLoad.csv";
HEADERMBR=$(head -1 $FILE | cut -d ',' -f2)

awk -v var="$HEADERMBR" 
     'BEGIN {FS=","; OFS="\t"} 
     NR>1 
     {gsub(/"/, ""); 
          print "\""$1"\"", "\""$3"\"", "\""$4"\"", 
                "\""$5"\"", "\""$6"\"", "\""$7"\"", 
                "\""$8"\"", "\""var"\"", $2}' 
     $SOURCEFILE > $LOADFILE;

There are a few things you will need to change to get this to work.  Update the source file and the load file to reflect the file to be converted and the file name of the converted file, respectively.  Inside the awk script, I have 8 fields, 1 through 8.  This represents the 8 columns in my Planning file, or the dimensions and the data.  Your file might have a different count of dimensions.  If your file has more or fewer delimited columns (ignore the quotes around the POV field and assume that each delimited field inside it is an additional field), update the script as needed.

The source is a Planning export file in which each delimited column is a field.  The print section of the awk command changes the column order to fit what the Essbase load format requires.

Breaking down AWK

This won’t teach you everything there is to know about AWK, as I am still learning it, but it will explain the pieces used in this command so you can get started.

This piece simply creates two variables, the source file and the converted file name, so there aren’t multiple places to make these changes when the script needs to be updated.

SOURCEFILE="Data.csv";
LOADFILE="DataLoad.csv";

The head command in Linux grabs specific lines, and -1 grabs the first line of the file.  I pipe that to the cut command to grab the second field of the header line, which is the dimension member I need to add to every row.  That gets stored in the HEADERMBR variable for later use.

HEADERMBR=$( head -1 $SOURCEFILE | cut -d ',' -f2)

Looking at the example file again, the second field of the header line is the member, so HEADERMBR gets set to source_SAP.

Now the AWK command.  Before I jump into it, the AWK looks like this.

awk 'script' filenames

And in the script above, the awk script has the following form.

/pattern/ { actions }

The pattern can also be one of the special patterns BEGIN and END.  Therefore, we can write an awk command with the following template.

awk '
     BEGIN { actions }
     /pattern/ { actions }
     /pattern/ { actions }
     ……….
     END { actions }
' filenames

There are also a number of parameters that can be set.

This script starts with a variable.  The -v option allows me to create a variable.  The first part of this command creates a variable named var and sets it equal to the HEADERMBR value.  I have to do this to use the variable in the script section.

-v var="$HEADERMBR"

The BEGIN block identifies the delimiter as a comma and sets the output delimiter to a tab.  FS and OFS are short for Field Separator and Output Field Separator.

'BEGIN {FS=","; OFS="\t"}

Since the file has a header row, and I don’t want that in my Essbase load file, I only include the lines greater than 1, or skip the first line.  NR>1 accomplishes that.

NR>1

gsub allows me to make substitutions.  The source file has quotes around the POV field.  awk doesn’t treat the quotes specially, so the fields at the start and end of the quoted POV end up with a quote character in them.  These need to be removed, so the gsub replaces each quote with nothing.  The first parameter is a regular expression, so the literal quote is enclosed in slashes.

gsub(/"/, "");

The next piece rearranges the columns.  I want to have the second column, or the column with the data, at the end.  I have 8 columns, so I print them in the order of 1, skip 2, 3 through 8, then the variable that was created from the dimension member in the header line, then 2 (the data field).  It looks a little clumsy because I append a quote before and after each field, which is required for the Essbase load format.  But this is just printing out the fields surrounded by quotes (except for field 2, the data field) and separated by the output delimiter.

print "\""$1"\"", "\""$3"\"", "\""$4"\"", "\""$5"\"", "\""$6"\"", "\""$7"\"", "\""$8"\"", "\""var"\"", $2

The last piece is identifying the file I want to do all this work to.

$SOURCEFILE

I want to send the results to a file, not the screen, and the > tells the command to send the results to a new file.

> $LOADFILE

The Result

The outcome is a file that is slightly larger due to the additional quotes and replicating the member from the header in every row, normalizing the file.  It is converted to a tab delimited file rather than a comma delimited file.  The header is removed.  The app name is removed.  And the columns are slightly different as the data column was moved to the end.

That’s A Wrap

I am not ashamed to say this simple, basically one line script, took me forever to build and get to work.  By forever, I don’t mean days, but definitely hours.  That is part of the learning process though, right?  It was still quicker than waiting 6 hours for the file to load!  So now you have basically a one line awk command that converts a Planning load file (or an export from Planning) to an Essbase load file and you can get home to have dinner.




EPMAutomate And Apple – Overcoming Installation Woes

I love my Mac and I am getting really close to not needing my Windows VM. I don’t think Smart View for Office 365 is going to be a replacement for the Windows version in my lifetime, but that is the only thing really making me keep my VM current.

Installation Woes

First, I am not a UNIX guy.  I love some of the functionality in terminal, especially for manipulating files.  But I have tried to get EPMAutomate installed and working for a year and had just about given up.  Every few months I try again and fail.  For all you Unix/Linux people, I am going to embarrass myself.  For everybody else having the same challenges, I think this will get you over the hump.

Choose Your Poison

You probably know you can run commands in terminal, and you probably assume Bash is the default scripting language.  As of Catalina, Apple is using Z shell as the default.  You can change this if you want.  I am not an expert, but from everything I read, people really like Z shell and prefer it over Bash.

But if you want to change it or see what your default is, open up System Preferences and go to Users & Groups.  Here is a trick: if you hold Control and click on your profile name, an option for Advanced Options appears.  Click that and you will see the login shell is probably set to /bin/zsh, which is Z shell.  You can change this to /bin/bash if you want to use Bash.

It is important to know the above because you will want to set some environment variable defaults, which I will get to shortly, and you have to know the default to update the appropriate file.

Installing EPMAutomate

First, download EPMAutomate the normal way you download the utilities.  For cloud users, click on your user name when you log in and you will see a download option.  Download the Linux/Mac version.  It doesn’t matter where you download it, but if you download it to your home directory (the parent of Downloads, Documents, etc.), it is easier to install in that location.  This is my preferred area because my path to the tool is shorter and it is separated from my other content.  The download is a zip file, so unzip it to your preferred location.  There will be an epmautomate folder.  Inside that folder will be a bin folder, and the shell command is inside that.

You are finished with the easy part.

Configuring EPMAutomate

This is where I struggled and gave up a number of times.  Thank goodness I finally figured it out, because I was really tired of using my VM when I needed to run quick commands to do basic things.  There are a few things that have to be done to get it to work effectively.  First, for me, my Java home directory was not set, and I edited epmautomate.sh to include it.  Every time I updated the version, I had to go back in and do it again.  The other frustrating thing was that I couldn’t figure out how to make my default profile include the bin folder noted above in my path.  So I always had to execute epmautomate with a full path, or drill into that path to make it work.  I also had to prefix it with a dot to get it to run.

You may have known enough to know you had to set your environment variables.  You may have known that a file existed, or should exist, named .profile in your home directory.  You may have even known how to edit that file, and that it is hidden (anything with a dot in front is a hidden file).  If you went down this path, tried to update your profile, and it didn’t work, here is why.

If your default scripting language is Z shell, or zsh, it looks for the profile in .zprofile, not .profile or .bash_profile!  There are several editors, but the easiest one for me to use is nano.  Open terminal and enter the following command to edit/create the profile.

nano .zprofile

Update .zprofile

When you open the file, it will likely be blank.  There are two things we need to add.  First create your JAVA_HOME variable.  It will look something like this.

export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'

If you don’t have a Java SDK installed, download and install the latest JDK.  Once installed, type the following into terminal, which will provide the path of the most recent version if multiple versions are installed.

/usr/libexec/java_home

If you want a specific version, add the version to the end of the command, like this.

/usr/libexec/java_home -v 1.8

Next, add the path of where you unzipped EPMAutomate to your path.  This should include the bin directory.  Your path may be different than what I chose.  The following will update the path variable to append your EPMAutomate path.

export PATH=$PATH:~/epmautomate/bin

Your .zprofile will look something like this.
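For reference, with the paths used above, the finished .zprofile contains just those two lines (your JDK version and EPMAutomate location may differ):

export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'
export PATH=$PATH:~/epmautomate/bin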

Hold the Control key down and hit X.  This will exit and ask you to save your changes.

Test Your Variables

Close your terminal window and open a new one so your profile loads.  You can check that the variables are loaded by entering the following.  Each should return the variable: JAVA_HOME should return your Java path, and PATH should include your EPMAutomate directory.

echo $JAVA_HOME
echo $PATH

At this point, if you enter epmautomate.sh, it should return the current version installed.

That’s A Wrap

There you go, use it just like on Windows!  Rather than epmautomate or epmautomate.bat, you enter epmautomate.sh.  Other than that, everything is pretty much the same.  It seems simple, but it took me forever to figure out how to add my variables.  All my script templates are updated and I am loving it!

 




Get Groovy FREE for 12 months!

Oracle is providing free access to Oracle Financial Statement Planning, including Strategic Modeling, to all existing Oracle Planning Cloud customers for the next 12 months.  This includes the use of Groovy.

You would be able to purchase Financial Statement Planning and Scenario Modeling if you choose to continue using it. Alternatively, your right to use Financial Statement Planning and Scenario Modeling will expire after April 30th, 2021.

And yes, Hackett Consulting is a preferred provider to help you take advantage of this.

The Oracle FAQ will provide answers to most of your questions.  If you would like to take advantage of this, let’s get the ball rolling!  If you are a do-it-yourselfer, the following classes will kickstart your ability to implement Groovy.

Try it out for free by getting a few samples.