Over my journey through the Hyperion toolset, I have developed a plethora of tools that make repetitive tasks easier, make some tasks unnecessary, and provide functionality that doesn't exist otherwise.  I am going to devote time to adding these to the In2Hyperion blog so that others may benefit.  You will find a new menu item labeled "Tools" that will take you to the tools section of the blog.

The first free tool added is a utility that converts Hyperion Essbase business rule exports, which are in XML format, to a more readable format for documentation.  This tool extracts the name, description, and syntax/formula for every business rule in the export.


Installation and Configuration

In installments #1 and #2 of this guide, we reviewed the architecture considerations and pre-installation requirements.  If you haven't read the two previous posts or the Hyperion "Installation Start Here" guide, you'll want to be sure to do that.

With this installment I'll review the installation and configuration activities necessary for a Hyperion 11.x environment.  The installation and configuration are separate steps.  The installation takes place first, and it only lays out the files needed to run the system.  The configuration ties everything together, creates repositories, deploys applications, and creates services.  This post will cover both, including the following items:

  • Hyperion Fusion Installer and How it Works
  • Preparing the Fusion Installer
  • Using the Fusion Installer
  • Hyperion Configuration Utility

The companion Hyperion Documentation for this post is either of the following documents found in the Oracle Documentation Library:
  • Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide Release 11.1.1.x
  • Oracle Hyperion Enterprise Performance Management System Manual Deployment Guide Release 11.1.1.x

You probably are not going to read them in their entirety, since they are rather lengthy, but they are very useful in fully understanding what is going on and priceless for complex environments or when things don't go well.

Hyperion Fusion Installer and How It Works

So let's get started on this installation already.  One of the great features of Release 11.x Fusion Edition is the Fusion Installer.  It is a nice application for guiding you through the installation.  The first thing to do is download the Fusion Installer and copy it to each server in your architecture.  The Fusion Installer is only the shell for the rest of the installation.  Under the Fusion Installer directory, create a folder called "assemblies".

Preparing the Fusion Installer

You'll next need to download the remaining Foundation Services as well as any other applications you are using.  For our example, we are going to assume the client is using Foundation, Planning, and HFM.  You are probably looking at something in the neighborhood of 4GB to download.  Each download, when unzipped, contains a group of folders looking something like this:

Each server will need the appropriate assemblies copied to its own \<FusionInstaller>\assemblies directory.  This way, when the Fusion Installer starts, it knows what is available to install.  Some of the common components are needed on every server.  If you are missing something, the Fusion Installer will let you know in the status window at the bottom of the application.  For details on which assemblies are required for each application, refer to the Installation and Configuration Guide.

Using the Fusion Installer

As you start the Fusion Installer you will see something like this:

 

I like to choose "Choose Components Individually" since it feels like I have a little more granularity.  At this point I'll select all of the components I want to install on each server.  Once again, this is run on every server in the architecture.  The Fusion Installer only lays out the application files; it doesn't need any information from other installations, so the installations can occur in any order.  It seems to work pretty well when all the components on a server are chosen together.

The last thing to do is to review all the install logs for any errors.  It is much easier to catch them now, before the configuration has started and anything specific has been written to registries and relational databases.  Once the configuration starts, you are committed.

Configuration

After the installation is complete, each server will have a configuration application.  It can be launched on a Windows server from Start > Oracle EPM Applications > Foundation Services > EPM System Configurator.  This application will guide you through the configuration, with such things as creating and deploying Java applications, creating relational repositories, and building the Windows services.  The EPM System Configurator displays the installed components and then lets you select which components to configure.  It looks something like this:

The first thing to do is configure Shared Services.  This needs to be done by itself and before any other components are configured.  As soon as this is complete, launch Shared Services and verify that it is working appropriately.  If it isn't, it will be a long day.  If you are able to log in to Shared Services, it is also probably best to go ahead and configure any external authentication provider at this time.

When Shared Services is complete and verified, you can move from server to server configuring all the components.  The documentation says that you can configure all the components at once, but this will attempt to configure all the selected products in the same relational schema/tables, and the documentation also says that some of the repositories need to be separate.  I prefer to do it one at a time to be certain I can keep all the relational repositories separate and can validate each component as it is completed.  I usually start with all the Foundation Services and then make sure Workspace functions before moving on to the EPM applications like Planning and Financial Management.  The last thing to do is to redeploy Workspace so it is configured to proxy all the remaining web applications.

You will want to be careful with each screen to make certain every component is configured as you planned.  It is easy to keep hitting ‘NEXT’ only to find out you mixed your Calculation Manager Repository in with your Shared Services repository.

As with the installation, I like to review all the configuration logs on each server very carefully.  Better to catch an error now than later.  When I'm comfortable with the configuration, I shut everything down and bring it back up.  The start order is quite finicky.  The Oracle Installation and Configuration Guide has specifics regarding the start order, but I usually do something like this (a scripted sketch of the sequence follows the list):
1.    Shared Services OpenLDAP
2.    Shared Services Application Server
3.    Hyperion Annotation Service
4.    EPM Workspace Agent (CMC Agent)
5.    EPM Workspace UI (CMC UI)
6.    EPM Workspace Web Server
7.    EPM Workspace Application Server
8.    Hyperion RMI Registry
9.    Performance Management Architect Services

Process Manager automatically starts the following services:

  • Hyperion EPM Architect – Engine Manager
  • Hyperion EPM Architect – Event Manager
  • Hyperion EPM Architect – Job Manager
  • Hyperion EPM Architect – .NET JNI Bridge

10.    Performance Management Architect Web Services
11.    Essbase Server
12.    Administration Services Application Server
13.    Smart Search Application Server
14.    Essbase Studio Server
15.    Provider Services Application Server
16.    Hyperion Financial Reporting – Java RMI Registry
17.    Hyperion Financial Reporting – Print Server
18.    Hyperion Financial Reporting – Report Server
19.    Hyperion Financial Reporting – Scheduler Server
20.    Web Analysis Application Server
21.    Performance Management Architect Application Server
22.    Performance Management Architect Data Synchronizer Application Server
23.    Financial Reporting – Web Application
24.    Calculation Manager
25.    Planning Application Server
26.    Financial Management
27.    Hyperion Financial Management DME Listener
28.    Hyperion Financial Management Web Service Manager
29.    Hyperion Financial Data Quality Management – Task Manager
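
On Windows, the sequence above can be scripted with net start commands.  This is only a sketch: the service names below are hypothetical placeholders, since actual names vary by release and deployment, so replace each one with the name shown in your Services console.

    @echo off
    rem Start-order sketch - service names are placeholders; check services.msc
    net start "Hyperion S9 OpenLDAP"
    net start "Hyperion S9 Shared Services"
    rem crude pause to let Shared Services settle before its dependents start
    ping -n 60 127.0.0.1 > nul
    net start "Hyperion S9 Annotation Service"
    net start "Hyperion S9 Workspace Agent"
    rem ...continue down the list in the order above...
    net start "Hyperion S9 Essbase Server"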

Assuming everything starts, we’ll discuss validation in the next part.


The EPM Reformation

Enterprise Performance Management (EPM) is undergoing the same transformation that Enterprise Resource Planning (ERP) systems brought about in the early 90s.  Just as complementary solutions such as Asset Management, Payroll, and General Ledger converged into one consolidated, modular system, so too are the solutions that comprise EPM (Financial Consolidation, Budgeting/Forecasting, Strategic Planning, and Reporting) converging.  Along with the obvious benefits and economies of scale that accompany this transition, we must be aware of the pitfalls associated with the design, implementation, deployment, and support of these mission-critical applications.

EPM as a Complement to ERP

Just as Enterprise Resource Planning (ERP) solutions are an essential component of the back-office operations of every Fortune 500 company, Enterprise Performance Management systems are complementary in nature and provide insight into the operational and financial effectiveness of the organization.  Metaphorically speaking, if the organization were an automobile, ERP would be the engine and EPM would be the gauges.  Carrying the analogy forward, nothing prevents us from operating a car without a speedometer, gas gauge, or heat indicator.  Furthermore, when the car is running well (at least insofar as we perceive), we have little interest in these instruments.  But what about when we hear the first ping in the engine, or the car doesn't respond when we hit the accelerator?  Worse yet, a seize-up (a.k.a. recession).  In the absence of information, conjecture prevails and we are forced to speculate as to the cause of the problem.  Without EPM, organizations are essentially operating their business in a similar fashion; reactive at best, "from the gut" at worst.

The Problem

From a business (functional) perspective, EPM solutions are characterized by the convergence of analytic applications such as Financial Consolidation, Budgeting/Forecasting, and Strategic Planning with traditional Business Intelligence solutions hallmarked by query & reporting, Key Performance Indicator (KPI) dashboards, and enterprise scorecards.  As EPM has evolved from its siloed upbringing as a departmental solution to the Enterprise-class solutions of today, the underlying technology required to support these applications has become broader and, in recent years, increasingly complex.  This evolution is both natural and expected.  Given the expansive use of EPM-based solutions, technical constructs such as multidimensional databases, data marts, enterprise data warehouses, workflow engines, web services, SOA, calculation scripts, ETL packages, and master data (hierarchies) have all become vital components of the architecture.

Insofar as organizations appreciate the criticality of EPM solutions, there is a gross underestimate of the effort associated with deploying and supporting these mission-critical applications.  The same lack of appreciation for the effort plagued the ERP implementations of the 90s.  How often did we hear about the $2 million ERP solution that came in at $20 million or more?

Gaining an Appreciation

Few would argue that the ERP solutions of today have not brought about a degree of integration and consistency throughout the business.  The ability to integrate key operational back-office systems up and down the organization, with the capacity to exchange data between functional modules without fear of inconsistency, is certainly a hallmark of the ERP promise.  But this integration did not come without a price.  The same can be said for EPM.

ERP and EPM are both harbingers of consistency, transparency, and auditability.  As such, they force the institution of standards and controls where they have not historically existed.  Furthermore, there is an illusion that these disciplines run contradictory to loosely coupled legacy processes that are thought to be more flexible, nimble, and sufficient for supporting the business.  This may appear to be true when viewing each process as a stand-alone, siloed operation (forecasting separate from budgeting, separate from financial consolidation, separate from operational reporting), but it is important to have the right perspective here.  As with traditional ERP solutions, to gain an understanding of the EPM value proposition, you must first rise above the individual business solutions that encompass performance management (i.e., Financial Consolidation, Budgeting/Forecasting, Reporting, etc.).  Only by viewing these applications from a holistic, integrated business perspective can you appreciate the business and technology economies of scale that accompany Enterprise-class EPM solutions.

The Point

EPM solutions, if approached correctly, must be seen as the acronym implies: "Enterprise" in scope.  Similar to their ERP counterparts, EPM solutions can, and in many cases should, be implemented modularly, but under the auspices of an overall solution deployment strategy.  Notice the term "Solution", not "Application".  Applications are but one component of the EPM strategy.  Others include: technical infrastructure, data management/governance, process integration, communication/change management, and administration & support.  When you view EPM solutions from this perspective, it is hard not to appreciate the level of involvement required from executive leadership, the business, and information technology.  Organizations must think of their EPM solutions as "ERP projects"; enterprise-enabling solutions that require a well-documented and endorsed strategy that aligns with the corporate directive.  In this vein, EPM requires a realistic investment of resources, time, and capital to be successful.  Then again, you could pull away from the car lot with a 1971 Pinto and hope no one hits you from behind…


For those who attended the Ohio Valley Oracle Application User Group in Louisville, I had the honor of presenting in the Hyperion track at the September 2009 meeting.  The presentation focused on MaxL best practices and how to integrate the results of MaxL into other technologies.  The presentation was driven by a project completed late last year, for a client who spent a tremendous amount of time verifying the results of daily and monthly processes.

By adding some consistency to the MaxL scripting, I was able to integrate the results of the scripts, including the error and process logs, with .NET to produce a website that summarized the state of nearly 50 processes.  Administrators were able to view a web page that showed the real-time status of all their applications, including links to error logs.  The increased productivity of the administrative staff created a positive ROI in the first month of use.
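
The heart of that consistency was a common MaxL template.  The sketch below is illustrative rather than the client's actual script: credentials arrive as positional parameters from the calling batch file, and every step spools output and traps errors so an external process (the .NET site, in this case) can parse a predictable log.  The Plan.Plan1 database and calc name are hypothetical.

    /* nightly.mxl - run as: essmsh nightly.mxl <user> <password> <server> */
    spool on to 'E:\logs\Plan_nightly.log';
    login $1 $2 on $3;
    iferror 'failed';

    /* the work itself; one iferror after every step */
    execute calculation 'Plan'.'Plan1'.'NightlyAgg';
    iferror 'failed';

    spool off;
    logout;
    exit;

    /* jump target: close the log so the website can flag the failure */
    define label 'failed';
    spool off;
    exit;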


Many people use Custom Lists in Excel – sometimes without even knowing it.  If you have ever typed January into a cell and used autofill (click the fill handle, the dark plus sign, and drag across other cells) to create February through December, you have used Custom Lists.

Excel has a few Custom Lists set up for users when it is installed.  Select the Tools / Options menu and display the Custom Lists tab to view them.  Users can create their own Custom Lists in this dialog box by entering a list separated by commas, or by importing a range of cells that already includes a list.

For Essbase users who use the Hyperion Spreadsheet Add-In or SmartView, this can become a valuable tool.  Many times Essbase users will want to display a specific list of accounts, measures, products, etc.  Rather than selecting these from the member selection, or typing them, Custom Lists can be created and used to reduce the effort.

Let's assume a user is responsible for a subset of the existing products, and those products are only sold in a few of the markets.  The user may spend a lot of time creating the market list every time they create a new retrieve.  If the user creates a Custom List, they can automate this selection process.  A Custom List might include the following members:

Columbus,Cincinnati,Los Angeles,Tempe,Dallas,Austin,Seattle,Denver,Nashville

All the user has to do now is type Columbus in the first cell and use the autofill to list the rest of the markets.  This function can save those who frequently create ad hoc reports a lot of time.

Custom Lists can be created for just about anything, are easy and quick to create, and are useful in a variety of situations.  www.In2Hyperion.com is not just for those in a technical capacity.  User related ideas, such as using Custom Lists, will become more prevalent on this site.  Sign up for our newsletter and receive notifications when more Excel tips for Essbase users become available.


There are a host of new features in version 11.  As with most product releases, there are the typical improvements related to memory, scripting, and stability.  But there are some other, very notable, functional additions that might pique your interest.

Lifecycle Management

Shared Services now provides a consistent way to manage environments.  This console gives administrators the ability to compare applications, search for artifacts, and perform artifact migrations. It comes with a command line tool to automate tasks, as well as a full API for those who want to customize the process even further.

Typed Measures

Essbase now stores text!  Well, somewhat.  Text measures give administrators a way of storing a value other than a number at a data intersection.  Technically, it still stores numbers, but they represent strings.  A member in the measures dimension can have a text attribute.  This member is associated with an enumerated list.  Each member in that list has an index number, which is what is stored in the database.  When reporting is done, that number is converted to the associated text value in the enumerated list.  Members can also be tagged as Date, which changes the formatting to, you guessed it, a date.

Varying Attributes

Attributes have been around for a while now in Essbase.  Some people hate them and some love them.  They definitely have their place in the design of a database.  One limitation has been the inability to walk attributes forward over time.  For example, assume we have an attribute that groups our customers into tiers based on their credit score.  If a customer's score changes such that they move to a higher or lower tier, the history is lost, because their attribute is the same for all time periods.  Not anymore.  Varying attributes add the capability for Essbase to store, and calculate measures for, attributes that vary over one or more other dimensions, such as time.

Backup and Recovery

I have seen many methods of making sure Essbase applications are backed up.  In version 11, there are some new options for BSO databases.  First, an option exists in EAS to back up the entire database, including its data and all of its objects, to one file.  When changing things rapidly through the day, this is a nice feature to ensure you don't lose valuable work.  The entire database can easily be restored.  This is much quicker than manually archiving all the objects (calc scripts, load rules, outlines, and reports) and keeping data exports.
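
The same archive can be scripted.  Here is a MaxL sketch against the Sample.Basic demo database; the file path is illustrative, and the exact grammar should be confirmed in the MaxL reference for your release:

    alter database Sample.Basic force archive to file 'E:\backups\sample_basic.arc';

    /* and later, to bring it back */
    alter database Sample.Basic force restore from file 'E:\backups\sample_basic.arc';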

Second, Essbase now includes the option to log transactions and replay them.  With this option turned on, an Essbase application can be restored and all the transactions that occurred after the backup was taken can be replayed.  A database can now be recovered to a specific point in time.
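
As a sketch of how the pieces fit together (the location, database, and timestamp are illustrative), logging is switched on per database in essbase.cfg:

    TRANSACTIONLOGLOCATION Plan Plan1 E:\hyperion\translog NATIVE ENABLE

and the replay is issued from MaxL after the restore:

    alter database Plan.Plan1 replay transactions after '11_20_2009:02:00:00';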

ASO Data Management

ASO databases now include MaxL scripting that enables administrators to clear data from a region of the database in two ways.  The first, and most obvious, is a physical clear that removes the values from the database.  The second is a logical clear, which copies the inverse of the data into another slice, resulting in a total of zero for the region.
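
A sketch of both forms against the ASOsamp.Sample demo database; the MDX set that defines the region is illustrative:

    alter database ASOsamp.Sample clear data in region '{[Jan]}' physical;
    alter database ASOsamp.Sample clear data in region '{[Jan]}' logical;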

The Use of Environment Variables

If your process management uses variables to reduce maintenance tasks, this might be something that will intrigue you.  Version 11 has access not only to Essbase substitution variables, but to operating system environment variables as well.
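
For example, the MaxL shell substitutes environment variables prefixed with $.  The variable names below are illustrative and would be set by the batch script that calls essmsh:

    login $ESSUSER $ESSPWD on $ESSHOST;
    alter system load application 'Plan';
    logout;
    exit;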

Monitoring Environment Responses

Many environments take advantage of partitioning.  Now, there is a way to evaluate the cost of using partitions.  Using the ENABLE_DIAG_TRANSPARENT_PARTITION configuration setting in the essbase.cfg file, administrators can log transaction response times.
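
The setting goes in essbase.cfg.  As a sketch only (check the configuration reference for the exact argument list in your release):

    ENABLE_DIAG_TRANSPARENT_PARTITION TRUE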

Common Log Locations

Version 11 organizes all log files in one location.  This is a very nice improvement.  Rather than searching through each product's directory tree for where its logs are stored, they are now located in one common folder, with a subfolder for each of the Hyperion products.

Override Implied Shares

Essbase now includes an option in the outline management section to ignore the default setting for implied shares.  This can be very helpful when using partitions, as well as a host of other situations.

Notable Calculation Additions

Now that members can carry a text or date value, there are a host of functions that open up a whole new realm of possibilities.  @DATEROLL will move a date forward by a specified number of intervals.  @DATEDIFF will take the difference between two dates at the designated interval.  @DATEPART will pull the time period (week, month, day, etc.) from any date.  These operations were difficult at best in previous releases of Essbase.
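
A calc script sketch of the three functions; the date-typed measures ("Open Date", "Close Date") and the other member names are hypothetical:

    FIX ("Actual")
        /* days between two date-typed measures */
        "Days To Close" = @DATEDIFF("Open Date", "Close Date", DP_DAY);
        /* the month number of the close date */
        "Close Month" = @DATEPART("Close Date", DP_MONTH);
        /* a follow-up date two weeks after close */
        "Follow Up" = @DATEROLL("Close Date", DP_WEEK, 2);
    ENDFIX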

 

Users of Essbase have some control over the performance of a database and how responsive it is when retrieving data.  With a basic understanding of how Essbase stores data, users can optimize performance by changing the order of the dimensions and members in a report.

It might be helpful to read our article on sparse and dense dimensions.  Here is a brief overview:

An Essbase database is comprised of thousands, if not millions or billions, of data blocks.  Each block of data, and its size, is defined by the dense dimensions in the Essbase outline.  The volume of blocks is dictated by the unique combinations of sparse dimension members.  If Time and Accounts are dense, each block created would hold all the months for every account.  If Organization and Product are sparse dimensions, there would be a block for each unique combination of Organization and Product.  A block would exist for Center 10 / Product A, as well as Total Organization / Total Product.  If the outline has 20 members in Organization and 15 members in Products, the database could have up to 300 independent blocks.

If a report is written to show an entire income statement for all 12 months for Total Product and Total Organization, how many blocks would have to be queried?  Remember, there is a block for each unique member combination of Organization and Product.  The answer is one, because there is a block for Total Organization/Total Product that includes every account and every member in the time dimension.

How many blocks would be accessed if a report pulled Total Sales (a member in the Accounts dimension) in January for every product?  Since the Product dimension is sparse and there are 15 products, 15 blocks would have to be opened to return the results.

Here is where your understanding of what sparse and dense represent will help you improve your reports.  Opening a data block, reading the contents, and closing it is similar to opening, reading, and closing a spreadsheet.  It is much faster to open one spreadsheet, or block, than 15 spreadsheets.  So, if the database retrieves are written in such a way as to minimize the number of blocks that need to be accessed, or improve the order in which they are accessed, performance can improve.

I will agree that if data for all 15 products is needed for the report, all 15 blocks have to be opened.  There is no way around that.  That said, oftentimes users will build one worksheet for the income statement and one worksheet for the balance sheet.  This means that the report is making two passes on the same blocks.  In theory, it takes twice as long to open/read/close a data block two times as it does once.  It is faster to have the income statement and balance sheet accounts in one worksheet, which only makes one pass on the required blocks.  If two separate reports are required, a worksheet for the income statement and one for the balance sheet can be created with cell references to the worksheet that holds the retrieved data.

I frequently see another example of a report requiring multiple passes to the same data block.  Using our example dimensions above, assume product information is required in a report for multiple accounts.

    Jan Feb Mar
Income Product A      
Income Product B      
Income Product C      
Income Product D      
Expense Product A      
Expense Product B      
Expense Product C      
Expense Product D      

The Essbase retrieve above would start from the top of the spreadsheet and move down the rows to retrieve the data from Essbase.  This cycle would open the Product A block, then B, C, and D, and retrieve the associated income for each.  It would then have to reopen the same 4 blocks to access expenses.

The following example, again going from top to bottom, would access both income and expense while each block is open.  The way this retrieve is set up, it eliminates the need to access the same block multiple times, yet still pulls the required information.

    Jan Feb Mar
Income Product A      
Expense Product A      
Income Product B      
Expense Product B      
Income Product C      
Expense Product C      
Income Product D      
Expense Product D      

These examples are very small.  In a real-world example, a report of this size would not produce a significant variance in the time it takes to retrieve.  Users often have spreadsheets that are hundreds of rows long and take minutes to retrieve.  In these situations, eliminating the need to access the same block multiple times can produce notable improvements in the time it takes to retrieve data from Essbase.

With a basic understanding of how your database is set up, users of Essbase can help themselves with some simple changes to the format of the retrieve worksheet.  If access to the dimension properties of your database is unavailable, ask your system administrator to supply them for you.


When I am introduced to business segments that use Hyperion Essbase, I always get asked the same question: “Can you explain what sparse and dense mean?”  Although I agree that users don’t HAVE to understand the concept, I contend that it is extremely valuable if they do.  It will not only help them become more efficient users, it goes a long way in helping them understand why something simple in Excel isn’t always simple in Essbase.  If users understand what a block is, and what it represents, they have a much better experience with Essbase.

If you are a relational database developer or a spreadsheet user, you tend to view data in 2 dimensions.  An X and Y axis is equivalent to the rows and columns in your spreadsheet or database table.  Essbase is a little different in that it stores data in 3 dimensions, like a Rubik’s Cube, so it has a Z axis.  Essbase databases refer to these “Rubik’s Cubes” as blocks.  An Essbase database isn’t one giant Rubik’s Cube; it could be millions of them.  The size and number of possible blocks a database has is determined by the sparse/dense configuration of the database.

An Essbase outline has a number of dimensions.  The number of dimensions can range in quantity and size, but each dimension is identified as a dense or sparse dimension.  The dense dimensions define how large each block will be in size (the number of rows, columns and the depth of the Z axis).  The sparse dimensions define the number of possible blocks the database may hold.  Assume the following scenario:  a database exists with 3 dense dimensions and 2 sparse dimensions.  The dense dimensions are as follows:

Net Income
Income
Expenses

Qtr 1
Jan
Feb
Mar

Version
~ Actual
~ Budget
~ Forecast

Remember, the dense dimensions define the size of blocks.  These dimensions would produce a block that looks like the image below.  Every block in the database would be the same.

For those more knowledgeable with Essbase design, this example assumes that no member is dynamically calculated or tagged as label only, to reduce complexity.

 

The sparse dimensions are below.

Total Product
Shirts
Pants

Total Region
North
South
East
West

Each unique combination of sparse dimension members has its own block.  There will be a block for Pants – North, one for Shirts – North, and so on.  Since there are 3 members in the Total Product dimension and 5 members in the Total Region dimension, there will be a total of 15 (3 x 5) possible blocks.  If a database has 5 sparse dimensions, all with 10 members, it would have a total possible number of blocks equal to 100,000 (10 x 10 x 10 x 10 x 10).  Below is a representation of the possible blocks for Shirts.

 


I started my career as an accountant and never had any aspirations of doing the same thing all day, every day.  While I struggled through what I considered monotonous job functions, I developed a knack for finding ways to automate my job.  As a result, I didn't have to do repetitive tasks and I had more time to learn the business.  Don't get me wrong, accountants possess a unique set of skills and talent that I respect tremendously, and theirs is a critical function of any business.  So, kudos to you accountants!

When I get involved with building new applications with Hyperion, or updating existing models, it pains me to see accounting, finance, and the staff who support Hyperion continue to perform repetitive tasks that dominate their time.  It can drive talented people to look for employment elsewhere.  It inflates salaries and jeopardizes credibility with an increase in human error. It also deteriorates the quality of business analysis, introducing a greater risk of poor decisions.  Inflated expenses and poor management decisions can be catastrophic to any business.

Automation in accounting and finance areas is critical to productivity.  Being able to support the constant push from management to become better and faster with fewer resources is always challenging.  If your Hyperion environment is supported outside of finance, the IT areas are under just as much scrutiny.  How much of your time, or your staff's, is spent generating reports?  How much more time could be spent helping analyze the business and adding value to management decisions?  From an IT perspective, how much of your time is spent supporting the environment and responding to requests where answers could be automatically generated?  If 20% of your repetitive tasks were eliminated, how much more effective would you be?  How much more experience would you gain?  How much more marketable would you be, both internally and externally?

Many of the possibilities for automation are never discussed.  Most people don't even realize how much time they spend performing repetitive tasks that could be automated.  Some think it would be impossible to automate and others think it would be too expensive.  The examples below were both accomplished in a matter of weeks.  The investment had a positive return within months.  The non-monetary gain was felt immediately.

Don’t think of why it can’t be done.  Think of a solution without constraints and ask, “How can we get there?”  With the proper guidance and background, massive improvements can be accomplished with minimal effort.

To spark some thought, think about these situations.

Monitoring Essbase jobs and keeping users informed of system status

Are you responsible for managing all the jobs that run on your Essbase server(s), and are you constantly asked by users whether something has completed, or when it will?  Some organizations have a person dedicated to managing this information flow.

I implemented a solution at a large financial institution to conquer this problem.  The result was a solution that required zero effort to maintain and provided a summary of over 50 processes in one web page.  It gave the status of the process, when it last executed, if there were any errors, and a link to the log and error files if they were required.  Access was granted to all the Essbase administrators.  Another page was available for all users that displayed the status of the application, when it was last loaded, when it was last calculated, and several other useful sources of information.

The days of searching through folders on multiple servers are now long gone for system administrators.  Users are more informed and support tickets diminished substantially.  The estimated time savings was 4-6 hours per day.

This solution was built using existing technologies, limited to MaxL, Windows scripting, ASP.NET, and access to an IIS server to host the website.  It was 100% maintenance free and built dynamically enough that new applications could be added, and existing applications renamed or deleted, without changing any code or processes.

Distribution of reports

A large international organization distributed over 150 reporting templates to an equal number of people in the US and abroad.  These templates were distributed daily throughout the monthly close of business.  The daily adjustment cycle finished updating the reporting Essbase application around 2 AM.  When a finance staff member arrived around 8 AM, the work began.  The template was refreshed and saved for each of the 150 business entities.  Emails were then sent to each of the 150 people with their respective reports.  This process took about 6 hours every day it was performed.

Using existing technology, a process was created to traverse a spreadsheet, maintained by finance, that had 2 columns: the first was the business unit, and the second was the email address the report was to be sent to.  Using the Essbase toolkit and Excel, a process was initiated as soon as the database was updated that opened a spreadsheet containing the template, changed the business unit, refreshed the template, saved it, and emailed it to the intended recipient.  This process took less than 1 hour, and all the reports were distributed before 4 AM.  Customers received their reports earlier (those in Asia a day early), no human errors were made, and the finance staff gained an additional 6 hours to add value.