When deploying an on-prem Hyperion Planning cube or a native Essbase cube, or when refreshing a Planning Cloud database, you may get the following Essbase error.
“Cannot restructure cube. Essbase Error(1010004): Total possible blocks of database [xxx] exceeds the limit, which is [2^52], or [4.5036e+15], for each of two dimension groups.”
If you check Oracle Support tickets, the solutions typically tell you that you have a very large outline with many sparse dimensions/members, and that you should try reducing the number of members or changing the sparse/dense configuration. If that does not solve the problem, you may want to consider using transparent partitioning to split the members into smaller outlines.
Well, this solution is not wrong, but it doesn't explain what the error really means. We know the number of potential blocks exceeds a maximum limit hard-coded somewhere. But what are the "two dimension groups," and is there an easier fix? Let's dig a bit deeper this time.
What is the real issue?
For block storage databases (BSO cubes), there is a limit on the number of possible sparse blocks. Based on the Oracle documentation, the limit is 2^104.
https://docs.oracle.com/cd/E57185_01/EDBAG/limits.html
However, there is a nuance with this 2^104 overall limit. It is actually two groups of 2^52 each, which can come to less than 2^104 in practice: Essbase rounds off the first group at the nearest whole dimension, then starts again with the remaining dimensions for another set of 2^52. In the ideal case, where the first group lands exactly on 2^52, you get the full combined 2^104. Otherwise, you lose some combinations in the first 2^52 group, which explains why you can hit the error without seemingly exceeding 2^104.
What is the potential number of data blocks?
The potential number of data blocks is the maximum number of data blocks possible in the database. To get it, multiply the number of stored members of each sparse dimension together: first sparse dimension × second sparse dimension × third sparse dimension, and so on.
For example, a cube with three sparse dimensions of 1,000, 500, and 2,000 stored members has 1,000 × 500 × 2,000 = 1,000,000,000 potential blocks.
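As a quick sketch in Python (the dimension names and stored-member counts below are invented for illustration, not taken from any real application):

```python
import math

# Hypothetical stored-member counts per sparse dimension (illustrative only)
sparse_stored_members = {"Entity": 1000, "Product": 500, "Customer": 2000}

# Potential blocks = product of stored members across all sparse dimensions
potential_blocks = math.prod(sparse_stored_members.values())
print(f"{potential_blocks:,}")  # 1,000,000,000
```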
What are the two dimension groups?
Now comes the interesting part: what are the two dimension groups?
1st dimension group
Following the outline's sparse dimension order (the performance settings order), multiply the number of stored sparse members for each dimension until the running product reaches (but does not exceed) 2^52. The dimensions included up to that point are the group 1 dimensions.
2nd dimension group
The rest of the dimensions are group 2 dims.
If the product of the group 2 dimension combinations is greater than 2^52, you get the error message.
For example (note that 2^52 ≈ 4.5E+15): suppose there are 9 sparse dimensions. The first 5 sparse dimensions form group 1; including a sixth would push the product past the maximum potential block limit. The remaining 4 sparse dimensions form group 2, and the error is raised because their combined product exceeds the limit.
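The greedy grouping described above can be sketched in Python. The dimension sizes here are hypothetical, chosen only to reproduce the failure pattern: group 1 rounds off early at a whole dimension, leaving group 2 with a product over 2^52.

```python
import math

LIMIT = 2 ** 52  # 4,503,599,627,370,496

def split_groups(dim_sizes):
    """Greedy split in outline order: multiply stored sparse member counts
    into group 1 while the running product stays within 2^52; every
    dimension from the first one that would overflow goes to group 2."""
    product, cut = 1, len(dim_sizes)
    for i, size in enumerate(dim_sizes):
        if product * size > LIMIT:
            cut = i
            break
        product *= size
    return dim_sizes[:cut], dim_sizes[cut:]

# Hypothetical stored-member counts for 8 sparse dimensions, in outline order
dims = [500, 50000, 50000, 100000, 1000, 1000, 500, 100]
group1, group2 = split_groups(dims)
print(math.prod(group1))          # 1250000000000 -- well under 2^52
print(math.prod(group2))          # 5000000000000000 -- over 2^52
print(math.prod(group2) > LIMIT)  # True -> Essbase raises error 1010004
```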
What is the fix?
The advice here is to reorder the sparse dimensions so that more of them squeeze into the first 2^52 block group. The best practice is to follow a small-to-big sparse dimension setup. Using the same application as an example, the two dimension groups no longer exceed the limit after reordering.
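To see the effect, here is a small self-contained sketch with made-up dimension sizes: in its original outline order the set below overflows group 2, but sorted small-to-big, group 1 absorbs more of the product and both groups stay within 2^52.

```python
import math

LIMIT = 2 ** 52

def split_groups(dim_sizes):
    """Greedy split in outline order, as described above: group 1 grows
    while its product stays within 2^52; the rest falls into group 2."""
    product, cut = 1, len(dim_sizes)
    for i, size in enumerate(dim_sizes):
        if product * size > LIMIT:
            cut = i
            break
        product *= size
    return dim_sizes[:cut], dim_sizes[cut:]

# Hypothetical outline order that triggers error 1010004
dims = [500, 50000, 50000, 100000, 1000, 1000, 500, 100]
_, bad_group2 = split_groups(dims)
print(math.prod(bad_group2) > LIMIT)  # True -> error

# Reorder small-to-big and split again
group1, group2 = split_groups(sorted(dims))
print(math.prod(group1) <= LIMIT)     # True
print(math.prod(group2) <= LIMIT)     # True -> no error after reordering
```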
Hopefully this gives you an easier fix for the exceeding-maximum-potential-blocks issue. See you next time.
One caveat to remember: it is not just stored members that count toward the potential block limit.