Applied Dimensionality

Notes on security setup in Planning Analytics

Posted at — Feb 27, 2025

I’ve been tweaking a few security models in PA recently, so it’s a good opportunity to jot down some thoughts. Here’s a list of ideas in no particular order.

Testing security

First off, security is very boring and quite hard to test and verify, so it often gets overlooked. You need the ability to ‘see’ things as a user, and PAW has no built-in impersonation feature (although there’s a REST API call for it, so it’s possible), so having a few dummy accounts you can log in to is a must. A simple TI process that copies groups from a target user to a dummy user is very helpful.
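
A minimal sketch of such a process (the user names and process are illustrative, not from the original post): loop over the }Groups dimension and copy every assignment found in the }ClientGroups control cube.

```
# Copy group assignments from a target user to a dummy test user.
# sSourceUser / sTargetUser would normally come in as process parameters.
sSourceUser = 'Jane.Doe';
sTargetUser = 'dummy_tester_01';

iGroup = 1;
WHILE(iGroup <= DIMSIZ('}Groups'));
  sGroup = DIMNM('}Groups', iGroup);
  # }ClientGroups holds the group name in the cell when the client is a member
  IF(CellGetS('}ClientGroups', sSourceUser, sGroup) @<> '');
    AssignClientToGroup(sTargetUser, sGroup);
  ENDIF;
  iGroup = iGroup + 1;
END;
```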

TM1 security groups

Data vs object groups

I usually separate security groups in TM1 into two categories:

  1. groups that define access to TM1 objects (cube, dimension, process security) – I used to call them “application” groups, sometimes “functional roles”, sometimes “object” groups.
  2. groups that define access to TM1 data (element security) – something along the lines of “data” groups.

This split keeps security definitions contextually separated: you can sync the type 1 groups between different environments, whereas type 2 groups are environment-specific.

PAW groups usually map quite nicely to ‘application’ or ‘functional’ ones, as they define which books people have access to. So you’d have a ‘Revenue forecasting’ PAW group that translates into a ‘what objects should they see in TM1’ group. Syncing PAW groups to TM1 streamlines this step (still waiting for a PAW security API, sigh).

Groups by user or groups by object?

I try to keep the number of groups in TM1 to a reasonable amount, so if I’m looking at a design that entails thousands of groups, I start to question whether it’s easier to pivot it around, have a group per user and assign security that way. A large number of groups makes everything security-related slow(er), so it’s better to keep the count low. If you end up with the group-per-user approach, include a process to delete old groups, as they will accumulate (see the sketch below).
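
As a sketch, such a cleanup could look like the following, assuming the per-user groups follow a naming convention like ‘USR_<client name>’ (the prefix and convention are my assumptions for illustration):

```
# Remove per-user groups whose matching client no longer exists.
# Assumes per-user groups are named 'USR_' + client name.
sPrefix = 'USR_';
iGroup = DIMSIZ('}Groups');
WHILE(iGroup >= 1);
  sGroup = DIMNM('}Groups', iGroup);
  IF(SCAN(sPrefix, sGroup) = 1);
    sClient = SUBST(sGroup, LONG(sPrefix) + 1, LONG(sGroup) - LONG(sPrefix));
    # If the client behind this group is gone, drop the group
    IF(DIMIX('}Clients', sClient) = 0);
      DeleteGroup(sGroup);
    ENDIF;
  ENDIF;
  iGroup = iGroup - 1;
END;
```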

You should try to have a group per entity (cost centre, project) and avoid creating combination groups like CC_X_Account_Y – those are usually a sign of some odd assumptions.

Cell security

I still run into the ‘cell security rules make things slow’ notion every now and then. Overall, CellSecurity makes things slow in two cases:

  1. when you’re using input data to define cell security – this drives extra calculations in PAW as I wrote about here
  2. when you have users with a lot of groups, as the PA server needs to evaluate cell security for every group before calculating the final level of access that user will have. This is where those ‘object/functional/application’ security groups help again – writing rules against only that group (['Revenue forecasting']=S:...) keeps the rules specific and fast, as sketched below.
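
As an illustration (cube, dimension and element names are mine, not from the post), a }CellSecurity_ rule written only against the functional group might look like:

```
# }CellSecurity_Revenue rule sketch: only the 'Revenue forecasting' group is
# rule-derived; every other group stays blank and falls through to other security.
['Revenue forecasting'] = S:
    IF(ELISANC('Version', 'Working Forecast', !Version) = 1, 'WRITE', 'READ');
```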

Avoiding SecurityRefresh

I try to avoid SecurityRefresh as much as I can, especially if there’s a chance it’ll be run during normal hours (e.g. users assigning security). SecurityRefresh is a global locking operation (it’s quite bad when the system takes an unexpected breather for all users without any clear reason – it erodes trust) and it will get slower over time, so having a system without it removes the need to think about it at all.

The common reasons for needing a SecurityRefresh:

  1. Using rules for Element Security – you can and should use a TI with ElementSecurityPut instead (see the sketch after this list); there’s rarely anything in security so dynamic that it requires rules, and moreover they won’t update until someone presses a button that triggers a SecurityRefresh. That button might as well do ElementSecurityPut instead :)
  2. You’re populating element security or client groups via a TI and want to handle ‘removal’ of security. E.g. user A is in group G now, but in your security source that intersection is now blank. This is commonly handled by doing a ViewZeroOut of a security cube, writing data to it (or doing AssignClientToGroup’s / ElementSecurityPut’s) and then running a SecurityRefresh. SecurityRefresh is required because otherwise the PA server wouldn’t ‘see’ that security was removed during the ZeroOut.
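
For the first point, a minimal sketch of what such a TI could do instead of rules (dimension, element and group names are illustrative):

```
# Set element security directly from a TI - no rules, no SecurityRefresh needed.
sDim   = 'Cost Centre';
sGroup = 'Data_CC_1000';
ElementSecurityPut('WRITE', sDim, 'CC_1000', sGroup);
ElementSecurityPut('READ',  sDim, 'Total Cost Centres', sGroup);
```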

An alternative approach (only if you really need security updates during the day) can be to create a cube that calculates the required changes (including removals) and use it as a source for applying security. Using }ClientGroups as an example, you’d create a staging cube dimensioned by }Clients, }Groups and a small measure dimension (‘Current’, ‘Target’ and a rule-calculated ‘Apply changes’ that compares the two).

And then a TI uses a view on the ‘Apply changes’ measure as its source to call AssignClientToGroup or RemoveClientFromGroup.
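
A sketch of the data tab for that TI, assuming the staging cube above (I’m calling it ‘Security Client Groups Staging’ for illustration) with the source view filtered to the ‘Apply changes’ measure:

```
# Data tab sketch. Source: view on 'Security Client Groups Staging' filtered
# to the 'Apply changes' measure (+1 = assign, -1 = remove).
# vClient, vGroup and nValue are the variables declared on the Variables tab.
IF(nValue = 1);
  AssignClientToGroup(vClient, vGroup);
ELSEIF(nValue = -1);
  RemoveClientFromGroup(vClient, vGroup);
ENDIF;
```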

Other

SecurityOverlays

I hadn’t used SecurityOverlays until a recent project, where they were used to improve CellSecurity performance: overlays defined security on some dimensions and therefore allowed a reduced number of dimensions in the CellSecurity cube. Rewriting those CellSecurity rules made the overlays redundant, so we removed them :)

I do see potential use cases for overlays, but they add another thing to think about with regard to security, and it’s hard enough as it is in most cases.

DataReservation

I had a lot of fun with DataReservation functions on every model that was using TM1 Applications / Contributor, and I’ve never had a use for them outside of TM1 Applications. Despite most people thinking that they need ’exclusive’ access to data, they almost never do, and having it creates an extra layer of complexity when transferring data ownership (e.g. handling people being on leave, last-minute changes, etc.). Having the transaction log is enough to trace ‘who changed my data’ in most cases. This opens up a whole ‘how complicated does your workflow really need to be’ topic, but this post is way too long already :)
