<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts on Applied Dimensionality</title>
    <link>https://ykud.com/posts/</link>
    <description>Recent content in Posts on Applied Dimensionality</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 24 Nov 2025 01:59:54 +0000</lastBuildDate>
    
	<atom:link href="https://ykud.com/posts/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>Planning analytics modeling - Allocations</title>
      <link>https://ykud.com/blog/pa-modeling-allocations/</link>
      <pubDate>Mon, 24 Nov 2025 01:59:54 +0000</pubDate>
      
      <guid>https://ykud.com/blog/pa-modeling-allocations/</guid>
      <description>&lt;p&gt;Another one of the &amp;lsquo;what do a I think about when doing X in PA&amp;rsquo; posts (see the previous one on &lt;a href=&#34;../../blog/pa-modeling-gl/&#34;&gt;General Ledger&lt;/a&gt;), this time covering everyone&amp;rsquo;s favourite topic: Allocations!&lt;/p&gt;
&lt;p&gt;I rarely encounter a planning system without an allocation component: it&amp;rsquo;s such a fundamental step in understanding profitability or &amp;lsquo;true costs including overheads&amp;rsquo; for a product / cost centre / project / process or any other object.&lt;/p&gt;
&lt;p&gt;A few design considerations I usually think of when discussing allocations:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;What is the &amp;lsquo;grain&amp;rsquo; / &amp;lsquo;dimensionality&amp;rsquo; of the allocation? Are we allocating cost centre expenses to a product or, say, freight costs across different products? What are the source / target dimensions?&lt;/li&gt;
&lt;li&gt;Are we talking about multi-step allocations (i.e. Cost Centre A allocates to B and then B transfers the full cost to C)? Ideally you don&amp;rsquo;t want multiple steps, but they can be required, especially anywhere around manufacturing.&lt;/li&gt;
&lt;li&gt;Who is initiating the allocation process (sender, receiver, schedule?) and is a workflow required to notify receivers or have them approve? The more complicated the approval process is, the harder you need to think about the overrides you need to build in to circumvent it (think people being on holiday / in a meeting when a change needs to be made, last minute board meeting adjustments, etc). It&amp;rsquo;s worth having a separate &amp;lsquo;sandpit&amp;rsquo; scenario if the approval process is fairly stringent, to allow for quicker modeling. Notification + tracking of allocations is my default suggestion, and I push back on building an approval process, as it almost always gets removed or disabled later.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here&amp;rsquo;s how I see an allocation model blueprint:
&lt;img src=&#34;https://ykud.com/images/2025/allocation_diagram.png&#34; alt=&#34;allocation diagram&#34;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No rule-based allocations :) Everyone starts with allocating via rules (rules are powerful and traceable, yay; processes are complex, boo), but it&amp;rsquo;s never going to be fast&lt;/li&gt;
&lt;li&gt;Allocation is based on input of from / to percentages &amp;ndash; even if it&amp;rsquo;s driver based (for example, rent is allocated proportionally to area occupied, or based on, say, revenue). I always start by building the % input and upload, and any driver-based allocation is added on top by prepopulating the %s&lt;/li&gt;
&lt;li&gt;Aim to create a full record of to and from allocation details. It&amp;rsquo;s a lot of data, but totally invaluable for reconciling the results and building trust in the system. So, for example, if we&amp;rsquo;re doing a cost centre to cost centre allocation, there would be a cube with a cost centre from, account from, cost centre to, account to breakdown. The allocation step is another dimension in the multi-step allocation scenario. No rules in such cubes.&lt;/li&gt;
&lt;li&gt;It&amp;rsquo;s worth discussing the &lt;code&gt;threshold of significance&lt;/code&gt; with the end users and building some logic to stop allocating below this threshold. Having $0.0000001 allocations only slows things down and brings no real value.&lt;/li&gt;
&lt;li&gt;I try to make the allocation process run as fast as possible (performance is my quirk, maybe because it&amp;rsquo;s the easiest thing to measure), mainly by limiting the amount of source data read and &amp;lsquo;tightening&amp;rsquo; the allocation loops (by pre-populating &amp;lsquo;possible&amp;rsquo; intersection subsets) &amp;ndash; all the tips from the &lt;a href=&#34;https://ykud.com/blog/cognos/tm1-cognos/tm1_ti_performance/&#34;&gt;performance post&lt;/a&gt; apply. Running a process per allocated source &amp;lsquo;record&amp;rsquo; is an anti-pattern I unfortunately see way too often: the overheads of starting a process add up fast.&lt;/li&gt;
&lt;/ul&gt;
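&lt;p&gt;To make the % input + threshold idea concrete, here&amp;rsquo;s a minimal Python sketch (illustrative only &amp;ndash; in PA this would be a TI process, and all the names below are made up):&lt;/p&gt;

```python
# Minimal sketch of the blueprint above (hypothetical names, not PA API):
# allocations are driven by an input table of from/to percentages, and
# anything below the agreed 'threshold of significance' is skipped.

THRESHOLD = 0.01  # agree this value with the end users

def allocate(source_amounts, split_percent, threshold=THRESHOLD):
    """source_amounts: {from_cc: amount}
    split_percent: {(from_cc, to_cc): fraction of the source amount}
    Returns full from/to records, as in the 'audit trail' cube."""
    records = []
    for (from_cc, to_cc), pct in split_percent.items():
        amount = source_amounts.get(from_cc, 0.0) * pct
        if abs(amount) > threshold:  # stop allocating below the threshold
            records.append((from_cc, to_cc, amount))
    return records

costs = {"CC_A": 1000.0}
splits = {("CC_A", "CC_1"): 0.75, ("CC_A", "CC_2"): 0.25}
print(allocate(costs, splits))
```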
</description>
    </item>
    
    <item>
      <title>Planning analytics modeling - General Ledger and surrounds</title>
      <link>https://ykud.com/blog/pa-modeling-gl/</link>
      <pubDate>Wed, 23 Jul 2025 01:59:54 +0000</pubDate>
      
      <guid>https://ykud.com/blog/pa-modeling-gl/</guid>
      <description>&lt;p&gt;&amp;ldquo;I&amp;rsquo;ve had this post in draft for years, and it&amp;rsquo;s a format I haven&amp;rsquo;t tried before: a set of notes on how I approach designing General Ledger (GL) and Finance models. This isn&amp;rsquo;t a &amp;lsquo;how-to guide,&amp;rsquo; but rather a collection of my personal considerations. I hope some of you find these insights useful.&lt;/p&gt;
&lt;p&gt;When starting a project with the Office of Finance, it&amp;rsquo;s often best to begin with objects like the General Ledger (GL), P&amp;amp;L, or Trial Balance. Because these are so closely tied to actuals, you can learn a great deal by examining how the financial module is structured within the ERP system. It&amp;rsquo;s a really good starting point: you can get to a production-ready result in days, show value by enabling the finance office to capture forecasts and provide consolidated results, get everybody on board for the overall journey, and move on to harder / more interesting bits.&lt;/p&gt;
&lt;p&gt;My overarching philosophy these days is that I&amp;rsquo;m trying to build systems that &amp;lsquo;last&amp;rsquo;, so that one day they&amp;rsquo;ll be used by people I&amp;rsquo;m not currently working with and supported by teams other than ours :) This means that things have to be as simple as possible so that they&amp;rsquo;re easy to understand and have fewer points of failure.&lt;/p&gt;
&lt;h1 id=&#34;main-gl-cube&#34;&gt;Main GL cube&lt;/h1&gt;
&lt;p&gt;With this in mind, the main cube in finance module always closely mimics the actual GL object in ERP, typically with the following dimensions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Version - surely we want multiple scenarios&lt;/li&gt;
&lt;li&gt;GL Account - more on it below&lt;/li&gt;
&lt;li&gt;Currency - this is one exception to the main &amp;lsquo;keep it very simple&amp;rsquo; rule. Even if you don&amp;rsquo;t need multi-currency now, add it with everything being &amp;lsquo;Local Currency&amp;rsquo; as adding it in future is a lot more cumbersome. I&amp;rsquo;ve personally had to work around the absence of a currency dimension in these cubes multiple times just this year :)&lt;/li&gt;
&lt;li&gt;Cost Centre - or any other dimension(s) that identify the actual posting in your source system (e.g., Company, Profit Centre, or Cost Centre). These can often be multiple dimensions.&lt;/li&gt;
&lt;li&gt;Measure - this is the most interesting dimension. I try to make it meaningful by having a separate element for every data &amp;lsquo;source&amp;rsquo; that we have (e.g. an &amp;lsquo;HR&amp;rsquo; element if there&amp;rsquo;s an HR portion of the model, capex, revenue planning, etc). The goal of having separate elements is to be able to quickly identify the source of data and allow reconciliations. Having an additional &lt;code&gt;Adjustment&lt;/code&gt; element (or multiple such elements) allows overlays on top of the other modules&amp;rsquo; data for last-minute changes or future postings in a month-end reporting process. TM1&amp;rsquo;s greatest strength is the ability to create a purpose-built model for each revenue and expense component, and this dimension is where all these models &amp;lsquo;plug in&amp;rsquo;.&lt;/li&gt;
&lt;li&gt;Time dimension - single year and month or week dimension with virtual hierarchies to provide different time consolidations for reporting&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Planning / forecasting is often done at a more aggregated level of data than actuals. For example, the chart of accounts has 20 accounts under &lt;code&gt;Office supplies&lt;/code&gt;, and nobody has the time to forecast pens and pencils separately (or the same story around cost centres).
This leads to a few potential solutions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Selecting a &amp;lsquo;placeholder&amp;rsquo; account to house forecast data (e.g., all forecasts are recorded on the &amp;lsquo;pens&amp;rsquo; account, while actuals are assigned correctly).&lt;/li&gt;
&lt;li&gt;Creating dedicated &amp;lsquo;forecasting&amp;rsquo; accounts within the chart of accounts specifically for forecast data (e.g., Office Supplies - Forecasting).&lt;/li&gt;
&lt;li&gt;Allocating consolidated input based on an actuals profile (e.g., last month, last year) to distribute data among the more granular accounts under &amp;lsquo;Office Supplies&amp;rsquo;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I prefer to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;have actuals at the granularity that allows easy reconciliation and limits the number of trips to the ERP system for reporting. For example: if we agree that &lt;code&gt;Office supplies&lt;/code&gt; is the forecasting level and limit the Account dimension in TM1 to this level, then actuals vs forecast reporting will require people to pull detailed by-account actual breakdowns from the ERP &amp;ndash; that&amp;rsquo;s a design fail in my book.&lt;/li&gt;
&lt;li&gt;capture all user input &amp;lsquo;unmodified&amp;rsquo; to allow tracing &amp;lsquo;who input what&amp;rsquo;, which rules out the allocate-on-input option. I like the &amp;lsquo;forecasting&amp;rsquo; accounts approach a lot and advocate for it, as it allows creating &amp;lsquo;lowest&amp;rsquo;-granularity forecast vs actuals reporting and is easy to trace / reason about.&lt;/li&gt;
&lt;/ul&gt;
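&lt;p&gt;A toy Python sketch of the &amp;lsquo;forecasting&amp;rsquo; accounts idea (account names and the rollup below are illustrative, not a real chart of accounts): granular actuals accounts and the dedicated input account consolidate to the same parent, so variance reporting works at the agreed level without trips to the ERP.&lt;/p&gt;

```python
# Illustrative sketch (not PA API) of the 'forecasting accounts' option:
# granular actuals accounts and one dedicated forecast-input account
# roll up to the same parent in the account hierarchy.

parents = {
    "Pens": "Office Supplies",
    "Pencils": "Office Supplies",
    "Office Supplies - Forecasting": "Office Supplies",
}

def rollup(values):
    """values: {leaf_account: amount} -> {parent_account: total}"""
    totals = {}
    for account, amount in values.items():
        parent = parents[account]
        totals[parent] = totals.get(parent, 0.0) + amount
    return totals

actuals = {"Pens": 120.0, "Pencils": 80.0}
forecast = {"Office Supplies - Forecasting": 210.0}
# forecast vs actuals compared at the agreed 'Office Supplies' level
variance = rollup(forecast)["Office Supplies"] - rollup(actuals)["Office Supplies"]
print(variance)
```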
&lt;h1 id=&#34;gl-breakdown&#34;&gt;GL breakdown&lt;/h1&gt;
&lt;p&gt;Another cube that I always put in is a &amp;lsquo;Line-Item&amp;rsquo; breakdown for any GL cube cell. It would normally have a slightly reduced set of dimensions (reducing the number of dimensions is the only reason to separate it from the main cube), for example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Version&lt;/li&gt;
&lt;li&gt;Cost centre or anything else identifying the actual posting in your source (can be multiple dimensions, i.e. a company and a Profit centre or Cost centre)&lt;/li&gt;
&lt;li&gt;Line item - 1, 2, 3, you got it&lt;/li&gt;
&lt;li&gt;Time + inputs dimension - a set of picklist inputs (for example Account dropdown) to populate in GL cube, comments, descriptions + all the months / weeks as in the usual time dimension&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This cube allows capturing all the extra details that you want for a single account in the GL cube &amp;ndash; for example, specific trips for the travel budget or different projects for the consultancy costs &amp;ndash; in a very simple way. The data from this cube will populate the GL cube (in a special &amp;lsquo;breakdown&amp;rsquo; measure element), so you can tie them back together.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s important to note that this cube typically does not contain actuals, as line items generally cannot be directly linked to actual data items. While loading actuals might be feasible if one of the dropdowns acts as a project code or similar identifier, at that point, it would likely be more efficient to create a dedicated dimension and a separate cube for that purpose.&lt;/p&gt;
&lt;p&gt;If the main cube we&amp;rsquo;re building involves balance movements, you can easily integrate debit/credit dropdowns to generate movements based on a single line of input.&lt;/p&gt;
&lt;p&gt;This breakdown cube is incredibly powerful. You can literally start with only this cube and then analyze the inputs from your initial forecast to identify which accounts or areas are most frequently used or carry the largest monetary values, these are the prime candidates for more detailed modules.&lt;/p&gt;
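&lt;p&gt;A minimal Python sketch of how the breakdown cube could feed the GL cube (all names below are made up for illustration): each line item carries a picklist account plus monthly values, and the lines sum into a &amp;lsquo;breakdown&amp;rsquo; measure per account and month, so the two cubes tie back together.&lt;/p&gt;

```python
# Hypothetical sketch of the line-item breakdown feeding the GL cube:
# lines carry picklist inputs (account) plus monthly values, and are
# summed into a special 'breakdown' measure per account/month.

line_items = [
    {"line": 1, "account": "Travel", "desc": "Conference trip",
     "months": {"Jan": 2000.0, "Feb": 0.0}},
    {"line": 2, "account": "Travel", "desc": "Client visit",
     "months": {"Jan": 500.0, "Feb": 500.0}},
    {"line": 3, "account": "Consulting", "desc": "Project X",
     "months": {"Jan": 3000.0, "Feb": 3000.0}},
]

def populate_gl(lines):
    """Returns {(account, month): total} for the 'breakdown' measure,
    so the GL cube reconciles back to the line-item detail."""
    gl = {}
    for item in lines:
        for month, value in item["months"].items():
            key = (item["account"], month)
            gl[key] = gl.get(key, 0.0) + value
    return gl

print(populate_gl(line_items)[("Travel", "Jan")])  # 2500.0
```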
</description>
    </item>
    
    <item>
      <title>Cognos Analytics - Everyone has an Admin license?</title>
      <link>https://ykud.com/blog/ca-licensing-notes/</link>
      <pubDate>Thu, 03 Jul 2025 01:59:54 +0000</pubDate>
      
      <guid>https://ykud.com/blog/ca-licensing-notes/</guid>
      <description>&lt;p&gt;To everyone&amp;rsquo;s surprise all users were flagged up as Administrators based on the assigned capabilities in the last couple Cognos Analytics license audits I was part of, so I&amp;rsquo;d thought I&amp;rsquo;d write this up.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s as easy (or as obscure) as 1,2,3:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;An obscure &lt;a href=&#34;https://www.ibm.com/docs/en/cognos-analytics/12.1.0?topic=security-user-capabilities#reportnetsecuredfunctionsandfeatures__Capability_Specification_Execution__title__1&#34;&gt;&lt;code&gt;Specification Execution&lt;/code&gt;&lt;/a&gt; capability is treated as an Administrator-level capability although it&amp;rsquo;s not under Administration group of capabilities:&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;Specification Execution&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;This secured function allows a user or Software Development Kit application to use an inline specification. The Specification Execution secured function is counted as an Analytics Administrators licence role.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;A built-in Cognos Role called &lt;code&gt;Data Manager Authors&lt;/code&gt; is &lt;a href=&#34;https://www.ibm.com/docs/en/cognos-analytics/12.1.0?topic=objects-specification-execution&#34;&gt;granted access to this capability by default&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Data Manager Authors&lt;/code&gt; &lt;a href=&#34;https://www.ibm.com/docs/en/cognos-analytics/12.1.0?topic=objects-data-manager-authors#gen_ref_initial_security_settings_Predefined_Objects_predefinedObjectsRoleDataManagerAuthors&#34;&gt;includes &lt;code&gt;All Authenticated Users&lt;/code&gt; role by default&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Hey presto, All Authenticated Users are counted as Administrators.&lt;/p&gt;
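&lt;p&gt;The 1-2-3 chain above can be sketched in Python (a toy model of nested role expansion, not the Cognos API; user names are invented):&lt;/p&gt;

```python
# Sketch of the chain: capabilities come from role membership, membership
# is transitive, and 'Specification Execution' is counted as an
# Administrator licence role.

role_members = {
    # 'Data Manager Authors' includes 'All Authenticated Users' by default
    "Data Manager Authors": {"All Authenticated Users"},
    "All Authenticated Users": {"alice", "bob"},
}
role_capabilities = {"Data Manager Authors": {"Specification Execution"}}

def users_of(role):
    """Expand nested role membership down to user names."""
    users = set()
    for member in role_members.get(role, set()):
        if member in role_members:
            users |= users_of(member)
        else:
            users.add(member)
    return users

def licence(user):
    for role, caps in role_capabilities.items():
        if user in users_of(role) and "Specification Execution" in caps:
            return "Analytics Administrator"
    return "Analytics User"

# every authenticated user ends up counted as an Administrator
print([licence(u) for u in sorted(users_of("Data Manager Authors"))])
```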
&lt;p&gt;I doubt it&amp;rsquo;s intentional; more likely this capability was required to run the Data Manager ETL jobs and the setup was never removed, even though Data Manager is long gone (I did like it, btw, a very simple tool). But nonetheless: please check your capability assignments and disable all the roles you don&amp;rsquo;t use (like this &lt;code&gt;Data Manager Authors&lt;/code&gt;) to make your audits boring.&lt;/p&gt;
&lt;p&gt;While we&amp;rsquo;re at it, the built-in &lt;a href=&#34;https://www.ibm.com/docs/en/cognos-analytics/12.1.0?topic=access-managing-user-licenses&#34;&gt;License usage report&lt;/a&gt; is very useful, bearing in mind that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;it records the user license based on the capabilities in effect the last time they logged in, so capability adjustments won&amp;rsquo;t be reflected in this report until users re-login&lt;/li&gt;
&lt;li&gt;if a user is no longer active &amp;ndash; the only way to reset the license count is to remove their user profile from Cognos Analytics, which will remove all personal objects / schedules&lt;/li&gt;
&lt;li&gt;&lt;em&gt;whispers&lt;/em&gt; &lt;code&gt;content_store.CMOBJPROPS33&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Simple planning analytics benchmarking model - tm1bench</title>
      <link>https://ykud.com/blog/tm1bench/</link>
      <pubDate>Tue, 29 Apr 2025 01:59:54 +0000</pubDate>
      
      <guid>https://ykud.com/blog/tm1bench/</guid>
      <description>&lt;p&gt;Been playing lately with understanding some performance aspects of PA (obviously comparing v11 vs v12, but this post won&amp;rsquo;t be about this).&lt;/p&gt;
&lt;p&gt;Ideally you should always be comparing your specific TM1 model on different software versions, hardware, etc, but sometimes it&amp;rsquo;s worth having a very simple model that doesn&amp;rsquo;t contain any sensitive data, but is &amp;lsquo;larger&amp;rsquo; than the sample models.&lt;/p&gt;
&lt;p&gt;So that&amp;rsquo;s exactly what &lt;a href=&#34;https://github.com/ykud/tm1bench&#34;&gt;tm1bench&lt;/a&gt; is: a very simple model that can have a lot of data.&lt;/p&gt;
&lt;p&gt;The model has just 3 cubes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sales by Customer, Product, Month, Version with some rule based calculations&lt;/li&gt;
&lt;li&gt;Price by Product, Month, Version&lt;/li&gt;
&lt;li&gt;Discount by Customer, Month, Version&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The number of elements in dimensions and the amount of data are parametrised in the &lt;code&gt;tm1bench Setup&lt;/code&gt; process, so you can set up a sample model with 10m, 50m, 100m or more cells in the Sales cube to run your tests against.&lt;/p&gt;
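&lt;p&gt;As a rough illustration (the numbers below are made up &amp;ndash; the real knobs live in the &lt;code&gt;tm1bench Setup&lt;/code&gt; process), the populated cell count is just the product of dimension sizes and data density:&lt;/p&gt;

```python
# Back-of-envelope sketch of how setup parameters translate into Sales
# cube size: populated cells are roughly
# customers x products x months x density, per version.

def sales_cells(customers, products, months, density):
    """Approximate populated cell count for a single version."""
    return int(customers * products * months * density)

# e.g. 5000 customers x 2000 products x 12 months at 10% density
print(sales_cells(5000, 2000, 12, 0.10))  # 12000000 cells
```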
&lt;p&gt;A couple of tests are included:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;loading data single threaded - &lt;code&gt;tm1bench Test sales data load&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;loading data in multiple threads - &lt;code&gt;tm1bench Test sales data load parallel&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;reading data from Sales cube to check how fast rule calculations work - &lt;code&gt;tm1bench Test sales data read&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Notes on security setup in Planning Analytics</title>
      <link>https://ykud.com/blog/pa-security/</link>
      <pubDate>Thu, 27 Feb 2025 01:59:54 +0000</pubDate>
      
      <guid>https://ykud.com/blog/pa-security/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been tweaking a few security models in PA recently, so it&amp;rsquo;s a good opportunity to jot down some thoughts. Here&amp;rsquo;s a list of ideas in no particular order.&lt;/p&gt;
&lt;h1 id=&#34;testing-security&#34;&gt;Testing security&lt;/h1&gt;
&lt;p&gt;First of all, security is very boring and quite hard to test &amp;amp; verify, so it often gets overlooked. You need the ability to &amp;lsquo;see&amp;rsquo; things as a user, and PAW has no built-in impersonation feature (although there&amp;rsquo;s a REST API call for it, so it&amp;rsquo;s possible), so having a few dummy accounts you can log in to is a must.
A simple process of copying groups from a target user to a dummy user is very helpful.&lt;/p&gt;
&lt;h1 id=&#34;tm1-security-groups&#34;&gt;TM1 security groups&lt;/h1&gt;
&lt;h2 id=&#34;data-vs-object-groups&#34;&gt;Data vs object groups&lt;/h2&gt;
&lt;p&gt;I usually separate security groups in TM1 into 2 categories:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;groups that define access to TM1 objects (cube, dimension, process security) &amp;ndash; I used to call them &amp;ldquo;application&amp;rdquo; groups, sometimes &amp;ldquo;functional roles&amp;rdquo;, sometimes &amp;ldquo;object&amp;rdquo; ones.&lt;/li&gt;
&lt;li&gt;groups that define access to TM1 data (element security) - something along the lines of  &amp;ldquo;data groups&amp;rdquo;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This split keeps security definitions contextually separated: you can sync the type 1 groups between different environments, whereas type 2 groups are environment specific.&lt;/p&gt;
&lt;p&gt;PAW groups usually map quite nicely to &amp;lsquo;application&amp;rsquo; or &amp;lsquo;functional&amp;rsquo; ones, as they define the books people have access to. So you have a &amp;lsquo;Revenue forecasting&amp;rsquo; PAW group that translates into a set of &amp;lsquo;what objects should they see in TM1&amp;rsquo; groups. &lt;a href=&#34;https://www.ykud.com/blog/paw-security/&#34;&gt;Syncing PAW groups to TM1 streamlines this step&lt;/a&gt; (still waiting for a PAW security API, sigh).&lt;/p&gt;
&lt;h2 id=&#34;groups-by-user-or-groups-by-object&#34;&gt;Groups by user or groups by object?&lt;/h2&gt;
&lt;p&gt;I try to keep the number of groups in TM1 to some reasonable amount, so if I&amp;rsquo;m looking at a design that entails thousands of groups, I start to question whether it&amp;rsquo;s easier to pivot it around, have a group per user and assign security that way. A large number of groups makes everything security related slow(er), so it&amp;rsquo;s better to keep it low. If you end up with the group-per-user approach &amp;ndash; include a process to delete old groups, as they will accumulate.&lt;/p&gt;
&lt;p&gt;You should try to have a group per some entity (cost centre, project) and avoid creating combination groups, like CC_X_Account_Y &amp;ndash; that&amp;rsquo;s usually a sign of some odd assumptions.&lt;/p&gt;
&lt;h1 id=&#34;cell-security&#34;&gt;Cell security&lt;/h1&gt;
&lt;p&gt;I still run into the &amp;lsquo;cell security rules make things slow&amp;rsquo; concept every now and then. Overall, CellSecurity makes things slow in 2 cases:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;when you&amp;rsquo;re using input data to define cell security &amp;ndash; this drives extra calculations in PAW &lt;a href=&#34;https://www.ykud.com/blog/cognos/tm1-cognos/tm1-cell-security/&#34;&gt;as I wrote about here&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;when you have users with a lot of groups as PA server needs to evaluate Cell Security for every group before calculating the final level of access that user will have. This is where those &amp;lsquo;object/functional/application&amp;rsquo; security groups come to help again &amp;ndash; writing rules against only this group (&lt;code&gt;[&#39;Revenue forecasting&#39;]=S:...&lt;/code&gt;) will make the rules very specific and fast.&lt;/li&gt;
&lt;/ol&gt;
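&lt;p&gt;A toy Python model of the second case (not PA internals, just the shape of the cost; all names are invented): security is evaluated per group and the most permissive result wins, so every extra group means extra rule evaluations per cell.&lt;/p&gt;

```python
# Illustrative sketch: the final access level is the most permissive one
# across all of a user's groups, so each group costs one rule evaluation.

ACCESS_RANK = {"NONE": 0, "READ": 1, "WRITE": 2}

def effective_access(user_groups, rule_for_group, cell):
    """Evaluate the cell security rule once per group, keep the max."""
    best = "NONE"
    for group in user_groups:
        access = rule_for_group(group, cell)
        if ACCESS_RANK[access] > ACCESS_RANK[best]:
            best = access
    return best

# a rule written against a single 'functional' group stays specific and cheap
def rule(group, cell):
    if group == "Revenue forecasting" and cell["version"] == "Forecast":
        return "WRITE"
    return "NONE"

print(effective_access(["Revenue forecasting", "Sales"], rule,
                       {"version": "Forecast"}))
```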
&lt;h1 id=&#34;avoiding-securityrefresh&#34;&gt;Avoiding SecurityRefresh&lt;/h1&gt;
&lt;p&gt;I try to avoid &lt;a href=&#34;https://www.ibm.com/docs/en/planning-analytics/2.1.0?topic=stf-securityrefresh&#34;&gt;SecurityRefresh&lt;/a&gt; as much as I can, especially if there&amp;rsquo;s a chance it&amp;rsquo;ll be run during normal hours (e.g. users assigning security). SecurityRefresh is a global locking operation (it&amp;rsquo;s quite bad when the system takes an unexpected breather for all users without any clear reason &amp;ndash; it erodes trust) and it will get slower over time, so having a system without it just alleviates the need to think about it.&lt;/p&gt;
&lt;p&gt;The common reasons for needing a &lt;code&gt;SecurityRefresh&lt;/code&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Using rules for Element Security &amp;ndash; you can and should use the TI with &lt;a href=&#34;https://www.ibm.com/docs/en/planning-analytics/2.1.0?topic=functions-elementsecurityput&#34;&gt;ElementSecurityPut&lt;/a&gt; instead, there&amp;rsquo;s rarely anything that dynamic in the security that requires rules and moreover they won&amp;rsquo;t update until someone presses a button that triggers a SecurityRefresh. This button might as well do ElementSecurityPut instead :)&lt;/li&gt;
&lt;li&gt;You&amp;rsquo;re populating ElementSecurity or ClientGroups via a TI and want to be able to handle &amp;lsquo;removal&amp;rsquo; of security. E.g. say user A is in group G now, but in your security source this intersection is blank. This is commonly handled by doing a ViewZeroOut of a security cube, writing data to it (or doing AssignClient&amp;rsquo;s / ElementSecurityPut&amp;rsquo;s) and then running a SecurityRefresh. SecurityRefresh is required because otherwise the PA server wouldn&amp;rsquo;t &amp;lsquo;see&amp;rsquo; that security was removed during the ZeroOut.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;An alternative approach (only if you really need refreshes during the day) can be to create a cube that calculates the required changes (including removals) and use it as a source for applying security.
Using &lt;code&gt;}ClientGroups&lt;/code&gt; as an example, you&amp;rsquo;d create a&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;cube like &lt;code&gt;ClientGroupsDetectChanges&lt;/code&gt; with dimensions like:
&lt;ul&gt;
&lt;li&gt;}Clients&lt;/li&gt;
&lt;li&gt;}Groups&lt;/li&gt;
&lt;li&gt;Measure
&lt;ul&gt;
&lt;li&gt;Apply changes - a consolidation with +1 / -1 weights to define whether to add or remove the user
&lt;ul&gt;
&lt;li&gt;Current - write from }ClientGroups&lt;/li&gt;
&lt;li&gt;Future - write from your security source&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And then a TI using the &lt;code&gt;Apply changes&lt;/code&gt; measure as a source to &lt;code&gt;AssignClientToGroup&lt;/code&gt; or &lt;code&gt;RemoveClientFromGroup&lt;/code&gt;.&lt;/p&gt;
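&lt;p&gt;The same change-detection idea in a toy Python sketch (hypothetical names; in PA this would be the cube and TI described above): Future carries weight +1 and Current weight -1, so a non-zero result tells you which action to take.&lt;/p&gt;

```python
# Sketch of the 'detect changes' logic: compare current group assignments
# with the security source; +1 means assign, -1 means remove, 0 means no
# action, so no SecurityRefresh is needed.

current = {("userA", "G"), ("userB", "G")}   # as in }ClientGroups now
future = {("userB", "G"), ("userC", "G")}    # from your security source

def apply_changes(current, future):
    """Mimics the 'Apply changes' consolidation: Future has weight +1,
    Current has weight -1; non-zero results are the actions to take."""
    actions = {}
    for key in current | future:
        weight = (1 if key in future else 0) - (1 if key in current else 0)
        if weight == 1:
            actions[key] = "AssignClientToGroup"
        elif weight == -1:
            actions[key] = "RemoveClientFromGroup"
    return actions

print(apply_changes(current, future))
```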
&lt;h1 id=&#34;other&#34;&gt;Other&lt;/h1&gt;
&lt;h2 id=&#34;securityoverlays&#34;&gt;SecurityOverlays&lt;/h2&gt;
&lt;p&gt;I hadn&amp;rsquo;t used &lt;a href=&#34;https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=developers-security-overlay&#34;&gt;SecurityOverlays&lt;/a&gt; until a recent project, where they were used to improve CellSecurity performance: Overlays defined security on some dimensions and therefore allowed a reduced number of dimensions in CellSecurity.
Rewriting those CellSecurity rules made the Overlays redundant, so we removed them :)&lt;/p&gt;
&lt;p&gt;I do see potential use cases for Overlays, but they add another thing to think about in regards to security, and it&amp;rsquo;s hard enough as it is in most cases.&lt;/p&gt;
&lt;h2 id=&#34;datareservation&#34;&gt;DataReservation&lt;/h2&gt;
&lt;p&gt;I had a lot of fun with &lt;a href=&#34;https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=functions-data-reservation-turbointegrator&#34;&gt;DataReservation&lt;/a&gt; functions on every model that was using &lt;a href=&#34;https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=resources-tm1-applications&#34;&gt;TM1 Applications&lt;/a&gt; / Contributor, and never had a use for them outside of TM1 Applications. Despite most people thinking that they need &amp;lsquo;exclusive&amp;rsquo; access to data, they almost never do, and having it creates an extra layer of complexity when transferring data ownership (e.g. handling people being on leave, last minute changes, etc). Having a transaction log is enough to trace &amp;lsquo;who changed my data&amp;rsquo; in most cases. This opens up a whole &amp;lsquo;how complicated does your workflow really need to be&amp;rsquo; topic, but this post is way too long already :)&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>