
How to Recreate Table Calculations with LODs & Nested LODs in Tableau

Updated: May 11, 2023

Table calculations are an integral and valuable tool in Tableau. For those unfamiliar with them, table calculations are special types of calculations performed after aggregation and filtering. They are computed from what is currently in the visualization and do not consider any measures or dimensions that have been filtered out of it. Common use cases include functions like Percent of Total, Rank, and Running Total, and these just scratch the surface of what you can do with them.

These have come a long way since I began using Tableau about 10 years ago. They are now highly customizable and can even be nested within other calculations (making those calculations table calcs themselves), which affords some powerful data transformations. Though they are powerful, they can also be limiting when you want to apply filters to your viz without impacting the table calculations themselves. This is where LODs come in handy: you can use them to achieve the same results as table calculations but without the same limitations.


In theory, any table calculation programmed in Tableau should be reproducible with LOD expressions. Whether or not it is worth investing your time into figuring out how to make it work and verifying the results is a question you need to answer on your own.


The utility you gain and the amount of time it takes are the two biggest determinants, and they vary with each use case.


When to Recreate Table Calculations


Regarding utility, by recreating table calculations with LODs you end up with a more flexible calculation that can interact with filters in different ways. With table calcs, context, dimension, and measure filters all happen prior to the table calculations themselves; using LODs gives you more influence over which filters impact the calculation, enabling you to do things such as filter a field's values out of the viz while the calculation still behaves as if they were there.


As for the time commitment, that is more of a gut feeling, but it also depends on your ability and familiarity with Level of Detail expressions.


The simpler your visualization is, the easier these will be to create. Something simple like:

{FIXED [Date] : MAX([Sales])}

is the equivalent of

WINDOW_MAX(MAX([Sales]))

with Date unchecked in the table calculation window. As a rule of thumb, the more dimensions you have and the more complicated your table calc is, the more complex your LOD expressions will be.


The added complexity doesn’t come from the extra dimensions themselves; rather, it comes from the need to use nested LOD expressions to account for the increased dimensionality of your viz.
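
For example, a WINDOW_MAX(SUM([Sales])) computed across a viz built on both [Date] and [Region] can’t be reproduced with a single LOD: you need an inner LOD to rebuild the viz-level aggregation and an outer one to take the max over it. A minimal sketch, assuming illustrative field names:

// Inner LOD: rebuild the per-cell aggregation of the viz
// (one SUM of [Sales] per [Date]/[Region] combination).
// Outer LOD: take the max across all of those cells, just as the
// table calc would across the whole table.
{FIXED : MAX({FIXED [Date], [Region] : SUM([Sales])})}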


When nesting LODs, there are a few things under the hood that are very useful to be aware of and keep top of mind. These nesting rules are based entirely on my personal experience through many years of Tableau use. There may be better resources out there for learning the ins and outs of working with LOD expressions, but these are what we need to keep in mind when trying to recreate a table calc using LODs:

  1. Inheritance is recursive: each nested LOD inherits from its immediate parent LOD, all the way down to the innermost one.

  2. Using a FIXED LOD causes all nested calculations to function as if the fields being fixed on had been added via an INCLUDE LOD.

  3. Using a FIXED LOD inside another FIXED LOD overrides the outer fixing and helps prevent the dimensions in the outer FIXED from carrying through to the LODs further in.

Also, there doesn’t seem to be a limit to how deeply you can nest, which makes it possible to break your calculation up into smaller pieces to get the result you need.
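
To make these rules concrete, here is a minimal sketch, assuming hypothetical [Region] and [Category] dimensions:

// Rules 1 and 2: the inner LOD inherits [Region] from the outer FIXED as
// if it were INCLUDEd alongside [Category], so COUNTD([Region]) returns 1.
{FIXED [Region] : MIN({INCLUDE [Category] : COUNTD([Region])})}

// Rule 3: a nested FIXED overrides the outer fixing, so here the COUNTD
// sees the whole data source and returns the total number of regions.
{FIXED [Region] : MIN({FIXED : COUNTD([Region])})}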

[Image: example of using a nested LOD in a Tableau calculation]

Now that we have a more nuanced understanding of working with these LODs, I can show you how I recreated the standard deviation table calculation using them.


Standard Deviation Example and Calculating P-Value in Tableau


It wasn’t that long ago that I had to make the decision to recreate a table calculation using LOD expressions. I had built a dashboard to show significant occurrences of a certain action being taken.


There were a few nuances that required something more flexible than a table calculation. While the calculation below may look like a monster, it actually wasn’t that hard to build out. Probably the hardest part was making sure the order of operations was happening correctly. I originally had it separated out into its individual components to make it easier to troubleshoot individual pieces of the equation as needed.


The standard deviation calculation starts at the “SQRT” expression right above Box 1.

[Image: the full standard deviation calculation in Tableau, with Box 1 and Box 2 annotated]

Nesting LODs is the only way this works. Look at Box 1: it fixes on the chosen dimensions. Now look at Box 2: if the fixed dimensions weren’t reset with an empty FIXED expression, the COUNTD on the Dimension Selector would return 1, because it would be fixed on that same dimension. Adding another FIXED LOD allows the COUNTD of the Dimension Selector field to return the distinct number of members in whatever dimension is being used there.
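
Here is a stripped-down sketch of that structure; [Sales] is a stand-in measure, and the real calculation in the screenshot has more going on:

{FIXED [Dimension Selector] : // Box 1: fixed on the chosen dimension
    SUM([Sales])
    / MIN({FIXED : COUNTD([Dimension Selector])})
    // Box 2: the empty FIXED resets the outer fixing, so COUNTD returns
    // the number of distinct members instead of 1
}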


At this point, there is no additional need to use LODs. Although I have my standard deviation and, as a result of the rest of the calculation, the Z-score, I still needed the p-value to determine significance.


Going from Z-Score to P-Value:


Normally, when you have a Z-score, you look up the corresponding p-value on a z-table based on the type of test you are doing, which isn’t really feasible in Tableau. Of course, z-tables had to come from somewhere; sure enough, I was able to find numerous ways to approximate the p-value based on a given Z-score.

I found this formula on forums and saw it referenced in white papers. Although I can’t find the exact white paper where I found the cleanest explanation of the formula, I believe it was linked here: https://stats.stackexchange.com/questions/7200/evaluate-definite-interval-of-normal-distribution. I was also able to validate it on the dataset using a local installation of TabPy (restrictions prevented us from deploying TabPy on the server, which would have eliminated the need for this approximation). The formula I ended up using is as follows:


Copy and Paste-able Calculation for Tableau

IF [Z score] < 0 THEN
    // Cap extreme values; the approximation goes haywire as z approaches -11
    IF [Z score] < -10 THEN -10
    ELSE
        1/(1 + EXP(-SQRT(PI()) * ((-0.0004406*([Z score]^5)) + (0.0418198*([Z score]^3)) + (0.9*[Z score]))))
    END
ELSE
    // Same cap on the positive side
    IF [Z score] > 10 THEN 10
    ELSE
        1/(1 + EXP(SQRT(PI()) * ((-0.0004406*([Z score]^5)) + (0.0418198*([Z score]^3)) + (0.9*[Z score]))))
    END
END

Slightly more readable version, with “y” standing for the p-value and “x” for the Z-score (and the same capping at -10 and 10 as above):

For x < 0:  y = 1 / (1 + e^(-√π(-0.0004406x⁵ + 0.0418198x³ + 0.9x)))

For x ≥ 0:  y = 1 / (1 + e^(√π(-0.0004406x⁵ + 0.0418198x³ + 0.9x)))
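
As a quick sanity check, plugging a Z-score of 1.96 into the second case (rounded):

-0.0004406(1.96^5) + 0.0418198(1.96^3) + 0.9(1.96) ≈ 2.0661
√π × 2.0661 ≈ 3.6621
y ≈ 1 / (1 + e^3.6621) ≈ 0.025

which matches the familiar one-tailed p-value for z = 1.96.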

This is slightly modified versus what I found in the white paper, because when z was less than 0 the denominator would evaluate to a number between 0 and 1, which is fine for a one-tailed test, but I needed a two-tailed test. The other difference you’ll see is some hard-coding when the Z-score is above 10 or below -10: the p-value looks like it is going to asymptote to 1 until your Z-score gets close to -11 or positive 11, and then you see some “crazy” behavior. To validate that I wasn’t misusing the approximation formula I found, I used Desmos (https://www.desmos.com/), an online graphing calculator, to graph these functions and visualize what was going on.


Graph Legend:

  • Dashed orange line shows how the denominator should evaluate if the Z score is positive

  • Dashed blue line shows how the denominator should evaluate if the Z score is negative

  • Solid orange line shows how the entire expression evaluates (the p-value) for a positive Z score

  • Solid blue line shows how the entire expression evaluates (the p-value) for a negative Z score

[Image: line graph of the functions above, visualizing their behavior across the Z-score range]

After graphing these functions and seeing their behavior for myself, I was comfortable capping positive Z-scores at 10 and negative Z-scores at -10.


Example Use Case:


I mentioned the main benefit above, but I want to provide more detail before moving on to the example. Since I used only FIXED LODs in the Z-score calculation, I can filter on dimensions and measures without them impacting the calculation at all, though I still have the option of setting a context filter if I want it applied before the calculation. When using a table calc, our calculation won’t evaluate until after the context, dimension, and measure filters are applied, and we have no control over that.
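
As a minimal illustration, with hypothetical Superstore-style field names:

// A FIXED LOD is computed before dimension and measure filters, so this
// still returns each customer's lifetime sales even when the viz is
// filtered down to a single year. Add the filter to context if you do
// want it applied first.
{FIXED [Customer Name] : SUM([Sales])}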


I’ve included an example of this in action (albeit a contrived one) in the attached workbook. The tab appropriately named “Example” contains a dashboard showing customers sorted descending by their sales amount. Those colored orange are the most or least profitable. Hint: you can remove the condition for the negative Z-score calculation if you only want to flag the most profitable, which would make more sense in this fictitious situation. Let’s say you want to look at who your most impactful customers are that had orders in 2022, but still evaluate them relative to the lifetime value of all customers. This is a situation where not using a table calculation is advantageous, because we can filter to only their sales from 2022 without impacting our calculation.


The top viz shows the customers that are most or least profitable, with the underlying standard deviation and Z-score calculations performed with LODs; the bottom viz shows the same view but with calculations based on table calculations. When no data is being filtered out, you can see that the calculated p-value and the coloring are identical between the two vizzes.

[Image: the two vizzes side by side with no filters applied]

As you uncheck years in the filter, you’ll see that the customer names and their order remain aligned between the two vizzes, but the colors, which represent the most and least profitable customers, do not. The viz on top, using the LOD calculations, remains consistent in its calculation and coloring, while the bottom changes based on the years that are filtered out.

[Image: the two vizzes after filtering out years, showing the coloring diverge in the bottom viz]

Closing Thoughts


That’s all I have for this example. While it may not represent a situation you’ve encountered yet, hopefully it does its job of conveying how this technique can be leveraged.

Recreating table calculations with Level of Detail expressions in Tableau can provide significant advantages in terms of flexibility and control over the interaction with filters. By employing nested LODs and understanding the rules that govern their behavior, you can break down complex calculations into more manageable components, allowing for easier troubleshooting and fine-tuning.


The ability to leverage LOD expressions when working with table calculations opens up new possibilities for your data visualizations and empowers you to create more tailored and insightful analyses. While it may require an investment of time and effort, the benefits gained from improved flexibility and control can make it well worth the undertaking. As you continue to work with Tableau and LOD expressions, you'll gain greater proficiency, making the process of recreating table calculations even more efficient and effective in the future.
