Devi Vanukuru

CRM Analytics Dashboard Optimization Best Practices

Updated: Sep 27, 2022

Overview:


In dashboards, visualizations are built on top of queries, and the queries are built on top of the data. Therefore, we must consider optimization at all three layers for optimal dashboard performance.


The Data Layer:


CRM Analytics uses a proprietary data format with inverted indexes that’s optimized for both volume and speed. Each field is stored as its own file. Measures are stored in row order, with one value per record. Dimensions are stored differently: as key-value pairs, where each unique dimension value is a key and the value is the list of row indexes in which it occurs. This compresses large datasets significantly and speeds up queries, because the row indexes for each value are precomputed. Date fields are broken down into component dimensions plus epoch-value measures, which are stored the same way.
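As a rough illustration of the inverted-index idea (using a hypothetical 'Stage' dimension, not the actual on-disk format), a four-row dimension column might be laid out like this:

```
row values:  [New, Won, New, Lost]

inverted index:
  "New"  -> rows [0, 2]
  "Won"  -> rows [1]
  "Lost" -> rows [3]
```

Filtering on 'Stage' == "Won" then only needs to look up one key to find the matching rows, rather than scanning every record.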

The Visualization Layer:


Visualizations are rendered on the client side, that is, in our web browser, so a low-performance computer can slow down rendering. The query engine runs against the dataset on the server side. When we work with large amounts of data, the servers can return a lot of data to the browser, which can take longer to render depending on the machine and the browser version. Take this into account when designing the dashboard: reducing the number of queries on each page speeds up visualization rendering time.


To check for issues on the client side, right-click the page showing the dashboard, click Inspect, open the Network tab, and filter for the word “query”. This shows the load time for each query, and a waterfall chart shows how the queries run over time. In the latest browsers, up to six queries can run at a time. We can also check network and browser performance at <your org url>/speedtest.jsp. Most of the time, however, a performance issue comes from the queries, not the visualizations.


The Query Engine:


It’s a good idea to check the performance of each query. While in preview mode, click More (the three dots), then click Dashboard Inspector. This will show you the time each query takes to run. We can also check the performance of the whole dashboard. While in the dashboard inspector, click Performance, then click Run Performance Check. This will check for redundant queries, the number of queries, whether any queries can be combined, and the initial load time.


All queries are executed in SAQL. Queries written in compact form, such as those created in a lens, are automatically optimized when converted to SAQL. Once we manually edit the SAQL, those automatic optimizations may no longer apply.
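For reference, a lens-style aggregation converted to SAQL typically looks like the following sketch (the dataset and field names here are hypothetical):

```saql
q = load "opportunities";
q = group q by 'StageName';
q = foreach q generate 'StageName' as 'StageName', sum('Amount') as 'sum_Amount';
q = order q by 'sum_Amount' desc;
q = limit q 2000;
```

The load, group, foreach, order structure is the shape the optimizer expects; hand edits that deviate from it are where performance problems tend to creep in.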


It’s best practice to move as many calculations as possible to the data layer, such as fields generated with case statements in SAQL. That is, generate them in a recipe or dataflow when creating the dataset. But some situations require editing the SAQL. In that case, keep the query structure and general optimization principles in mind, such as choosing pre-projection over post-projection filters where possible.
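To illustrate that distinction (again with hypothetical dataset and field names): a pre-projection filter runs against the raw rows before grouping, where the inverted index can be used, while a post-projection filter runs on the projected result after aggregation:

```saql
q = load "opportunities";
-- pre-projection filter: applied to the raw rows, can use the inverted index
q = filter q by 'Region' == "EMEA";
q = group q by 'StageName';
q = foreach q generate 'StageName' as 'StageName', sum('Amount') as 'sum_Amount';
-- post-projection filter: applied to the grouped, projected result
q = filter q by 'sum_Amount' > 100000;
```

The earlier a filter runs, the less data every later statement has to process.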


If possible, reduce the size of the dataset. If post-projection filters are needed, try to reduce the amount of data going to them. When possible, bucket measures into dimensions and filter those instead of filtering measures because dimensions are already indexed. Avoid redundant filter logic. And avoid using the matches operator by creating Boolean flag variables in the data layer instead.
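For example, assuming a hypothetical 'Is_Cloud_Product' flag precomputed in the recipe or dataflow, the indexed flag filter in the second statement below should outperform the matches scan in the first:

```saql
-- slower: matches must scan the dimension's values for a substring
q = filter q by 'Product_Name' matches "Cloud";

-- faster: an indexed Boolean flag computed in the data layer
q = filter q by 'Is_Cloud_Product' == "true";
```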


Concurrency:


There can be as many as 100 queries running at any moment within an org. Any over that are put into a queue. So instead of having many queries on one page, split the dashboard into multiple pages with links to each other. Minimizing the number of queries on each page can help prevent running into this limit when multiple users load the page simultaneously.


Other Tips:


· Try to avoid results binding on large datasets. Most use cases can be solved by creating common fields and using connect data sources instead.


· Replace complex filter conditions with a Boolean flag field that is true when the condition holds, and filter on that flag instead.


· For values tables querying large datasets, use only one generate statement.


· For large external datasets, consider grouping the data by its most used grouping, applying any filters that you can, then uploading it.


· Try to reuse queries as much as you can. If a query exists on multiple pages, it’s only run on the first page it’s loaded on, not the remaining pages.


· Use global filters instead of list selectors. A list selector requires a query when it loads, and another query runs whenever a selection is made, which can greatly increase dashboard load time. Global filters can’t support targeted faceting or bindings, but static filters can be used instead. And global filters are multi-select enabled by default, so no selection criteria need to be set.


· Finally, when possible, try to power multiple widgets with a single query, which is usually done with KPIs.
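As a sketch of that last tip (hypothetical dataset and field names), a single group-by-all query can project several measures, and each KPI number widget can then reference a different column of the same query result:

```saql
q = load "opportunities";
q = group q by all;
q = foreach q generate sum('Amount') as 'Total_Amount',
    avg('Amount') as 'Avg_Amount', count() as 'Opportunity_Count';
```

Three widgets, one query, one trip to the server.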


References:


“Tableau CRM - Dashboard Best Practices.” Tableau CRM Learning Days. Accessed May 21, 2021. https://tableaucrm.hubs.vidyard.com/watch/Xt89jzVy2Aa2TLZSjZGQdE.
