I need to calculate a rolling sum over a date range. To illustrate, using the AdventureWorks sample database, the following hypothetical syntax would do exactly what I need:
SELECT
    TH.ProductID,
    TH.TransactionDate,
    TH.ActualCost,
    RollingSum45 = SUM(TH.ActualCost) OVER (
        PARTITION BY TH.ProductID
        ORDER BY TH.TransactionDate
        RANGE BETWEEN
            INTERVAL 45 DAY PRECEDING
            AND CURRENT ROW)
FROM Production.TransactionHistory AS TH
ORDER BY
    TH.ProductID,
    TH.TransactionDate,
    TH.ReferenceOrderID;
Sadly, the RANGE window frame extent does not currently allow intervals in SQL Server.
I know I could write a solution using a subquery and a regular (non-window) aggregate:
SELECT
    TH.ProductID,
    TH.TransactionDate,
    TH.ActualCost,
    RollingSum45 =
    (
        SELECT SUM(TH2.ActualCost)
        FROM Production.TransactionHistory AS TH2
        WHERE
            TH2.ProductID = TH.ProductID
            AND TH2.TransactionDate <= TH.TransactionDate
            AND TH2.TransactionDate >= DATEADD(DAY, -45, TH.TransactionDate)
    )
FROM Production.TransactionHistory AS TH
ORDER BY
    TH.ProductID,
    TH.TransactionDate,
    TH.ReferenceOrderID;
Given the following index:
CREATE UNIQUE INDEX i
ON Production.TransactionHistory
(ProductID, TransactionDate, ReferenceOrderID)
INCLUDE
(ActualCost);
The execution plan is:
While not hugely inefficient, it seems it should be possible to express this query using only the window aggregates and analytic functions supported in SQL Server 2012, 2014, or 2016 (so far).
To be clear, I am looking for a solution that performs a single pass over the data.
In T-SQL this is likely to mean that the OVER clause will do the work, and the execution plan will feature Window Spools and Window Aggregates. All language elements that use the OVER clause are fair game. A SQLCLR solution is acceptable, provided it is guaranteed to produce correct results.
For T-SQL solutions, the fewer Hashes, Sorts, and Window Spools/Aggregates in the execution plan, the better. Feel free to add indexes, but separate structures are not allowed (no pre-computed tables kept in sync with triggers, for example). Reference tables are allowed (tables of numbers, dates, and so on).
Ideally, solutions will produce exactly the same results, in the same order, as the subquery version above, but anything arguably correct is also acceptable. Performance is always a consideration, so solutions should at least be reasonably efficient.
Dedicated chat room: I have created a public chat room for discussions related to this question and its answers. Any user with at least 20 reputation points can take part directly. Please ping me in a comment below if you have less than 20 rep and would like to participate.
Great question, Paul! I used a couple different approaches, one in T-SQL and one in CLR.
T-SQL quick summary
The T-SQL approach can be summarized as the following steps:
Using SET STATISTICS IO ON, this approach reports Table 'TransactionHistory'. Scan count 1, logical reads 484, which confirms the "single pass" over the table. For reference, the original loop-seek query reports Table 'TransactionHistory'. Scan count 113444, logical reads 438366.
As reported by SET STATISTICS TIME ON, the CPU time is 514ms. This compares favorably to 2231ms for the original query.
CLR quick summary
The CLR approach can be summarized as the following steps:
Using SET STATISTICS IO ON, this approach reports that no logical I/O has occurred! Wow, a perfect solution! (Actually, it seems that SET STATISTICS IO does not report I/O incurred within CLR. But from the code, it is easy to see that exactly one scan of the table is made, retrieving the data in order by the index Paul suggested.)
As reported by SET STATISTICS TIME ON, the CPU time is now 187ms. So this is quite an improvement over the T-SQL approach. Unfortunately, the overall elapsed time of both approaches is very similar at about half a second each. However, the CLR-based approach does have to output 113K rows to the console (vs. just 52K for the T-SQL approach that groups by product/date), so that's why I've focused on CPU time instead.
Another big advantage of this approach is that it yields exactly the same results as the original loop/seek approach, including a row for every transaction even in cases where a product is sold multiple times on the same day. (On AdventureWorks, I specifically compared row-by-row results and confirmed that they tie out with Paul's original query.)
A disadvantage of this approach, at least in its current form, is that it reads all of the data into memory. However, the algorithm that has been designed only strictly needs the current window frame in memory at any given time and could be updated to work for data sets that exceed memory. Paul has illustrated this point in his answer by producing an implementation of this algorithm that stores only the sliding window in memory. This comes at the expense of granting higher permissions to the CLR assembly, but would definitely be worthwhile in scaling this solution up to arbitrarily large data sets.
T-SQL - one scan, grouped by date
Initial setup
The query
The execution plan
From the execution plan, we see that the original index proposed by Paul is sufficient to allow us to perform a single ordered scan of Production.TransactionHistory, using a merge join to combine the transaction history with each possible product/date combination.
Assumptions
There are a few significant assumptions baked into this approach. I suppose it will be up to Paul to decide whether they are acceptable :)
- It uses the Production.Product table. This table is freely available in AdventureWorks2012 and the relationship is enforced by a foreign key from Production.TransactionHistory, so I interpreted this as fair game.
- It relies on transaction dates not having a time component in AdventureWorks2012; if they did, generating the full set of product/date combinations would no longer be possible without first taking a pass over the transaction history.
- The result set contains at most one row per product/date combination, with a NumOrders column added to indicate how many sales occurred. See the following screenshot for a comparison of the results of the original query vs. the proposed query in cases where a product was sold multiple times on the same date (e.g., ProductID 319 / 2007-09-05 00:00:00.000).
CLR - one scan, full ungrouped result set
The main function body
There isn't a ton to see here; the main body of the function declares the inputs (which must match the corresponding SQL function), sets up a SQL connection, and opens the SQLReader.
The core logic
I've separated out the main logic so that it's easier to focus on:
Helpers
The following logic could be written inline, but it's a little easier to read when it is split out into separate helper methods.
Tying it all together in SQL
Everything up to this point has been in C#, so let's see the actual SQL involved. (Alternatively, you can use this deployment script to create the assembly directly from the bits of my assembly rather than compiling yourself.)
Caveats
The CLR approach provides a lot more flexibility to optimize the algorithm, and it could probably be tuned even further by an expert in C#. However, there are also downsides to the CLR strategy. A few things to keep in mind:
- The streaming variant requires setting the database to TRUSTWORTHY and granting EXTERNAL_ACCESS to the CLR assembly. So there is some hassle and a potential security implication, but the payoff is a streaming approach that can better scale to much larger data sets than those in AdventureWorks.
Bonus: T-SQL #2 - the practical approach I'd actually use
After trying to think about the problem creatively for a while, I thought I'd also post the fairly simple, practical way that I would likely choose to tackle this problem if it came up in my daily work. It does make use of SQL 2012+ window functionality, but not in the type of groundbreaking way that the question was hoping for:
This actually yields a fairly simple overall query plan, even when looking at both of the relevant query plans together:
A few reasons I like this approach:
- It only takes 900ms on the provided data set, rather than the 2700ms of the original loop-seek.
A couple potential caveats:
This is a long answer, so I decided to add a summary here.
- The main solution generates the list of dates in range for each of the ProductIDs, sums up the cost for each day (because there are several transactions with the same date), and joins the result back to the original rows.
- A later variant avoids an extra scan of the table by using external knowledge of the overall range of dates plus a Product table with all ProductIDs.
I will use the AdventureWorks2014 database and SQL Server Express 2014.
Changes to the original database:
- Changed the type of [Production].[TransactionHistory].[TransactionDate] from datetime to date. The time component was zero anyway.
- Added a calendar table [dbo].[Calendar].
- Added an index to [Production].[TransactionHistory].
The MSDN article about the OVER clause has a link to an excellent blog post about window functions by Itzik Ben-Gan. In that post he explains how OVER works, the difference between the ROWS and RANGE options, and mentions this very problem of calculating a rolling sum over a date range. He mentions that the current version of SQL Server does not implement RANGE in full and does not implement temporal interval data types. His explanation of the difference between ROWS and RANGE gave me an idea.
Dates without gaps and duplicates
If the TransactionHistory table contained dates without gaps and without duplicates, then the following query would produce correct results: indeed, a window of 45 rows would cover exactly 45 days.
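As an illustrative sketch (not necessarily the exact statement from the answer), such a query could use a ROWS frame in place of the unsupported RANGE interval:

    SELECT
        TH.ProductID,
        TH.TransactionDate,
        TH.ActualCost,
        -- with exactly one row per product per day, 45 preceding rows = 45 preceding days
        RollingSum45 = SUM(TH.ActualCost) OVER (
            PARTITION BY TH.ProductID
            ORDER BY TH.TransactionDate
            ROWS BETWEEN 45 PRECEDING AND CURRENT ROW)
    FROM Production.TransactionHistory AS TH
    ORDER BY
        TH.ProductID,
        TH.TransactionDate;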
Dates with gaps, without duplicates
Unfortunately, our data has gaps in the dates. To solve this problem we can use a Calendar table to generate a set of dates without gaps, LEFT JOIN the original data to this set, and use the same query with ROWS BETWEEN 45 PRECEDING AND CURRENT ROW. This will produce correct results only if the dates do not repeat (within the same ProductID).
Dates with gaps and duplicates
Unfortunately, our data has both gaps in the dates and dates that can repeat within the same ProductID. To solve this problem we can GROUP the original data by ProductID, TransactionDate to generate a set of dates without duplicates. Then use the Calendar table to generate a set of dates without gaps. Then we can use the query with ROWS BETWEEN 45 PRECEDING AND CURRENT ROW to calculate the rolling SUM. This will produce correct results. See the comments in the query below. I confirmed that this query produces the same results as the question's approach that uses a subquery.
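A hedged sketch of this approach follows (illustrative only; it assumes a dbo.Calendar table with a single date column dt, and the CTE names are hypothetical):

    WITH CTE_Daily AS      -- one row per product per day (collapses duplicate dates)
    (
        SELECT TH.ProductID, TH.TransactionDate, DailyCost = SUM(TH.ActualCost)
        FROM Production.TransactionHistory AS TH
        GROUP BY TH.ProductID, TH.TransactionDate
    ),
    CTE_Products AS        -- date range of each product
    (
        SELECT ProductID,
               StartDate = MIN(TransactionDate),
               EndDate   = MAX(TransactionDate)
        FROM CTE_Daily
        GROUP BY ProductID
    ),
    CTE_Dense AS           -- fill the gaps using the Calendar table
    (
        SELECT P.ProductID, C.dt, D.DailyCost
        FROM CTE_Products AS P
        INNER JOIN dbo.Calendar AS C
            ON C.dt BETWEEN P.StartDate AND P.EndDate
        LEFT JOIN CTE_Daily AS D
            ON D.ProductID = P.ProductID
           AND D.TransactionDate = C.dt
    ),
    CTE_Rolling AS         -- 45 preceding rows now really mean 45 preceding days
    (
        SELECT ProductID, dt, DailyCost,
               RollingSum45 = SUM(DailyCost) OVER (
                   PARTITION BY ProductID
                   ORDER BY dt
                   ROWS BETWEEN 45 PRECEDING AND CURRENT ROW)
        FROM CTE_Dense
    )
    SELECT ProductID, TransactionDate = dt, DailyCost, RollingSum45
    FROM CTE_Rolling
    WHERE DailyCost IS NOT NULL    -- drop the filler rows added from the Calendar
    ORDER BY ProductID, TransactionDate;

The answer's full version then joins this daily result back to Production.TransactionHistory so that every original row receives its RollingSum45, as discussed below.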
Execution plans
The first query uses the subquery, the second uses this approach. You can see that the duration and the number of reads are much lower with this approach. Most of the estimated cost of this approach is the final ORDER BY, see below.
The subquery approach has a simple plan with nested loops and O(n*n) complexity.
The plan for this approach scans TransactionHistory several times, but there are no loops. As you can see, more than 70% of the estimated cost is the Sort for the final ORDER BY.
Top result - subquery, bottom - OVER.
Avoiding extra scans
The last Index Scan, Merge Join and Sort in the plan above is caused by the final INNER JOIN with the original table, which makes the final result exactly the same as the slow subquery approach. The number of returned rows is the same as in the TransactionHistory table: there are multiple rows when several transactions occurred for the same product on the same day. If it is OK to show only the daily summary in the result, then this final JOIN can be removed and the query becomes a bit simpler and faster. The last Index Scan, Merge Join and Sort from the previous plan is replaced with a Filter, which removes the rows that were added by the Calendar.
Still, TransactionHistory is scanned twice. One extra scan is needed to get the range of dates for each product. I was interested to see how it compares with another approach, where we use external knowledge about the global range of dates in TransactionHistory, plus an extra table Product that has all ProductIDs, to avoid that extra scan. I removed the calculation of the number of transactions per day from this query to make the comparison valid. It can be added in both queries, but I'd like to keep it simple for comparison. I also had to use other dates, because I use the 2014 version of the database.
Both queries return the same result in the same order.
Comparison
Here are time and IO stats.
The two-scan variant is a bit faster and has fewer reads, because the one-scan variant has to use a Worktable a lot. Besides, the one-scan variant generates more rows than needed, as you can see in the plans. It generates dates for each ProductID that is in the Product table, even if a ProductID doesn't have any transactions. There are 504 rows in the Product table, but only 441 products have transactions in TransactionHistory. Also, it generates the same range of dates for each product, which is more than needed. If TransactionHistory had a longer overall history, with each individual product having a relatively short history, the number of extra unneeded rows would be even higher.
On the other hand, it is possible to optimize the two-scan variant a bit further by creating another, narrower index on just (ProductID, TransactionDate). This index would be used to calculate the Start/End dates for each product (CTE_Products), and it would have fewer pages than the covering index and as a result cause fewer reads.
So, we can choose: either have an extra explicit simple scan, or an implicit Worktable.
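To make the narrower-index option concrete, it and the Start/End-date calculation it would serve could look like this (a sketch; the index name is mine):

    -- Narrower index: enough to seek the first/last TransactionDate per product
    CREATE INDEX IX_TransactionHistory_Product_Date
        ON Production.TransactionHistory (ProductID, TransactionDate);

    -- The CTE_Products step: per-product Start/End dates
    SELECT
        TH.ProductID,
        StartDate = MIN(TH.TransactionDate),
        EndDate   = MAX(TH.TransactionDate)
    FROM Production.TransactionHistory AS TH
    GROUP BY TH.ProductID;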
BTW, if it is OK to have a result with only daily summaries, then it is better to create an index that doesn't include ReferenceOrderID. It would use fewer pages => less IO.
Single pass solution using CROSS APPLY
This is becoming a really long answer, but here is one more variant that again returns only the daily summary. It does only one scan of the data and it doesn't require external knowledge about the range of dates or the list of ProductIDs. It doesn't do intermediate Sorts either. Overall performance is similar to the previous variants, though it seems to be a bit worse.
The main idea is to use a table of numbers to generate rows that would fill the gaps in the dates. For each existing date, use LEAD to calculate the size of the gap in days, and then use CROSS APPLY to add the required number of rows to the result set. At first I tried it with a permanent table of numbers. The plan showed a large number of reads on this table, though the actual duration was pretty much the same as when I generated numbers on the fly using a CTE.
This plan is "longer", because the query uses two window functions (LEAD and SUM).
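A hedged sketch of that gap-filling step (illustrative only; the numbers are generated on the fly with a CTE as described, and the column names are assumed):

    WITH CTE_Daily AS
    (
        SELECT TH.ProductID, TH.TransactionDate, DailyCost = SUM(TH.ActualCost)
        FROM Production.TransactionHistory AS TH
        GROUP BY TH.ProductID, TH.TransactionDate
    ),
    CTE_Gaps AS
    (
        SELECT D.ProductID, D.TransactionDate, D.DailyCost,
               -- days until the next transaction date for this product (0 on the last date)
               DaysToNext = DATEDIFF(DAY, D.TransactionDate,
                   LEAD(D.TransactionDate, 1, D.TransactionDate) OVER (
                       PARTITION BY D.ProductID ORDER BY D.TransactionDate))
        FROM CTE_Daily AS D
    ),
    CTE_Numbers AS
    (
        -- 0..999 generated on the fly; enough to span the largest gap (adjust as needed)
        SELECT TOP (1000) n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
        FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b
    )
    SELECT
        G.ProductID,
        dt        = DATEADD(DAY, GapDay.n, G.TransactionDate),
        DailyCost = CASE WHEN GapDay.n = 0 THEN G.DailyCost END   -- NULL on the filler days
    FROM CTE_Gaps AS G
    CROSS APPLY
    (
        SELECT N.n
        FROM CTE_Numbers AS N
        WHERE N.n = 0 OR N.n < G.DaysToNext    -- one row per day up to the next real date
    ) AS GapDay
    ORDER BY G.ProductID, dt;

The rolling SUM with ROWS BETWEEN 45 PRECEDING AND CURRENT ROW can then be layered over this dense, per-product date series, as in the earlier variants.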
An alternative SQLCLR solution that executes faster and requires less memory:
Deployment Script
That requires the EXTERNAL_ACCESS permission set because it uses a loopback connection to the target server and database instead of the (slow) context connection. This is how to call the function:
Produces exactly the same results, in the same order, as the question.
Execution plan:
Profiler logical reads: 481
The main advantage of this implementation is that it is faster than using the context connection, and it uses less memory. It only keeps two things in memory at any one time:
This minimal caching should ensure this method scales well; certainly better than trying to hold the entire input set in CLR memory.
Source code
If you are on a 64-bit Enterprise, Developer, or Evaluation edition of SQL Server 2014 you can use In-Memory OLTP. The solution will not be a single scan and will hardly use any window functions at all, but it might add some value to this question, and the algorithm used could possibly serve as inspiration for other solutions.
First you need to enable In-Memory OLTP on the AdventureWorks database.
The parameter to the procedure is an In-Memory table variable and that has to be defined as a type.
ID is not unique in this table; it is unique for each combination of ProductID and TransactionDate.
There are some comments in the procedure that tell you what it does, but overall it is calculating the running total in a loop, and for each iteration it does a lookup for the running total as it was 45 days ago (or more).
The current running total minus the running total as it was 45 days ago is the rolling 45 days sum we are looking for.
Invoke the procedure like this.
Testing this on my computer, Client Statistics reports a total execution time of about 750 milliseconds. For comparison, the sub-query version takes 3.5 seconds.
Extra ramblings:
This algorithm could also be used in regular T-SQL. Calculate the running total, using RANGE rather than ROWS, and store the result in a temp table. Then you can query that table with a self join to the running total as it was 45 days ago and calculate the rolling sum. However, the implementation of RANGE compared to ROWS is quite slow, due to the fact that it needs to treat duplicates in the ORDER BY clause differently, so I did not get all that good performance with this approach. A workaround could be to use another window function like last_value() over a calculated running total using ROWS to simulate a RANGE running total. Another way is to use max() over(). Both had some issues: finding the appropriate index to use to avoid sorts, and avoiding spools with the max() over() version. I gave up optimising those things, but if you are interested in the code I have so far, please let me know.
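For what it is worth, here is a hedged sketch of that temp-table idea (illustrative only, not the code the author set aside; a correlated lookup stands in for the self join described):

    -- Running total per product (RANGE treats rows with the same date as one group)
    SELECT
        TH.ProductID,
        TH.TransactionDate,
        TH.ActualCost,
        RunningTotal = SUM(TH.ActualCost) OVER (
            PARTITION BY TH.ProductID
            ORDER BY TH.TransactionDate
            RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
    INTO #RunningTotals
    FROM Production.TransactionHistory AS TH;

    -- Rolling 45-day sum = running total now minus running total as it was 46+ days earlier
    SELECT
        RT.ProductID,
        RT.TransactionDate,
        RT.ActualCost,
        RollingSum45 = RT.RunningTotal - ISNULL(Prior.RunningTotal, 0)
    FROM #RunningTotals AS RT
    OUTER APPLY
    (
        SELECT TOP (1) RT2.RunningTotal
        FROM #RunningTotals AS RT2
        WHERE RT2.ProductID = RT.ProductID
          AND RT2.TransactionDate < DATEADD(DAY, -45, RT.TransactionDate)
        ORDER BY RT2.TransactionDate DESC
    ) AS Prior;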
Well that was fun :) My solution is a bit slower than @GeoffPatterson's, but part of that is the fact that I'm tying back to the original table in order to eliminate one of Geoff's assumptions (i.e. one row per product/date pair). I went with the assumption that this was a simplified version of a final query and may require additional information out of the original table.
Note: I'm borrowing Geoff's calendar table and in fact ended up with a very similar solution:
Here is the query itself:
Basically I decided that the easiest way to deal with it was to use the ROWS option. But that required that I have only one row per ProductID, TransactionDate combination, and not just that, but I had to have one row per ProductID and possible date. I did that by combining the Product, Calendar and TransactionHistory tables in a CTE. Then I had to create another CTE to generate the rolling information. I had to do this because if I joined it back to the original table directly I got row elimination that threw off my results. After that it was a simple matter of joining my second CTE back to the original table. I did add the TBE column (to be eliminated) to get rid of the blank rows created in the CTEs. Also I used a CROSS APPLY in the initial CTE to generate boundaries for my calendar table.
I then added the recommended index:
And got the final execution plan:
EDIT: In the end I added an index on the calendar table that sped up performance by a reasonable margin.
I have a few alternate solutions that don't use indexes or reference tables. Perhaps they could be useful in situations in which you don't have access to any additional tables and cannot create indexes. It does appear to be possible to get correct results when grouping by TransactionDate with just a single pass of the data and just a single window function. However, I could not figure out a way to do it with just one window function when you cannot group by TransactionDate.
To provide a frame of reference, on my machine the original solution posted in the question has a CPU time of 2808 ms without the covering index and 1950 ms with the covering index. I am testing with the AdventureWorks2014 database and SQL Server Express 2014.
Let's start with a solution for when we can group by TransactionDate. A running sum over the last X days can also be expressed in the following way: the running total of all costs dated on or before the current date, minus the running total of all costs dated more than X days before the current date.
In SQL, one way to express this is by making two copies of your data and, for the second copy, multiplying the cost by -1 and adding X+1 days to the date column. Computing a running sum over all of the data will implement the above formula. I'll show this for some example data. Below is some sample data for a single ProductID. I represent dates as numbers to make the calculations easier. Starting data:
Take the running sum ordered by
Date
ascending andCopiedRow
descending:Filter out the copied rows to get the desired result:
The following SQL is one way to implement the above algorithm:
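As a hedged sketch of that SQL (my rendering of the steps above, not necessarily the exact original query; here X = 45, so the negated copy is shifted forward by 46 days):

    WITH CTE_Doubled AS
    (
        -- original data, grouped to one row per product per day (FilterFlag = 0)
        SELECT TH.ProductID, TH.TransactionDate,
               DailyCost = SUM(TH.ActualCost),
               FilterFlag = 0
        FROM Production.TransactionHistory AS TH
        GROUP BY TH.ProductID, TH.TransactionDate

        UNION ALL

        -- negated copy shifted 46 days forward: cancels costs falling out of the window
        SELECT TH.ProductID,
               TransactionDate = DATEADD(DAY, 46, TH.TransactionDate),
               DailyCost = -1 * SUM(TH.ActualCost),
               FilterFlag = 1
        FROM Production.TransactionHistory AS TH
        GROUP BY TH.ProductID, TH.TransactionDate
    ),
    CTE_Running AS
    (
        SELECT ProductID, TransactionDate, DailyCost, FilterFlag,
               -- running sum ordered by date ascending, copied rows first on each date
               RollingSum45 = SUM(DailyCost) OVER (
                   PARTITION BY ProductID
                   ORDER BY TransactionDate, FilterFlag DESC
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
        FROM CTE_Doubled
    )
    SELECT ProductID, TransactionDate, DailyCost, RollingSum45
    FROM CTE_Running
    WHERE FilterFlag = 0               -- filter out the copied rows
    ORDER BY ProductID, TransactionDate;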
On my machine this took 702 ms of CPU time with the covering index and 734 ms of CPU time without the index. The query plan can be found here: https://www.brentozar.com/pastetheplan/?id=SJdCsGVSl
One downside of this solution is that there appears to be an unavoidable sort when ordering by the new TransactionDate column. I don't think that this sort can be resolved by adding indexes because we need to combine two copies of the data before doing the ordering. I was able to get rid of a sort at the end of the query by adding in a different column to ORDER BY. If I ordered by FilterFlag I found that SQL Server would optimize out that column from the sort and would perform an explicit sort.
I found that SQL Server would optimize out that column from the sort and would perform an explicit sort.Solutions for when we need to return a result set with duplicate
TransactionDate
values for the sameProductId
were much more complicated. I would summarize the problem as simultaneously needing to partition by and order by the same column. The syntax that Paul provided resolves that issue so it's not surprisingly that it's so difficult to express with the current window functions available in SQL Server (if it wasn't difficult to express there would be no need to expand the syntax).If I use the above query without grouping then I get different values for the rolling sum when there are multiple rows with the same
ProductId
andTransactionDate
. One way to resolve this is to do the same running sum calculation as above but also to flag the last row in the partition. This can be done withLEAD
(assumingProductID
is never NULL) without an additional sort. For the final running sum value, I useMAX
as a window function to apply the value in the last row of the partition to all rows in the partition.On my machine this took 2464ms of CPU time without the covering index. As before there appears to be an unavoidable sort. The query plan can be found here: https://www.brentozar.com/pastetheplan/?id=HyWxhGVBl
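A hedged sketch of that flag-and-broadcast idea (illustrative only; it reuses the doubled-copy running sum from above, but without grouping, and the CTE and column names are assumed):

    WITH CTE_Doubled AS
    (
        -- original rows
        SELECT TH.ProductID, TH.TransactionDate, TH.ReferenceOrderID, TH.ActualCost,
               FilterFlag = 0
        FROM Production.TransactionHistory AS TH

        UNION ALL

        -- negated copies shifted 46 days forward
        SELECT TH.ProductID,
               TransactionDate = DATEADD(DAY, 46, TH.TransactionDate),
               TH.ReferenceOrderID,
               ActualCost = -1 * TH.ActualCost,
               FilterFlag = 1
        FROM Production.TransactionHistory AS TH
    ),
    CTE_Running AS
    (
        SELECT ProductID, TransactionDate, ReferenceOrderID, ActualCost, FilterFlag,
               RunningSum = SUM(ActualCost) OVER (
                   PARTITION BY ProductID
                   ORDER BY TransactionDate, FilterFlag DESC, ReferenceOrderID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
        FROM CTE_Doubled
    ),
    CTE_Flagged AS
    (
        -- keep only the original rows; flag the last row of each ProductID/date group
        SELECT ProductID, TransactionDate, ReferenceOrderID, ActualCost, RunningSum,
               IsLastOfDate = CASE
                   WHEN LEAD(TransactionDate) OVER (
                            PARTITION BY ProductID
                            ORDER BY TransactionDate, ReferenceOrderID) = TransactionDate
                   THEN 0 ELSE 1 END
        FROM CTE_Running
        WHERE FilterFlag = 0
    )
    SELECT ProductID, TransactionDate, ReferenceOrderID, ActualCost,
           -- broadcast the last row's running sum to every row of the same day
           RollingSum45 = MAX(CASE WHEN IsLastOfDate = 1 THEN RunningSum END)
                              OVER (PARTITION BY ProductID, TransactionDate)
    FROM CTE_Flagged
    ORDER BY ProductID, TransactionDate, ReferenceOrderID;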
I think there is still room for improvement in the queries above, and there are certainly other ways to get the desired results using window functions.