In SSIS, how do I get the number of rows returned from the Source that SHOULD be processed

I am working on a project to add logging to our SSIS packages. I am doing my own custom logging by implementing some of the event handlers. I have implemented the OnInformation event to write the time, source name, and message to the log file. When data is moved from one table to another, the OnInformation event will give me a message such as:

component "TABLENAME" (1)" wrote 87 rows.

In the event that one of the rows fails, and let's say only 85 rows were processed out of the expected 87, I would assume that the above line would read "wrote 85 rows". How do I track how many rows SHOULD HAVE been processed in this case? I would like to see something like "wrote 85 of 87 rows". Basically, I think I need to know how to get the number of rows returned by the Source's query. Is there an easy way to do this?

Thank you

Taut answered 29/1, 2013 at 20:50 Comment(0)

You can use the Row Count transformation after the data source and save the count to a variable. This will be the number of rows to be processed. Once the data has been loaded into the destination, use an Execute SQL Task in the control flow with Select Count(*) from <<DestinationTable>> and save that count into another variable (use a Where clause in your query to identify the current load). You will then have the number of rows processed, for logging.
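
A minimal sketch of the final logging step, assuming a Script Task in the control flow and two hypothetical variables: User::SourceRowCount (populated by the Row Count transformation) and User::DestRowCount (populated by the Execute SQL Task):

    // Script Task (C#): this is the body of ScriptMain.Main() in the
    // designer-generated script project. Both variables must be listed
    // in the task's ReadOnlyVariables property.
    public void Main()
    {
        int expected = (int)Dts.Variables["User::SourceRowCount"].Value;
        int written = (int)Dts.Variables["User::DestRowCount"].Value;

        bool fireAgain = true;
        // Surfaces a "wrote 85 of 87 rows" style message to any attached log provider.
        Dts.Events.FireInformation(0, "RowCountLogger",
            string.Format("wrote {0} of {1} rows", written, expected),
            string.Empty, 0, ref fireAgain);

        Dts.TaskResult = (int)ScriptResults.Success;
    }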

Hope this helps!

Enrollee answered 30/1, 2013 at 8:17 Comment(4)
I tried using the Row Count transformation, which worked for me. The only issue I have with it is that it only populates the variable after the Data Flow task is complete, so I can't write the number to my log file until after it is finished. If the Data Flow task fails, then my Script Task that does the logging is never hit, so I can only write to the log if the task is successful, which defeats the purpose. If the Data Flow task has an error, I want a log entry that states "50 of 75 rows processed".Taut
Is it possible to get the row count before it submits to the destination rather than having to wait until the entire Data flow task completes?Taut
You should do your log insertion in the OnFailure event as well. Run your logging step after the Data Flow task with the precedence constraint set to "Completion", so that logging will happen, with the respective row count, even when there are failures.Enrollee
I got it to work by using the RowCount Transformation with the OnTaskFailed event. Initially, I was placing my Script Task to do the logging in the Control Flow following the Data Flow Task. The package never made it to that script task if there was an error in the data flow task. By placing the script task in the OnTaskFailed Event, I am able to log the expected number of rows obtained from the row count transformation. Thanks all for your help!!!Taut

Not enough space in comments to provide feedback. Posting an incomplete answer as I need to leave for the day.

You are going to have trouble accomplishing what you are asking for. Based on your comments in Gowdhaman008's answer, the value of a variable is not visible outside of a data flow until after the finalizer event fires (OnPostExecute, I think). You can cheat and get that data out by making use of a script task to count rows through and fire off events, custom or predefined, to report package progress. In fact, just capture the OnPipelineRowsSent event. That will record how many rows are passing through a particular juncture, and the time surrounding it (see: SSIS Performance Framework). Plus, you don't have to do any custom work or maintenance on your stuff. Out-of-the-box functionality is a definite win.
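
A rough sketch of that row-counting idea, assuming a Script Component configured as a transformation; the "RowCounter" name and the 10,000-row reporting interval are my own choices:

    // Script Component (C#, configured as a transformation) inside the data flow.
    // This is the designer-generated ScriptMain class; it counts rows as they
    // pass through and periodically fires an information event so progress is
    // visible while the data flow is still running.
    public class ScriptMain : UserComponent
    {
        private int rowsSeen;

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            rowsSeen++;
            if (rowsSeen % 10000 == 0) // arbitrary reporting interval
            {
                bool fireAgain = true;
                ComponentMetaData.FireInformation(0, "RowCounter",
                    string.Format("{0} rows have passed this point so far", rowsSeen),
                    string.Empty, 0, ref fireAgain);
            }
        }
    }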

That said, you aren't really going to know how many rows are coming out of a source until it's finished. That sounds stupid and I completely agree, but it's the truth. Imagine a simple case: an OLE DB Source that is going to send 1,000,000 rows straight into an OLE DB Destination. Most likely, not all 1M rows are going to start in the pipeline; maybe only 10k will be in the first buffer. Those buffers are pushed to the destination, and now you know 10k rows out of 10k rows have been processed. Lather, rinse, repeat a few times, and in this buffer a row has a NULL where it shouldn't. Boom goes the dynamite and the process fails. We have had 60k rows flow into the pipeline, and that's all we know about because of the failure.

The only way to ensure we have accounted for all the source rows is to put an asynchronous transformation into the pipeline to block all downstream components until all the data has arrived. This will obliterate any chance you have of getting good performance out of your packages. You'd still be subject to the aforementioned restrictions on updating variables, but your FireXEvent message would accurately describe how many rows could have been processed in the queue.

If you started an explicit transaction, you could do something ugly like an Execute SQL Task just to get the expected count, write that to a variable, and then log rows processed, but then you're double-querying your data and you increase the likelihood of blocking on the source system because of the double pump. And that's only going to work for something database-like. The same concept would apply to a flat file, except now you'd need a script task to read all the rows first.

Where this gets uglier is for a slow-starting data source, like a web service. The default buffer size might cause the entire package to run much longer than it'd need to, simply because we are waiting on the data to arrive (see: Slow starts).

What I'd do

I'd record my starting and error counts (and more) using the Row Count transformation. This will help you account for all the data that came in and where it went. I'd then turn on the OnPipelineRowsSent event to allow me to query the log and see how many rows are flowing through it RIGHT NOW.
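
As an example of what that query step could look like, a sketch only: it assumes the SQL Server log provider writing to its default dbo.sysssislog table (SQL Server 2008 and later) and a made-up connection string:

    // Console sketch: peek at OnPipelineRowsSent entries while a package runs.
    using System;
    using System.Data.SqlClient;

    class PipelineRowsSentPeek
    {
        static void Main()
        {
            // Hypothetical connection string; point it at the logging database.
            const string connStr = "Server=.;Database=SSISLogs;Integrated Security=true;";
            const string sql =
                @"SELECT TOP 20 starttime, source, message
                  FROM dbo.sysssislog
                  WHERE event = 'OnPipelineRowsSent'
                  ORDER BY starttime DESC;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    // Each message ends with the row count sent down that
                    // path for the buffer just processed.
                    while (reader.Read())
                        Console.WriteLine("{0:u} | {1} | {2}",
                            reader.GetDateTime(0), reader.GetString(1), reader.GetString(2));
                }
            }
        }
    }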


Edington answered 30/1, 2013 at 23:2 Comment(2)
Would you be able to briefly explain how to turn on OnPipelineRowsSent?Concierge
@NateS. In the Control flow, right-click and select Logging. Add a logging provider (SQL Server works nicely) and, assuming you have a Data Flow task on the canvas, you will have a host of OnPipeline* events available to you. Queries for breaking down times are here: weblogs.sqlteam.com/jamesn/archive/2008/02/11/60504.aspx. Feel free to start a new question if you have something specific in mind.Edington

What you want is the Row Count transformation. Just add that to your data flow after your source query, and assign its output to a variable. Then you can write that variable to your log file.
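
A minimal sketch of that last step, assuming a Script Task placed after the data flow; the variable name User::SourceRowCount and the log path are hypothetical:

    // Script Task (C#): body of ScriptMain.Main(). Add User::SourceRowCount
    // to the task's ReadOnlyVariables so it is available here.
    public void Main()
    {
        int rowCount = (int)Dts.Variables["User::SourceRowCount"].Value;

        // Append a timestamped entry to a flat log file.
        System.IO.File.AppendAllText(@"C:\Logs\package.log",
            string.Format("{0:u} | rows read from source: {1}{2}",
                DateTime.Now, rowCount, Environment.NewLine));

        Dts.TaskResult = (int)ScriptResults.Success;
    }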

Sydelle answered 29/1, 2013 at 22:21 Comment(4)
I tried using the Row Count transformation, which worked for me. The only issue I have with it is that it only populates the variable after the Data Flow task is complete, so I can't write the number to my log file until after it is finished. If the Data Flow task fails, then my Script Task that does the logging is never hit, so I can only write to the log if the task is successful, which defeats the purpose. If the Data Flow task has an error, I want a log entry that states "50 of 75 rows processed".Taut
Is it possible to get the row count before it submits to the destination rather than having to wait until the entire Data flow task completes?Taut
I am having issues with writing the variable to the log file, even after it runs successfully. I'd like to use this to compare the number of rows received to the number of rows added to my staging table. How do you get the variable out of the data flow?Pinder
@Pinder You can add an OnPostExecute event to the Row Count transformation that specifically writes the value to the log, but you'd probably be better off writing it directly to a database table so you can quickly compare the row count vs. the variable value. See this blog post for steps: sqlerudition.com/…Sydelle

Here is what I currently do. It's super tedious, but it works.

[Screenshot: SSIS Method]

1) I have a constant "1" value on all of the records. They are literally all the same.

2) Using a multicast step, I send the data flow off in 2 directions. Despite all being the same, we still have to sort by that constant value.

3) Use an aggregate step to aggregate on that constant, then re-sort it in order to join with the bottom data flow (which holds all of the actual data records with no aggregation).

Doing this allows me to have my initial row count.

4) Later on, shown below, use a conditional split step and do the same thing again after applying your condition. If the row count is the same, everything is fine and there are no problems.

If the row count is not the same, something is wrong.

[Screenshot: Checking for Success]

This is the general idea of the approach for solving your problem without having to use another data flow step.

TLDR:

Get a row count for one of the conditions by using a multicast, a sort on some constant value, and an aggregate step.

Do a sort and merge to grab the row count.

Use a conditional split and do it again.

If the pre and post row counts are the same, do this.

If the pre and post row counts are not the same, do that.

Brunel answered 6/2, 2019 at 15:39 Comment(0)

This MAY help if you have a column which has no bad data. Add a second Flat File Source to the package. Use the same connection as your existing file source. Choose the first column only and direct the output to a Row Count.

Zoba answered 18/5, 2016 at 11:14 Comment(0)
