Wednesday, April 30, 2014

How to Optimize the Performance of SSIS 2012


In this post I write about some of the most important ways to optimize the performance of SSIS. Most of these ideas come from an online training course.
Buffer Sizing
What are DefaultBufferMaxRows and DefaultBufferSize?
DefaultBufferMaxRows (default 10,000 rows) and DefaultBufferSize (default 10 MB, i.e. 10,485,760 bytes) control how much data each data flow buffer holds. The buffer size is constrained by two internal, non-configurable properties of SSIS, MaxBufferSize (100 MB) and MinBufferSize (64 KB). This means a buffer can be as small as 64 KB and as large as 100 MB.

Although the SSIS engine does a good job of tuning these properties to create an optimum number of buffers, if the estimated buffer size exceeds DefaultBufferSize it reduces the number of rows in the buffer. For better performance, follow these guidelines.
  • Remove unwanted columns and set an appropriate data type for each column.
In the source extract, use a query-level conversion to reduce the data length where possible.
You can also find unused columns from the warning messages SSIS raises at run time.
Choose data types by profiling the data; for example, if a column's maximum character length is 50, define it as VARCHAR(50).
  • Based on the system memory, you can tune these properties to have a small number of large buffers, which can improve performance.
To see how SSIS adjusts the buffers, enable package logging with the BufferSizeTuning event.
Enable the Log Events window before running the package; this lets you read the log event descriptions without going to the log file or another source.
To read the log, enable the Log Events window as shown below.

Once you run the package and it stops, you will see the log details in the Log Events window:


This log detail gives the information needed to tune the buffer size or the maximum number of rows.
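As an illustration of the data-profiling step above, a quick way to find a suitable length for a character column is to query its current maximum length (dbo.Product and ProductName are placeholder names for this sketch):

```
-- Find the longest value currently stored, to size the column appropriately.
-- If this returns 50, define the column as VARCHAR(50) rather than a wider type.
SELECT MAX(LEN(ProductName)) AS MaxLength
FROM dbo.Product;
```

A narrower data type means more rows fit in each buffer, which is exactly what the buffer tuning above is trying to achieve.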
Parallelism
Increase MaxConcurrentExecutables on a dedicated server: by default it is set to -1, in which case the number of concurrent executables = number of processors + 2.
This can be changed based on the server hardware configuration.
Use parallel pipelines in the data flow with separate data sets.
Transformations:
Choose transformations carefully; use non-blocking data flow transformations as much as possible.
Reduce rows with a Conditional Split where possible.
Optimize the Lookup cache.
Source Data:
Take only the data you actually require. Rather than selecting all the columns in the source table, use a SQL command; for a relational source, set Data Access Mode to SQL Command.
Enable the FastParse option for flat file sources.
OLE DB Destination:
Use the Fast Load option; this performs a bulk insert instead of inserting row by row.
  • Table Lock also improves performance.
When loading into a table with a clustered index, Integration Services has to sort the rows before the insert, which can take some time. Setting Rows per Batch limits the amount of sorting per batch and improves insert performance. Alternatively, you can drop the index before the insert starts, load the rows, and rebuild the index afterwards.
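For reference, these fast-load settings surface on the OLE DB Destination as the FastLoadOptions property; an illustrative value (the batch size here is made up) might look like:

```
TABLOCK, CHECK_CONSTRAINTS, ROWS_PER_BATCH = 10000
```

TABLOCK corresponds to the Table Lock checkbox, and ROWS_PER_BATCH to the Rows per Batch setting discussed above.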
Beyond this, memory, CPU, and network configuration can also improve performance. In particular, make sure the 'Buffers spooled' counter always stays at 0; if not, it is an indication that memory is not sufficient to hold the data.


Thursday, April 17, 2014

How to find the currently in use process and KILL the process in SQL Server 2012

What do you do when you get a message like “because it is currently in use”?
For example, when I try to drop a database, I get the following error message.


In this situation, first check which processes are currently running against the particular database, then KILL them.
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('RegularDB')
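As a side note, sys.sysprocesses is a legacy compatibility view; on SQL Server 2012 the same check can be sketched with the sys.dm_exec_sessions DMV instead (same database name as above):

```
-- List sessions connected to the database
-- (the database_id column on this DMV is available from SQL Server 2012).
SELECT session_id, login_name, status
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('RegularDB');
```

The session_id values returned are what you pass to the KILL command.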


KILL is commonly used to terminate a process that is blocking other important processes with locks, or a process that is executing a query that is using necessary system resources. System processes and processes running an extended stored procedure cannot be terminated. Use KILL very carefully, especially when critical processes are running.
KILL { session ID | UOW } [ WITH STATUSONLY ]

KILL WITH STATUSONLY does not terminate or roll back the session ID or UOW; the command only displays the current progress of the rollback.
To kill the process, run: KILL 57
To check the status of the killed process: KILL 57 WITH STATUSONLY;

An alternative approach is to put the database into single-user mode; this closes all the existing connections.
USE master;
GO
ALTER DATABASE RegularDB
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
DROP DATABASE RegularDB;

Cheers!

Cache Connection Manager SSIS 2012 with Example


This post is about the Cache Connection Manager and the Cache Transform. The cache connection manager reads data from a Cache Transform or from a cache file (.caw), and can save the data to a cache file. The Cache Connection Manager (CCM) became an option for the Lookup transform in SSIS 2008. The CCM provides an alternative to doing lookups against a database table over an OLE DB connection. Whether or not you configure the cache connection manager to use a cache file, the data is always stored in memory. The cache connection manager does not support the binary large object (BLOB) data types DT_TEXT, DT_NTEXT, and DT_IMAGE.

The following images show how to create a new cache connection.

A cache connection can be file-based or in-memory.
Option 1: Create the cache directly in memory.

Option 2: Create the cache using a file cache.

Here I am going to demonstrate without using a file cache. Since I am going to use ProductNumber as the mapping column, set its Index Position to 1 and leave the others at the default.

If the Index Position column is set to something greater than 0, it simply denotes an index position; think of it as a key, or a primary key if you wish. The mapped column must be an index column, otherwise you will get the error message below:

Error message: Cannot map the Lookup column, 'ProductNumber', to an input column because the Lookup column is not an index column. Only index columns can be mapped when the Lookup transformation is configured to use a Cache connection manager.

You can use any source format, such as CSV, Excel, etc.



The following images show how to do the cache transformation using the Cache Transform component. Here I am using an OLE DB connection to get the data from a table.




Now let's see how to use this cache transformation in a Lookup.


If you want to use the same cache in any package in this project, you need to make the cache connection a project connection manager.
Cheers!

Tuesday, April 15, 2014

Read data from XML file and Load to Tree control C# Code

This post shows how to read data from an XML file and load it into a Tree control in ASP.NET using C#.
I created an XML file as below:

Create an empty ASP.NET web application and, on the form, drag and drop the TreeView control, which is under the Navigation tab of the toolbox.

The sample code is shown below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Xml;

namespace XMLDemo
{
    public partial class WebForm1 : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {

        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            // SECTION 1. Create an XmlDocument and load the XML data into it.
            XmlDocument doc = new XmlDocument();
            doc.Load("E://Testing.NET//StudentList.xml");

            // SECTION 2. Initialize the TreeView control with the root element.
            TreeView1.Nodes.Clear();
            TreeView1.Nodes.Add(new TreeNode(doc.DocumentElement.Name));
            TreeNode tNode = TreeView1.Nodes[0];

            // SECTION 3. Populate the TreeView with the DOM nodes.
            AddNode(doc.DocumentElement, tNode);
            TreeView1.ExpandAll();

        }

        private void AddNode(XmlNode inXmlNode, TreeNode inTreeNode)
        {
            XmlNode xNode;
            TreeNode tNode;
            XmlNodeList nodeList;

            // Loop through the XML nodes until the leaf is reached.
            // Add the nodes to the TreeView during the looping process.
            if (inXmlNode.HasChildNodes)
            {
                nodeList = inXmlNode.ChildNodes;
                for (int i = 0; i < nodeList.Count; i++)
                {
                    xNode = nodeList[i];
                    inTreeNode.ChildNodes.Add(new TreeNode(xNode.Name));
                    tNode = inTreeNode.ChildNodes[i];
                    AddNode(xNode, tNode);
                }
            }
            else
            {
                // Here you need to pull the data from the XmlNode based on the
                // type of node, whether attribute values are required, and so forth.
                inTreeNode.Text = (inXmlNode.OuterXml).Trim();
            }
        }
    }
}

The output will look like the below.

Writing XMLA Query to retrieve or manipulate data from Microsoft SQL Server Analysis Services


Microsoft SQL Server Analysis Services uses the XML for Analysis (XMLA) protocol to handle all communication between client applications and an Analysis Services instance. At their most basic level, other client libraries such as ADOMD.NET and AMO construct requests and decode responses in XMLA, serving as an intermediary to an Analysis Services instance, which uses XMLA exclusively.

To support the discovery and manipulation of data in both multidimensional and tabular formats, the XMLA specification defines two generally accessible methods, Discover and Execute, and a collection of XML elements and data types.

Because XML allows for a loosely coupled client and server architecture, both methods handle incoming and outgoing information in XML format. Analysis Services is compliant with the XMLA 1.1. specification, but also extends it to include data definition and manipulation capability, implemented as annotations on the Discover and Execute methods.

As a programmer, you can use XMLA as a programmatic interface if solution requirements specify standard protocols, such as XML, SOAP, and HTTP. Programmers and administrators can also use XMLA on an ad hoc basis to retrieve information from the server or run commands.

Discover Method (XMLA)

Retrieves information, such as the list of available databases or details about a specific object, from an instance of Microsoft SQL Server Analysis Services. The data retrieved with the Discover method depends on the values of the parameters passed to it.
Namespace: urn:schemas-microsoft-com:xml-analysis
SOAP Action: "urn:schemas-microsoft-com:xml-analysis:Discover"

Now let's see how to get the list of databases from Analysis Services. It is fairly easy with a simple XMLA query like the following:


<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
    <RequestType>DBSCHEMA_CATALOGS</RequestType>
    <Restrictions />
    <Properties />
</Discover>

Once you run it and save the output as an XML file, you will see that the output looks like the below.


You would think that getting a list of cubes would involve another fairly simple XMLA query like the following:
<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
    <RequestType>MDSCHEMA_CUBES</RequestType>
    <Restrictions />
    <Properties />
</Discover>

In this query no conditions are applied under Restrictions, so it retrieves every cube in all of the databases.
Let's see how to specify a particular database and retrieve only the cubes in that database.
<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
    <RequestType>MDSCHEMA_CUBES</RequestType>
    <Restrictions />
    <Properties>
        <PropertyList>
            <Catalog>UmaTEST</Catalog>
        </PropertyList>
    </Properties>
</Discover>
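As a sketch of an alternative (assuming the same UmaTEST catalog as above), the same filtering can also be expressed as a CATALOG_NAME restriction instead of a connection property:

```
<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
    <RequestType>MDSCHEMA_CUBES</RequestType>
    <Restrictions>
        <RestrictionList>
            <CATALOG_NAME>UmaTEST</CATALOG_NAME>
        </RestrictionList>
    </Restrictions>
    <Properties />
</Discover>
```

CATALOG_NAME is one of the restriction columns defined for the MDSCHEMA_CUBES rowset, so the server filters the results before returning them.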

Execute Method (XMLA)

Sends XML for Analysis (XMLA) commands to an instance of Microsoft SQL Server Analysis Services. This includes requests involving data transfer, such as retrieving or updating data on the server.

Namespace: urn:schemas-microsoft-com:xml-analysis
SOAP Action: "urn:schemas-microsoft-com:xml-analysis:Execute"

The following code sample is an example of an Execute method call that contains a Multidimensional Expressions (MDX) SELECT statement.
<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
    <Command>
        <Statement>
            SELECT [Measures].MEMBERS ON COLUMNS FROM [Adventure Works DW2012]
        </Statement>
    </Command>
    <Properties>
        <PropertyList>
            <Catalog>UmaTEST</Catalog>
            <Format>Multidimensional</Format>
            <AxisFormat>ClusterFormat</AxisFormat>
        </PropertyList>
    </Properties>
</Execute>

Cheers!