Disclaimer: The views expressed on this website/blog are mine alone and do not reflect the views of my company. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.

Harnessing the Power of both worlds

November 16, 2014
Categories: Uncategorized

System Patching: Steps to Follow on an SSAS Server Before Applying Windows Patches

 

Generally, when we apply system-level patches, we tend to reboot the machine/server multiple times based on the patch requirements. In this blog I cover the points we need to keep in mind while applying system-level patches:

A reboot can cause SSAS database corruption (refer to this link for more details). Before applying an OS / system-level patch or rebooting a server on which the SSAS service is running (MOLAP, Tabular, or Power Pivot instance), follow the steps below:

SQL Server Configuration Manager (Recommended)

1) Go to Start and open SQL Server Configuration Manager (Run as Administrator)

2) Select SQL Server Services and look for SQL Server Analysis Services in the right-hand pane


3) Right-click SQL Server Analysis Services and select Stop

4) Once the service has stopped successfully, apply the patch or reboot the server. After the reboot, open SQL Server Configuration Manager to check whether the service came back online (it generally does if the service is configured for Automatic start). If it does not come online, right-click SQL Server Analysis Services and start the service.

Services.msc

If for whatever reason you cannot find SQL Server Configuration Manager, or cannot open it because of errors, follow these steps instead:

1) Open Services.msc

2) Look for SQL Server Analysis Services, right-click the service, and select Stop


3) Once the service has stopped successfully, apply the patch or reboot the server. After the reboot, open Services.msc to check whether the service came back online (it generally does if the service is configured for Automatic start). If it does not come online, right-click SQL Server Analysis Services and start the service.
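If you prefer to script this check instead of clicking through the UI, here is a minimal sketch in Python (run as Administrator). It assumes the default SSAS instance, whose service name is MSSQLServerOLAPService; a named instance is typically MSOLAP$<InstanceName>, so adjust the name for your environment.

import subprocess

# Service name for the default SSAS instance (assumption: a named instance
# is typically "MSOLAP$<InstanceName>" -- adjust for your environment).
SERVICE = "MSSQLServerOLAPService"

def run(*args):
    """Run a command and return its captured output."""
    return subprocess.run(args, capture_output=True, text=True)

# 1) Stop the SSAS service before applying the patch / rebooting.
print(run("net", "stop", SERVICE).stdout)

# ... apply the patch and reboot the server ...

# 2) After the reboot, check whether the service came back online,
#    and start it if it did not.
state = run("sc", "query", SERVICE).stdout
if "RUNNING" not in state:
    print(run("net", "start", SERVICE).stdout)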

 

Note:

* You can follow these steps for all other SQL Server services as well (exception: clustered services / Always On – follow the SQL Server documentation for guidance)

* These steps can be followed for the SSAS service before applying any OS patch or performing any maintenance task

Categories: Uncategorized

Analysis Services MOLAP Performance Guide for SQL Server 2012 and 2014

 

This white paper describes how business intelligence developers can apply query and processing performance-tuning techniques to their OLAP solutions running on Microsoft SQL Server Analysis Services.

This paper is based on the performance guide for 2008 R2 and has been reviewed and updated to validate performance on SQL Server 2012 and SQL Server 2014.

To read the white paper, please download the Word document Analysis Services MOLAP Performance Guide for SQL Server 2012 and 2014.

Categories: Uncategorized

Tabular: Can't See Databases in Management Studio

March 26, 2014

 

Background

You have a role that has only Read permission on a Tabular database, and its members can only query the database. This means they should be able to query the Tabular database from any client tool they prefer, such as SSMS, SSRS, PPS, Excel, etc.

The role and its permissions are shown in Figure 1 and Figure 2.

Figure 1: Role Permission


Figure 2: Test User member of Role which got Read Permission


Issue

When users in the read-only role try to connect with SQL Server Management Studio, they cannot see the databases on which they have Read permission, whereas if they do the same thing against a MOLAP instance they can see the databases on the left side under the Databases tree of Management Studio. As shown in Figure 3, notice that no databases are displayed for the Test_Karan user.

 

Figure 3: Databases are not Displayed


Assessment: From the TechNet article Roles (SSAS Tabular), we understand this is expected behavior.


So what option do you have when you want to query a Tabular database with DAX or MDX and the databases are not displayed in Management Studio? The answer is simple: just click MDX Query in Management Studio, as shown in Figure 4.

Figure 4: Click MDX Query and you will find that you are able to query the database on which you have Read permission, as shown in Figure 5.


Figure 5: Although you cannot see all the databases, you can change the database that you need to query.

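The same idea works outside SSMS as well: even though the database list is hidden, a user in the read-only role can connect directly to a database they have permission on and run a DAX or MDX query. Below is a minimal sketch using Python with the adodbapi package and the MSOLAP OLE DB provider; the package choice, the server and database names, and the trivial DAX query are assumptions for illustration only.

import adodbapi  # available on PyPI / bundled with pywin32; needs the MSOLAP OLE DB provider

# Hypothetical server and database names -- replace with your own.
conn_str = (
    "Provider=MSOLAP;"
    "Data Source=localhost\\TABULAR;"
    "Initial Catalog=AdventureWorks Tabular Model;"
)

conn = adodbapi.connect(conn_str)
try:
    cur = conn.cursor()
    # A trivial DAX query that needs no table names: it returns a single row,
    # showing that the read-only user can query the database even though SSMS
    # does not list it in Object Explorer.
    cur.execute('EVALUATE ROW("Check", 1)')
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()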

 

Conclusion

In MOLAP, when you had Read permission you were able to see those databases in Management Studio, but on a Tabular instance you cannot see them any more; this is by design. In this post I have tried to show something simple that we generally tend to miss.

Categories: Uncategorized

Tabular: .NET Providers Supported by Tabular Mode

September 10, 2013

 

Yes, the Tabular server supports .NET providers, but only for the data sources mentioned in this article: Data Sources Supported (SSAS Tabular).

Take Oracle as an example: if you select Oracle in the Table Import Wizard and click Next, you will see the Connect to an Oracle Database window. On that window, select Advanced and it will give you the option of selecting a provider; in the following screenshot you can see that Oracle Data Provider for .NET is present.


What about other data sources that do not appear in the Table Import Wizard's list of relational databases? For such sources you can use Others (OLEDB/ODBC).


This option should work if you are able to connect to your data source using ODBC. I have tested it with the SQL Server engine and it worked like a charm, and I have tested it with Hadoop (HDInsight) in the same manner as well.

If ODBC providers are supported by the Tabular server, then why is it not working for me?

ODBC is supported by the Tabular server, but you need to configure two DSNs, one 32-bit and one 64-bit, with the same name; otherwise you will get an error message while importing data.

Why do we need two ODBC DSNs?

The 32-bit DSN is used by SSDT, and the 64-bit DSN is used by the Tabular engine while importing data into the Tabular server; this is further explained in this post: Post Link
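To see why the bitness matters, here is a minimal sketch using Python's pyodbc package (the DSN name is a placeholder): a 64-bit process, like the Tabular engine, resolves only the 64-bit DSN, while a 32-bit process, like SSDT, resolves only the 32-bit one, which is why the DSN must exist under the same name in both.

import platform
import pyodbc

# Hypothetical DSN name -- create it with the SAME name in both the 32-bit
# and the 64-bit ODBC Data Source Administrator.
DSN_NAME = "TabularSource"

print("Python process bitness:", platform.architecture()[0])

try:
    # If the DSN (or its driver) exists only for the other bitness, the
    # connection fails, e.g. with the IM014 architecture-mismatch error.
    conn = pyodbc.connect(f"DSN={DSN_NAME}", timeout=10)
    print("Connected to:", conn.getinfo(pyodbc.SQL_DBMS_NAME))
    conn.close()
except pyodbc.Error as exc:
    print("Connection failed:", exc)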

Categories: Uncategorized

Introduction to SQL 2012 Tabular Modeling

August 6, 2013

3rd August, 2013

Presented a session about the SQL 2012 Tabular server for DBAs at an event organized by http://sqlgeeks.com/.

There were around 250+ attendees, mostly DBAs who came with the intent of learning SQL BI. As they are from a non-BI background, I kept it basic.

PPT for your reference…

A few pictures from the event…


Categories: Uncategorized

SQL 2012 TABULAR: Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services

August 2, 2013

The much-awaited Tabular performance tuning paper is out…

Applies to: SQL Server 2012

Summary: Tabular models hosted in SQL Server 2012 Analysis Services provide a comparatively lightweight, easy-to-build-and-deploy solution for business intelligence. However, as you increase the data load, add more users, or run complex queries, you need to have a strategy for maintaining and tuning performance. This paper describes strategies and specific techniques for getting the best performance from your tabular models, including processing and partitioning strategies, DAX query tuning, and server tuning for specific workloads.

To review the document, please download the Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services Word document.

Categories: Uncategorized

Tabular: Error while using ODBC data source for Importing Data

Issue

When using ODBC as a data source in the Table Import Wizard to add data to a Tabular server, we get the following error; however, the preview shows the data correctly:

Error Message:

OLE DB or ODBC error: [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application; IM014.

A connection could not be made to the data source with the DataSourceID of ‘7300a711-299c-4367-98eb-b2e01b7e4ba5’, Name of ‘Custom 4’.

An error occurred while processing the partition ‘DimAccount 1_e8a5233f-406a-450d-8b2e-8bf86a321d2f’ in table ‘DimAccount 1_e8a5233f-406a-450d-8b2e-8bf86a321d2f’.

The current operation was cancelled because another operation in the transaction failed.

Environment

SSAS Tabular Server 2012 x64

Assessment

The problem occurred because SQL Server Data Tools is 32-bit and the DSN that we used initially to import data (in Data Link Properties) is 32-bit ODBC, but the server is 64-bit.

Therefore you are able to use Test Connection successfully and are even able to select tables. However, when you try to import data, an error occurs.

Because the server build is x64, the Tabular engine will look for a 64-bit DSN when attempting to import data, and in the absence of a 64-bit DSN the import will fail with the following error:


Solution:

The Preview feature in SSDT uses a 32-bit ODBC DSN, so you can view the data but cannot import data.

To fix this issue, you need to configure two ODBC DSNs, one 32-bit and one 64-bit, and give them the same name.

How to Create Matching 32-bit and 64-bit ODBC DSNs:

A 64-bit version of the Microsoft Windows operating system includes the following versions of the Microsoft Open Database Connectivity (ODBC) Data Source Administrator tool (Odbcad32.exe):

* The 32-bit version of the Odbcad32.exe file is located in the %systemdrive%\Windows\SysWoW64 folder.

* The 64-bit version of the Odbcad32.exe file is located in the %systemdrive%\Windows\System32 folder.

Refer to this link for the full reference: http://support.microsoft.com/kb/942976
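As a small convenience, here is a hedged Python sketch that opens both versions of the ODBC Data Source Administrator from the paths listed above so you can create the DSN under the same name in each; run it from a 64-bit Python so the explicit System32 path is not redirected by WOW64.

import os
import subprocess

system_root = os.environ.get("SystemRoot", r"C:\Windows")

# 32-bit ODBC Data Source Administrator (SysWoW64) -- used by 32-bit SSDT.
odbcad_32 = os.path.join(system_root, "SysWoW64", "odbcad32.exe")
# 64-bit ODBC Data Source Administrator (System32) -- used by the x64 Tabular engine.
odbcad_64 = os.path.join(system_root, "System32", "odbcad32.exe")

# Create a DSN with the SAME name (e.g. "TabularSource") in both tools;
# each Administrator window is opened in turn and waited on until closed.
for tool in (odbcad_32, odbcad_64):
    subprocess.run([tool], check=False)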

Categories: Uncategorized

Power View for Multidimensional Model is Finally Here

I am pretty excited about this feature: with SQL 2012 SP1 Cumulative Update 4, you can create Power View reports against existing cubes.

Refer to this blog for further information: http://blogs.msdn.com/b/analysisservices/archive/2013/05/31/power-view-connectivity-for-multidimensional-models-released.aspx

Categories: Uncategorized

HDInsight Whitepapers

April 15, 2013

Informative whitepapers covering the operation of HDInsight, including:

1) Compression in Hadoop

When using Hadoop, there are many challenges in dealing with large data sets. The goal of this document is to provide compression techniques that you can use to optimize your Hadoop jobs, and reduce bottlenecks associated with moving and processing large data sets.

In this paper, we will describe the problem of data volumes in different phases of a Hadoop job, and explain how we have used compression to mitigate these problems. We review the compression tools and techniques that are available, and report on tests of each tool. We describe how to enable compression and decompression using both command-line arguments and configuration files.

To review the document, please download the Compression in Hadoop Word document.
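Purely to illustrate the kind of settings the paper covers, here is a hedged sketch of submitting a Hadoop streaming job with both intermediate (map output) and final output compression enabled through -D command-line arguments; the streaming-jar location, the input/output paths, and the identity cat mapper/reducer are assumptions, and the property names are the classic Hadoop 1.x ones used by HDInsight at the time.

import os
import subprocess

# Location of the streaming jar varies by distribution -- adjust as needed.
streaming_jar = os.path.join(os.environ["HADOOP_HOME"], "lib", "hadoop-streaming.jar")

cmd = [
    "hadoop", "jar", streaming_jar,
    # Compress intermediate map output to cut shuffle traffic.
    "-D", "mapred.compress.map.output=true",
    "-D", "mapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec",
    # Compress the final job output to reduce storage and downstream I/O.
    "-D", "mapred.output.compress=true",
    "-D", "mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec",
    "-input", "/data/raw/logs",
    "-output", "/data/logs-compressed",
    "-mapper", "cat",
    "-reducer", "cat",
]
subprocess.run(cmd, check=True)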

2) Hadoop Performance in Hyper-V

Compelling use-cases from industry leaders are quickly changing Hadoop from an emerging technology to an industry standard. However, Hadoop requires considerable resources, and in the search for computing power, users are increasingly asking if it is possible to virtualize Hadoop—that is, create clusters on a virtual machine farm—to build a private cloud infrastructure.

This paper presents the result of internal benchmarks by Microsoft IT, in which the performance of a private cloud using virtual machines was compared to the same jobs running on servers dedicated to Hadoop. The goal was to determine whether Hadoop clusters hosted in Microsoft Hyper-V can be as efficient as physical clusters.

The results indicate that the performance impact of virtualization is small, and that Hadoop on Microsoft Hyper-V offers compelling performance as well as other benefits.

To review the document, please download the Performance of Hadoop on Windows in Hyper-V Environments Word document.

3) Job Optimization in Hadoop

The Map/Reduce paradigm has greatly simplified the development of large-scale data processing tasks. However, when processing data at the terabyte or petabyte scale in Hadoop, jobs might run for hours or even days. Therefore, understanding how to analyze, fix, and fine-tune the performance of Map/Reduce jobs is an extremely important skill for Hadoop developers.

This paper describes the principal bottlenecks that occur in Hadoop jobs, and presents a selection of techniques for resolving each issue and mitigating performance problems on different workloads. The paper explains the interaction of disk I/O, CPU, RAM and other resources, and demonstrates with examples why efforts to tune performance should adopt a balanced approach.

It includes the results of extensive experiments with performance tuning, which produced significant differences in the speed of the same Map/Reduce job before and after tuning.

To review the document, please download the Hadoop Job Optimization Word document.

4) Leveraging a Hadoop cluster from SQL Server Integration Services (SSIS)

With the explosion of data, the open source Apache™ Hadoop™ framework is gaining traction thanks to the huge ecosystem that has arisen around the core functionalities of the Hadoop distributed file system (HDFS™) and Hadoop Map Reduce. As of today, being able to have SQL Server working with Hadoop™ is becoming increasingly important because the two are indeed complementary. For instance, while petabytes of data can be stored unstructured in Hadoop and take hours to be queried, terabytes of data can be stored in a structured way in the SQL Server platform and queried in seconds. This leads to the need to transfer data between Hadoop and SQL Server.

This white paper explores how SQL Server Integration Services (SSIS), i.e. the SQL Server Extract, Transform and Load (ETL) tool, can be used to automate Hadoop and non-Hadoop job executions, and to manage data transfers between Hadoop and other sources and destinations.

To review the document, please download the Leveraging a Hadoop cluster from SQL Server Integration Services (SSIS) Word document.

Categories: Uncategorized