Sunday, July 31, 2011

What is Public Cloud

A public cloud is one based on the standard cloud computing model, in which a service provider makes resources, such as applications and storage, available to the general public over the Internet. Public cloud services may be free or offered on a pay-per-usage model.
  The main benefits of using a public cloud service are:
  • Easy and inexpensive set-up because hardware, application and bandwidth costs are covered by the provider.
  • Scalability to meet needs.
  • No wasted resources because you pay for what you use.
The term "public cloud" arose to differentiate between the standard model and the private cloud, which is a proprietary network or data center that uses cloud computing technologies, such as virtualization. A private cloud is managed by the organization it serves. A third model, the hybrid cloud, is maintained by both internal and external providers.
Examples of public clouds include Amazon Elastic Compute Cloud (EC2), IBM's Blue Cloud, Sun Cloud, Google AppEngine and Windows Azure Services Platform.
Ref: http://searchcloudcomputing.techtarget.com/definition/public-cloud

Identifying the blockers in SQL Server 2005 and 2008

Problem

In our SQL Server environment, we have frequent locking and blocking across a few different versions of SQL Server. How can I find blocking and blocked SPIDs in SQL Server 2005 and later versions? Is there only one way to find out which SPIDs are blocking? Are there any commands that I can run against multiple SQL Server versions? Check out this tip to learn more about locking and blocking.

Solution

Whenever a user contacts the DBA team to say that a process looks hung or is not proceeding, checking for database blocking makes a great deal of sense. Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock. This forces the second connection to be blocked until the first connection completes. With this being said, locking is a natural occurrence in SQL Server in order to maintain data integrity. For more information about locking and blocking, review these tips: Understanding SQL Server Locking and Understanding SQL Server Blocking.
There are a number of ways to find the details of the system process IDs (SPIDs) involved in blocking. I have tried to cover some of the options in this tip, including:
  • sp_who2 System Stored Procedure
  • sys.dm_exec_requests DMV
  • sys.dm_os_waiting_tasks DMV
  • SQL Server Management Studio Activity Monitor
  • SQL Server Management Studio Reports
  • SQL Server Profiler

sp_who2 System Stored Procedure

The sp_who2 system stored procedure provides information about the current SQL Server processes along with the associated user, application, database, CPU time, etc. The information returned can be filtered to return only the active processes by using the 'active' parameter. Below is some sample code; in the referenced tip's screenshot, process 55 is shown being blocked by process 54.
USE Master
GO
EXEC sp_who2
GO
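If you only want the active processes mentioned above, a minimal sketch of the filtered call looks like this (same output columns; the BlkBy column shows which SPID is doing the blocking):
USE Master
GO
-- Return only active processes; the BlkBy column shows the blocking SPID
EXEC sp_who2 'active'
GO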

sys.dm_exec_requests DMV

The sys.dm_exec_requests DMV provides details on all of the processes running in SQL Server. With the WHERE condition listed below, only blocked processes will be returned.
USE Master
GO
SELECT * 
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
GO
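If you also want to see what each blocked session is executing, a hedged variation (not from the original tip) is to CROSS APPLY sys.dm_exec_sql_text, which is available in SQL Server 2005 and later:
USE Master
GO
-- Blocked sessions along with the text of the statement they are running
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text AS statement_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
GO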

sys.dm_os_waiting_tasks DMV

The sys.dm_os_waiting_tasks DMV returns information about the tasks that are waiting on resources. To view the data, users should have SQL Server System Administrator or VIEW SERVER STATE permissions on the instance.
USE Master
GO
SELECT session_id, wait_duration_ms, wait_type, blocking_session_id 
FROM sys.dm_os_waiting_tasks 
WHERE blocking_session_id <> 0
GO
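As an illustrative variation (again, not part of the original tip), the same DMV can be joined to sys.dm_exec_sessions to show who owns each blocked session:
USE Master
GO
-- Blocked sessions together with the login, host and application that own them
SELECT w.session_id, w.blocking_session_id, w.wait_duration_ms, w.wait_type,
       s.login_name, s.host_name, s.program_name
FROM sys.dm_os_waiting_tasks w
JOIN sys.dm_exec_sessions s ON s.session_id = w.session_id
WHERE w.blocking_session_id <> 0;
GO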

SQL Server Management Studio Activity Monitor

If you are more comfortable using SQL Server Management Studio to review locking and blocking as opposed to querying system objects or executing stored procedures, you are in luck.  There are even a few different tools in SQL Server Management Studio you can use.  The first option is the Activity Monitor, which can be accessed by navigating to the instance name | right click | select 'Activity Monitor'.  To view the Activity Monitor in SQL Server 2005 and SQL Server 2008, users should have SQL Server System Administrator or VIEW SERVER STATE permissions on the instance.

SQL Server Management Studio Reports

The second option in SQL Server Management Studio to monitor blocking is with the standard reports, which can be accessed by navigating to the instance name | right click | Reports | Standard Reports | Activity - All Blocking Transactions.  Once again, users should have SQL Server System Administrator or VIEW SERVER STATE permissions on the instance.


SQL Server Profiler

To capture blocking related data on a continuous basis, one option is to run SQL Server Profiler and save the data to a table or file for analysis purposes. In order to configure Profiler to capture blocking related data, launch Profiler, configure the general properties, then navigate to the Events Selection tab | select Show all events | Errors and Warnings | check the Blocked process report event, and then start the trace. In addition, be sure to configure the 'blocked process threshold' before you start Profiler using this code:
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO 
sp_configure 'blocked process threshold', 20
GO 
RECONFIGURE 
GO 
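The threshold value is specified in seconds, so the example above reports any process that has been blocked for more than 20 seconds. As a quick sanity check (a small addition, not part of the original tip), you can confirm the setting afterwards:
-- Display the currently configured 'blocked process threshold' (in seconds)
EXEC sp_configure 'blocked process threshold'
GO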
Ref: http://www.mssqltips.com/tip.asp?tip=2429

Thursday, July 28, 2011

WCF System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly


If you get the above error when trying to return large numbers of items in a List from a WCF service, then ensure you have set the following behaviour for the dataContractSerializer in both your client and service configuration files.

<dataContractSerializer maxItemsInObjectGraph="2147483646"/>

As described in this blog post, but with a larger number (i.e. close to the max of int): http://processmentor.com/community/blogs/scott_middleton/archive/2007/06/08/169.aspx

You need the setting in both the service layer and the client, set as follows.
The service configuration has this:
<serviceBehaviors>
  <behavior name="MyServiceBehavior">
    <serviceDebug includeExceptionDetailInFaults="true" />
    <serviceMetadata httpGetEnabled="true" />
    <dataContractSerializer maxItemsInObjectGraph="2147483646" />
  </behavior>
</serviceBehaviors>
<services>
  <service behaviorConfiguration="MyServiceBehavior"
           name="MyService.ServiceImplementation.MyService">
    <endpoint binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_Leads"
              bindingNamespace="http://MyService.ServiceContracts/2007/04"
              contract="MyService.ServiceContracts.IMyService" />
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  </service>
</services>

The client configuration has this:

<behaviors>
  <endpointBehaviors>
    <behavior name="LargeList">
      <dataContractSerializer maxItemsInObjectGraph="2147483646" />
    </behavior>
  </endpointBehaviors>
</behaviors>
<endpoint address="http://localhost/myservice.svc" binding="basicHttpBinding"
          bindingConfiguration="BasicHttpBinding_IMyService" contract="LeadsServiceInternal.IMyService"
          name="BasicHttpBinding_IMyService" behaviorConfiguration="LargeList" />
Note that you must also set the following two settings on the client: maxReceivedMessageSize and maxBufferSize.

<binding name="BasicHttpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00"
         receiveTimeout="00:10:00" sendTimeout="00:10:00" allowCookies="false" bypassProxyOnLocal="false"
         hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483646" maxBufferPoolSize="524288"
         maxReceivedMessageSize="2147483646" messageEncoding="Text" textEncoding="utf-8"
         transferMode="Buffered" useDefaultWebProxy="true">
  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
  <security mode="None">
    <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
    <message clientCredentialType="UserName" algorithmSuite="Default" />
  </security>
</binding>
Ref: http://consultingblogs.emc.com/merrickchaffer/archive/2007/09/19/WCF-System.Net.WebException_3A00_-The-underlying-connection-was-closed_3A00_-The-connection-was-closed-unexpectedly.aspx

Sql Server Deadlocks

Deadlocks can kill an application’s performance. Users will complain about the app being slow or broken. Developers will ask the DBA to fix the problem, DBAs will push the problem back on developers. The next thing you know, the office looks like Lord of the Flies.
What is a Deadlock?

A deadlock occurs when two queries need exclusive access to different tables and each query is waiting for the other to finish. Assume that there are two tables, tA and tB. There are also two queries, Q1 and Q2. The first query, Q1, takes an exclusive lock on tA at the same time that the second query, Q2, takes an exclusive lock on tB. So far, there’s nothing out of the ordinary happening. Q1 then requests exclusive access to tB. At this point we have a block. Q1 must wait for Q2 to release its lock before Q1 can finish. Q2 now requests an exclusive lock on tA. And here we have a deadlock.

Q1 won’t release its lock on tA until it can get a lock on tB. Q2 won’t release its lock on tB until it can get a lock on tA. In order for either query to finish, they need access to the other query’s resources. That’s just not going to happen. This is a deadlock.

In order for the database to keep responding, one of these queries has to go. The query that’s eliminated is called the deadlock victim.
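To make the tA/tB walkthrough concrete, here is a minimal sketch you could run in two separate sessions; tA and tB are throwaway tables with an id and a col column (assumptions for illustration, not objects from the article):
-- Step 1 (session 1, Q1):
BEGIN TRAN;
UPDATE tA SET col = 1 WHERE id = 1;   -- exclusive lock on tA
-- Step 2 (session 2, Q2):
BEGIN TRAN;
UPDATE tB SET col = 1 WHERE id = 1;   -- exclusive lock on tB
-- Step 3 (session 1): blocks, waiting for Q2's lock on tB
UPDATE tB SET col = 1 WHERE id = 1;
-- Step 4 (session 2): deadlock; one of the sessions receives error 1205
UPDATE tA SET col = 1 WHERE id = 1;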
Finding Deadlocks

What is the first sign of a deadlock? Queries that should be fast start taking a long time to respond. That’s the first sign of a deadlock, but that’s also the first sign of a lot of other problems. Another sign of a deadlock is an error (error 1205 to be precise) and a very helpful error message: Transaction (Process ID %d) was deadlocked on {%Z} resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

SQL Server is telling you exactly how to solve the problem: re-run your transaction. Unfortunately, if the cause of the deadlock is still running, odds are that your transaction will fail again. You can enable trace flags to detect deadlocks (trace flags 1204 and 1222), but they write the deadlock to the SQL Server error log and produce output that is difficult to read and analyze.

Once deadlocks show up, your database administrator might reach for a script to pull deadlocks out of Extended Events. Extended Events are a great source of data for analysis. Although they're a relatively new feature (they first appeared in SQL Server 2008), Extended Events already provide an incredibly rich set of tools for monitoring SQL Server. Event data can be held in memory (which is the default) or written out to a file. It's possible to build a set of monitoring tools that logs all deadlocks to a file and then analyzes that file after the events happen.

Yesterday's deadlocks are tomorrow's news!
Just like newspapers help us find out what happened yesterday, Extended Events provide a great way to investigate deadlocks that have already occurred. If you don't have any other monitoring tools in place, Extended Events are a great place to start. Once you start seeing deadlocks, you'll want to start gathering more information about them. It takes some skill to read the XML from a deadlock graph, but it contains a great deal of information about what happened. You can find out which tables and queries were involved in the deadlock, which process was killed off, and which locks caused the deadlock to occur.

The flip side of the coin is that Extended Events will give you very fine grained information about every deadlock that has already happened. There’s nothing in Extended Events to help you stop deadlocks from happening or even to detect them right when they are happening. Much like a microscope lets you look at a prepared slide in excruciating detail, Extended Events let you look at a single point in time in excruciating detail. You can only find out about things after they happen, not as they happen.
Deadlock Notifications

Wouldn't it be nice if you received notifications of deadlocks as they were happening? Good news! Deadlocks only happen when data is changed, so it's possible to wrap your modification statements inside a template to record any deadlocks that happen. This template takes advantage of some features in SQL Server to allow the deadlock notifications to be sent out asynchronously, so the notifications won't slow down any applications while they interact with SQL Server.
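The asynchronous plumbing the article alludes to isn't reproduced here, but a simplified, synchronous sketch of the wrapper idea looks roughly like this (dbo.Orders, @OrderId and dbo.DeadlockLog are hypothetical):
DECLARE @OrderId INT;
SET @OrderId = 42;   -- hypothetical parameter

BEGIN TRY
    BEGIN TRAN;
    -- the data modification being wrapped (hypothetical)
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = @OrderId;
    COMMIT;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK;
    IF ERROR_NUMBER() = 1205   -- this statement was chosen as a deadlock victim
        INSERT INTO dbo.DeadlockLog (OccurredAt, ErrorMessage)
        VALUES (GETDATE(), ERROR_MESSAGE());
END CATCH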

There is a big problem here: every stored procedure that modifies data needs this wrapper. If the wrapper is missed in one place, there won’t be any deadlock information collected from that query. If the wrapper needs to be changed, it has to be changed everywhere. This can be a good thing, of course, because you can target problem queries for reporting or different queries can respond in different ways. Like using Extended Events, this is a very fine grained mechanism for dealing with deadlocks. Action is taken at the level of a single execution of a query and not at the level of our entire application. If we’re going to take care of deadlocks, we want to do it once and fix things across the entire application.
Deadlocks by Design

Both the Extended Events and notification solution are very cunning ways to get information about deadlocks that have already happened. Neither solution helps applications respond to deadlocks as they happen.

Much like a director deciding to fix it in post, monitoring for deadlocks and trying to solve the problem is a reaction to something that should have been done right in the first place. Maybe budget constraints got in the way, maybe the software had to ship by a deadline, maybe there wasn’t expertise on the team to look into these problems. For whatever reason, something made it into production that causes deadlocks. It doesn’t matter what happened, the problem is there; deadlocks are happening.
Error 1205: Catching Deadlocks with Code

Application developers have tools they can use to cope with deadlocks. When SQL Server detects a deadlock and kills off a query, an error is thrown. That error makes its way back up to the software that made the database call. .NET developers can catch the exception and check its Number property (deadlocks throw error number 1205).

When a deadlock happens, SQL Server will kill off the cheapest transaction. The “cheapest” transaction is the transaction with the lowest cost; SQL Server is getting rid of the work that will be easiest to run a second time. Instead of letting deadlocks cause problems, developers can easily check the errors that come back from the database server and try again. You can also set the deadlock priority; if you don't have time to fix the code, you can specify which queries should run at a lower priority.
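In T-SQL the same idea can be sketched with a retry loop; the table, the retry count and the use of a lower deadlock priority are illustrative assumptions, not code from the article:
SET DEADLOCK_PRIORITY LOW;   -- volunteer this batch as the victim if a deadlock occurs

DECLARE @retries INT;
SET @retries = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;   -- hypothetical work
        COMMIT;
        BREAK;   -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1;   -- deadlock victim: try again
        ELSE
            SET @retries = 0;              -- a different error: stop and handle or log it
    END CATCH
END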

This is moving the problem up the chain. The users may not see that there is a deadlock, but the application code still needs to deal with it. Things can still be tricky, though. If there’s a long running transaction holding locks and causing deadlocks, no reasonable amount of re-tries will solve the deadlocking problem.
Reacting to Deadlocks with Architecture

The easiest way to eliminate deadlocks is to design the database to avoid deadlocks. It sounds facetious, doesn’t it? Of course the easiest way to avoid deadlocks is to design so they don’t happen!

There are a few architectural patterns to use in an application to avoid deadlocks.
Pattern 1: Using NOLOCK to Stop Deadlocks

NOLOCK for YESOUCH
A common way to stop deadlocks is to use the NOLOCK query hint. NOLOCK users advocate this approach because they believe it does what it says – it eliminates locking.

NOLOCK doesn’t get rid of all locks, just the ones that make your queries return the right results. You see, NOLOCK stops locking during read operations. In effect, it throws the hinted table or index into READ UNCOMMITTED and allows dirty reads to occur. Locks are still necessary for data modification; only one process can update a row at a time.

By using NOLOCK, you're telling the database that it's okay to give up read consistency; meanwhile, data modifications still take locks, so deadlocks can still happen.
Pattern 2: Indexing for Concurrency

In some cases, deadlocks are caused by bookmark lookups on the underlying table. A new index can avoid deadlocks by giving SQL Server an alternate path to the data. There’s no need for the select to read from the clustered index so, in theory, it’s possible to avoid a deadlock in this scenario.

Think about the cost of an index:
* Every time we write to the table, we probably end up writing to every index on the table.
* Every time we update an indexed value, there’s a chance that the index will become fragmented.
* More indexes mean more I/O per write.
* More indexes mean more index maintenance.

To top it off, there's a good chance that the index that prevents a deadlock may only be used by one query. A good index makes a single query faster. A great index makes many queries faster. It's always important to weigh the performance improvement of a single index against the cost to maintain that index and the storage cost to keep it around.
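As a hedged illustration of the pattern (the table and columns are made up), a covering index gives the SELECT its own path to the data so it no longer needs a bookmark lookup against the clustered index:
-- A query that filters on CustomerId and only returns Status and OrderDate
-- can be satisfied entirely by this index, avoiding the clustered index lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (Status, OrderDate);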
Pattern 3: Data Update Order

A simple change to the order of data modifications can fix many deadlocks. This is an easy pattern to say that you're going to implement. The problem with this pattern is that it's a very manual process. Making sure that all updates occur in the same order requires that developers or DBAs review all code that accesses the database, both when it's first written and when any changes are made. It's not an impossible task, but it will certainly slow down development.
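A minimal sketch of the idea, using the same hypothetical tA and tB tables as earlier: both procedures touch tA before tB, so neither can end up holding a lock the other needs in the opposite order.
-- Every code path that modifies both tables agrees on the order: tA first, then tB.
CREATE PROCEDURE dbo.usp_UpdatePathOne
AS
BEGIN
    BEGIN TRAN;
    UPDATE tA SET col = col + 1 WHERE id = 1;
    UPDATE tB SET col = col + 1 WHERE id = 1;
    COMMIT;
END
GO

CREATE PROCEDURE dbo.usp_UpdatePathTwo
AS
BEGIN
    BEGIN TRAN;
    UPDATE tA SET col = col - 1 WHERE id = 2;   -- still tA first...
    UPDATE tB SET col = col - 1 WHERE id = 2;   -- ...then tB
    COMMIT;
END
GO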

There’s another downside to this approach: in many scenarios, managing update order is simply too complex. Sometimes the correct order isn’t clear. Managing update order is made more difficult because SQL Server’s locking granularity can change from query to query.

In short, carefully controlling update order can work for some queries, but it’s not a wholesale way to fix the problem.
Common Patterns: Common Failures

One of the problems of all three patterns is that they’re all reactionary. Just like the two methods for detecting deadlocks, they get implemented after there is a problem. Users are already upset at this point. There has already been some kind of outage or performance problem that caused the users to complain in the first place. Of course, sometimes you inherit a problem and you don’t have the opportunity to get good design in place. Is there hope?

Whether you’re starting off new design, or combating existing problems, there is a way that you can almost entirely prevent deadlocks from occurring.
Using MVCC to Avoid Deadlocks

MVCC is a shorthand way of saying Multi-Version Concurrency Control. This is a fancy way of hinting at a much broader concept that can be summarized simply: by maintaining copies of the data as it is read, you can avoid locking on reads and move to a world where readers never block writers and writers never block readers.

This probably sounds like a big architectural change, right? Well, not really.

SQL Server 2005 introduced READ COMMITTED SNAPSHOT ISOLATION (RCSI). RCSI uses snapshots for reads, but still maintains much of the same behavior as the READ COMMITTED isolation level. With a relatively quick change (and about a 10 second outage), any database can be modified to make use of RCSI.
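The change itself is a single ALTER DATABASE, shown here as a sketch against a hypothetical database name; WITH ROLLBACK IMMEDIATE accounts for the brief outage mentioned above:
-- Enable read committed snapshot isolation for the database
ALTER DATABASE MyAppDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;   -- roll back open transactions so the change can complete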
When Should You Use RCSI?

If you actually want my opinion on the subject: always. If you’re designing a new application, turn on RCSI from the get go and plan your hardware around living in a world of awesome. TempDB usage will be higher because that’s where SQL Server keeps all of the extra versions. Many DBAs will be worried about additional TempDB utilization, but there are ways to keep TempDB performing well.

The bigger question, of course, is why should I use RCSI?
Use RCSI to Eliminate Locking, Blocking, Deadlocks, Poor Application Performance, and General Shortness of Breath

RCSI may not cure pleurisy, but it’s going to future proof your application. Somewhere down the road, if you’re successful, you’ll have to deal with deadlocks. Turning on RCSI is going to eliminate that concern, or make it so minimal that you’ll be surprised when it finally happens.
A Snapshot of The Future: Looking Past RCSI

RCSI is probably all that most people think they're going to need at the start of their architectural thinking. There will be circles and arrows and lines on a whiteboard and someone will say “We need to make sure that the DBAs don't screw this up.” What they really mean is “Let's talk to the data guys in a year about how we can make this greased pig go faster.”

Both of these versions can poop and bark.
During the early stages of an application’s life, a lot of activity consists of getting data into the database. Reporting isn’t a big concern because there isn’t a lot of data to report on and a few tricks can be used to make the database keep up with demands. Sooner or later, though, demand will outstrip supply and there will be problems. Someone might notice that long running reports aren’t as accurate as they should be. Numbers are close enough, but they aren’t adding up completely.

Even when you're using RCSI, versions aren't held for the duration of a transaction. The different isolation levels correspond to different phenomena and those phenomena, under a strict two-phase locking model, correspond to how long locks are held. When using one of the two MVCC implementations (RCSI or SNAPSHOT isolation), the isolation levels and their phenomena correspond to how long versions are kept around.

Using RCSI, or even READ COMMITTED, locks/versions are only held for a single statement. If a query has to read a table multiple times for a report, there’s a chance that there can be minor (or even major) changes to the underlying data during a single transaction. That’s right, even transactions can’t save you and your precious versions.

SNAPSHOT isolation makes it possible to create versions for the duration of a transaction – every time a query reads a row, it’s going to get the same copy of that row, no matter if it reads it after 5 seconds, 5 minutes, or 5 hours. There could be multiple updates going on in the background but the report will still see the same version of the row.
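To get those transaction-long versions, snapshot isolation has to be allowed at the database level and then requested by the reporting session; a sketch with a hypothetical database and table:
-- One-time database setting
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

-- In the reporting session
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT COUNT(*) FROM dbo.Orders;   -- hypothetical report query
    -- later reads inside this transaction see the same version of every row
COMMIT;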
Getting Rid of Deadlocks in Practice

There are manual ways to eliminate deadlocks, but they require significant effort to design and implement. In many cases deadlocks can be eliminated by implementing either READ COMMITTED SNAPSHOT ISOLATION or SNAPSHOT isolation. Making the choice early in an application's development, preferably during the architectural decisions, makes this change easy and painless; designed in from the start, it makes deadlocks a thing of the past.
Ref: http://www.brentozar.com/archive/2011/07/difficulty-deadlocks/

SQL Server, locking and hints

Over the past few years, SQL Server has blossomed from a small office data store to an enterprise-level database server. The number of users concurrently accessing a database also increased with this upgrade. SQL Server 2000's standard approach to locking resources often seems inefficient, but thankfully it provides features to override the standard locking. Locking hints may be used to tell the server how to lock resources, but let's examine locking before covering them.

What is a lock?
Relational database systems like SQL Server use locks to prevent users from stepping on each other's toes. That is, locks prevent users from making conflicting data changes. When one user has a particular piece of data locked, no other user may modify it. In addition, a lock prevents users from viewing uncommitted data changes. Users must wait for the changes to be saved before viewing. Data may be locked using various methods. SQL Server 2000 uses locks to implement pessimistic concurrency control among multiple users performing modifications in a database at the same time.

Deadlocks
A database deadlock can occur when there is a dependency between two or more database sessions for some set of resources. A deadlock is a condition that can occur on any system with multiple threads, not just on a relational database management system. A thread in a multithreaded system may acquire one or more resources (for example, locks). If the resource being acquired is currently owned by another thread, the first thread may have to wait for the owning thread to release the target resource. The waiting thread is said to have a dependency on the owning thread for that particular resource. The following listing shows the text of an exception where a deadlock occurred:
System.Data.SqlClient.SqlException: Transaction (Process ID 12) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

This exception was thrown when one SQL Server call conflicted with another process that held a lock on the necessary resource. Consequently, one of the processes was terminated. This is a common error message for deadlocks, with the process ID being unique to the system.

Types of locks
A database system may lock data items at one of many possible levels within the system hierarchy. The possibilities include:

Rows—an entire row from a database table
Pages—a collection of rows (usually a few kilobytes)
Extents—usually a collection of a few pages
Table—an entire database table
Database—the entire database is locked


Unless otherwise specified, the database uses its own judgment to determine the best locking approach based upon the scenario. Locking is a resource-intensive activity (with respect to memory), so this is not always the best approach. Thankfully, SQL Server does provide a way to circumvent the default behavior. This is accomplished with locking hints.

Hints
There are times when you need to override SQL Server's locking scheme and force a particular range of locks on a table. Transact-SQL provides a set of table-level locking hints that you can use with SELECT, INSERT, UPDATE, and DELETE statements to tell SQL Server how you want it to lock the table by overriding any other system-wide or transactional isolation levels. The available hints include the following:

FASTFIRSTROW—The query is optimized to get the first row of the result set.
HOLDLOCK—Hold a shared lock until the transaction has been completed.
NOLOCK—Do not issue shared locks or recognize exclusive locks. This may result in data being returned that has been rolled back or has not been committed; therefore, working with dirty data is possible. This may only be used with the SELECT statement.
PAGLOCK—Uses page locks instead of a single table lock.
READCOMMITTED—Read only data from transactions that have been committed. This is SQL Server's default behavior.
READPAST—Rows locked by other processes are skipped, so the returned data may be missing rows. This may only be used with the SELECT statement.
READUNCOMMITTED—Equivalent to NOLOCK.
REPEATABLEREAD—Locks are placed on all data used in queries. This prevents other users from updating the data, but new phantom rows can be inserted into the data set by another user and are included in later reads in the current transaction.
ROWLOCK—Locks the data at row level. SQL Server often locks at the page or table level to modify a row, so developers often override this setting when working with single rows.
SERIALIZABLE—Equivalent to HOLDLOCK.
TABLOCK—Lock at the table level. You may want to use this when performing many operations on table-level data.
UPDLOCK—Use update locks instead of shared locks while reading a table, and hold locks until the end of the transaction. This has the advantage of allowing you to read data without blocking other readers and to update it later knowing the data has not changed since you read it.
XLOCK—Uses an exclusive lock on all resources until the end of the transaction.


Microsoft has two categories for the hints: granularity and isolation-level. Granularity hints include PAGLOCK, NOLOCK, ROWLOCK, and TABLOCK. On the other hand, isolation-level hints include HOLDLOCK, NOLOCK, READCOMMITTED, REPEATABLEREAD, and SERIALIZABLE. A maximum of one from each group may be used.

These hints allow the consultant to control the locking used by SQL Server, and they are included in the Transact-SQL statement. They are placed in the FROM portion of the statement, following the WITH keyword. The WITH keyword is optional in SQL Server 2000, but Microsoft strongly urges its inclusion. This leads many to believe that it may be mandatory in future SQL Server releases. Here is the hint syntax as it applies to the FROM clause:
[ FROM { < table_source > } [ ,...n ] ]
< table_source > ::=
table_name [ [ AS ] table_alias ] [ WITH ( < table_hint > [ ,...n ] ) ]
< table_hint > ::=
{ INDEX ( index_val [ ,...n ] )
| FASTFIRSTROW
| HOLDLOCK
| NOLOCK
| PAGLOCK
| READCOMMITTED
| READPAST
| READUNCOMMITTED
| REPEATABLEREAD
| ROWLOCK
| SERIALIZABLE
| TABLOCK
| TABLOCKX
| UPDLOCK
| XLOCK }

While this syntax does show its usage, it's easier to show a real example. The following Transact-SQL statement selects all data from the Employees table of the Northwind database:
SELECT *
FROM Employees WITH (nolock)

This gives me all data regardless of what other processes are currently doing with it, so the data may be dirty, but this is not important to my task. Another example updates all rows in a table, setting a field to a certain value:
UPDATE
Employees WITH (tablock)
SET Title='Test'

This example is updating every row in the table, so a table lock is utilized.
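As one more example against the same Northwind table, the UPDLOCK hint described earlier covers the read-then-update case: the row is read without blocking other readers, yet nobody else can change it before the update runs. This is a hedged sketch rather than anything from the original article:
BEGIN TRAN;

-- Read the row and hold an update lock on it until the transaction ends
SELECT Title
FROM Employees WITH (UPDLOCK, ROWLOCK)
WHERE EmployeeID = 1;

-- The row cannot have been changed by another session in the meantime
UPDATE Employees
SET Title = 'Test'
WHERE EmployeeID = 1;

COMMIT;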

Alternate
At this point, I must stress the fact that even though a table-level hint is specified in code, the query optimizer may ignore the hint. Table-level hints are ignored if the query optimizer does not choose the table for use in the subsequent query plan. Also, the query optimizer will often choose an indexed view over a table. Lastly, a hint may be ignored if the table contains computed columns.

Use your discretion
Whether you use table hints in your applications, and how, depends upon what is required. For example, many consultants love to use the FASTFIRSTROW hint to return the first row quickly. This gives them something to work with while the rest of the query completes. When the data is unlikely to change (e.g., archived data), the NOLOCK hint is a good choice since the data is basically static. On the other hand, this approach would not be good for financial applications, where accuracy is a must.
Ref: http://www.techrepublic.com/article/control-sql-server-locking-with-hints/5181472

What is Cloud Computing

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.

A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.

A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.

Infrastructure-as-a-Service providers such as Amazon Web Services supply virtual server instances and storage on demand, and customers use the provider's application program interface (API) to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed, and bring more online as soon as it is required. Because this pay-for-what-you-use model resembles the way electricity, fuel and water are consumed, it's sometimes referred to as utility computing.

Platform-as-a-Service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. Force.com (an outgrowth of Salesforce.com) and Google Apps are examples of PaaS. Developers need to know that currently there are no standards for interoperability or data portability in the cloud. Some providers will not allow software created by their customers to be moved off the provider's platform.

In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure and the software product, and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.
Reference: http://searchcloudcomputing.techtarget.com/definition/cloud-computing