Friday, May 28, 2010

SQL Server 2008 Editions and Downscaling

I'm having to look at downscaling the SQL infrastructure.

Not surprisingly, the cost of Enterprise edition is a large issue for a small company. Especially when you factor in the number of processors and the need for multiple environments. Consolidation and downscaling are therefore the order of the day.

Cost aside, what FUNCTIONALITY will I stand to lose? (and miss, sob...)
  1. Data & Index Compression
  2. Backup Compression (although this is available in the STANDARD edition of SQL 2008 R2)
  3. Table Partitioning
  4. Online Index Rebuilds
  5. Resource Governor
ref : Features Supported by the Editions of SQL Server 2008
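
Before planning the move, it's worth confirming exactly what's running. A quick sketch using the standard SERVERPROPERTY function:

```sql
-- Report the edition and version of the current instance,
-- useful before planning an edition downgrade.
SELECT
  SERVERPROPERTY('Edition')        AS Edition,
  SERVERPROPERTY('ProductVersion') AS ProductVersion,
  SERVERPROPERTY('ProductLevel')   AS ProductLevel;
```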

SQL University: Parallelism Week

Jorge Segarra aka SQLChicken (blog | twitter) hosts SQL University, which brings experts together to write some quality learning content on SQL Server topics.

This week, it's Parallelism and the following pages are must reads on the subject...

SQL University: Parallelism Week - Introduction

SQL University: Parallelism Week - Part 2, Query Processing

SQL University: Parallelism Week - Part 3, Settings and Options

Thursday, May 27, 2010

SQL Date and DateTime Formats

A while ago (ok, 3 years ago) I posted a script to display all SQL date formats.

The output of that script is below for quick reference (as I keep forgetting the codes)

To display a DATETIME in the desired format, use -

CONVERT(VARCHAR(30),GETDATE(), formatcode)

e.g. SELECT CONVERT(VARCHAR(30),GETDATE(),21)
which provides 2010-05-28 11:09:34.263


Format codes come from the table below. You can reproduce this result set by running this script.

formatcode  datestring
1           05/28/10
2           10.05.28
3           28/05/10
4           28.05.10
5           28-05-10
6           28 May 10
7           May 28, 10
8           10:34:48
9           May 28 2010 10:34:48:687AM
10          05-28-10
11          10/05/28
12          100528
13          28 May 2010 10:34:48:687
14          10:34:48:687
20          2010-05-28 10:34:48
21          2010-05-28 10:34:48.693
22          05/28/10 10:34:48 AM
23          2010-05-28
24          10:34:48
25          2010-05-28 10:34:48.693
100         May 28 2010 10:34AM
101         05/28/2010
102         2010.05.28
103         28/05/2010
104         28.05.2010
105         28-05-2010
106         28 May 2010
107         May 28, 2010
108         10:34:48
109         May 28 2010 10:34:48:703AM
110         05-28-2010
111         2010/05/28
112         20100528
113         28 May 2010 10:34:48:703
114         10:34:48:703
120         2010-05-28 10:34:48
121         2010-05-28 10:34:48.703
126         2010-05-28T10:34:48.703
127         2010-05-28T10:34:48.703
130         15 ????? ??????? 1431 10:34:48
131         15/06/1431 10:34:48:703AM
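
The same style codes also work in the other direction, parsing strings into DATETIME. A small sketch:

```sql
-- Style 103 (dd/mm/yyyy) parses a British-format string...
SELECT CONVERT(DATETIME, '28/05/2010', 103);

-- ...while style 101 (mm/dd/yyyy) reads the US format.
SELECT CONVERT(DATETIME, '05/28/2010', 101);
```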

Link : How to format datetime in SQL 2005

Wednesday, May 26, 2010

SQL 2008 : Row-Overflow Data Exceeding 8 KB

Prior to SQL 2008, tables where a single row exceeded a page (8 KB) needed rethinking.

From MSDN >

" A table can contain a maximum of 8,060 bytes per row. In SQL Server 2008, this restriction is relaxed for tables that contain varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns. The length of each one of these columns must still fall within the limit of 8,000 bytes; however, their combined widths can exceed the 8,060-byte limit. This applies to varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns when they are created and modified, and also to when data is updated or inserted."

MSDN : Row-Overflow Data Exceeding 8 KB

Solace : Maximum row size

Friday, May 21, 2010

Bookmark : Automating Sliding Window Table Partitioning

This article is entitled 'Automating Sliding Window Table Partitioning' and describes one method (using SSIS) to implement horizontal partitioning on the fly.

I think it would need further work in terms of partition file placement and carries a small security consideration in that the package would need sufficient permissions to run the ALTER DATABASE and ALTER PARTITION SCHEME statements.

Still a good method and article though...

Link : Automating Sliding Window Table Partitioning

Bookmark : Scaling writes in MySQL

Some interesting reads on scaling writes in MySQL.

INSERT IGNORE is a cool feature, I wish MS SQL had it...

Link : Scaling writes in mySql
Scaling mysql writes through partitioning
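
For what it's worth, the usual T-SQL approximation of INSERT IGNORE is a conditional insert. A sketch (table and variable names are hypothetical, and the pattern is not concurrency-safe without appropriate locking hints or a MERGE):

```sql
-- Hypothetical table with a unique key we don't want to violate.
CREATE TABLE dbo.Tags
(
 TagName VARCHAR(100) NOT NULL PRIMARY KEY
);

DECLARE @tag VARCHAR(100) = 'sql-server';

-- Insert only if the key is not already present,
-- roughly what MySQL's INSERT IGNORE does for duplicates.
INSERT INTO dbo.Tags (TagName)
SELECT @tag
WHERE NOT EXISTS (SELECT 1 FROM dbo.Tags WHERE TagName = @tag);
```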

Wednesday, May 19, 2010

Calculating Maximum Row Size of a table

A quick script to calculate the maximum size of a single row in a table.

The max_length column of sys.columns shows the maximum size of the column in bytes.
NB Unicode (NVARCHAR) takes 2 bytes per character, i.e. max_length will be twice the
declared character length of the column, and (MAX) columns report -1.

I use it here to show the size of the row.

Replace 'tablename' as appropriate.

SELECT
 o.[name],
 o.[object_id],
 COUNT(c.[name]) AS columncount,
 SUM(c.max_length) AS maxlength
FROM sys.objects o
INNER JOIN sys.columns c ON o.[object_id] = c.[object_id]
WHERE o.[type] = 'U'
--AND o.[name] = 'tablename'
GROUP BY
 o.[name],
 o.[object_id]
ORDER BY
 o.[name]

and the columns involved...

SELECT
  s.name AS schemaname
, o.name AS tablename
, c.name AS columnname
, c.max_length
FROM sys.objects o
INNER JOIN sys.columns c ON o.object_id = c.object_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id 
WHERE o.name  = 'tablename'

Friday, May 14, 2010

Bookmark : Who is Active? script

An excellent tool for DBAs, 'Who is Active'.
The script itself is quite impressive and is a lot more helpful than sp_who!

Create it in the master database, and run it by >

exec sp_WhoIsActive

Link : Adam Machanic : Who Is Active v9.57

Tuesday, May 11, 2010

HASHBYTES Function to implement a one way hash

Here I demonstrate using HASHBYTES to generate a hash of a string value.
I've chosen to use SHA1 for this demo which produces a 160-bit (20 byte) hash.

SELECT CAST(REPLICATE('Z',1024) AS NVARCHAR(1024))
-- returns me a string of 1024 'Z' characters

SELECT LEN(CAST(REPLICATE('Z',1024) AS NVARCHAR(1024)))
-- clarifies the length is 1024 

SELECT DATALENGTH(CAST(REPLICATE('Z',1024) AS NVARCHAR(1024)))
-- shows the real storage length is 2048 (unicode, i.e. Nvarchar requires double byte storage)

SELECT HASHBYTES('SHA1', CAST(REPLICATE('Z',1024) AS NVARCHAR(1024)))
-- shows the SHA1 hash of that Nvarchar string

SELECT LEN(HASHBYTES('SHA1', CAST(REPLICATE('Z',1024) AS NVARCHAR(1024))))
-- confirms the length of that SHA1 hash is 20

DECLARE @myhash VARBINARY(20)
SELECT @myhash = HASHBYTES('SHA1', CAST(REPLICATE('Z',1024) AS NVARCHAR(1024)))
-- assigns varbinary hash to variable

In the 'real world', I'm using this to create indexable key values for long character strings that otherwise couldn't participate in indexes.

CREATE TABLE [my].[SearchTerms](
 [SearchID] [bigint] IDENTITY(1,1) NOT NULL,
 [SearchTerm] [nvarchar](1024) NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [my].[SearchTerms] 
ADD SearchTermHash AS CAST(HASHBYTES('SHA1',SearchTerm) AS VARBINARY(20))
GO 
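
A sketch of the lookup side (the index name and variable below are hypothetical): hash the search value with the same algorithm as the computed column, seek on the hash, then re-check the full string to guard against collisions.

```sql
-- Hypothetical index on the hash column to enable seeks.
CREATE INDEX ix_SearchTerms_Hash ON [my].[SearchTerms] (SearchTermHash);
GO

DECLARE @term NVARCHAR(1024) = N'some long search term';

SELECT SearchID, SearchTerm
FROM [my].[SearchTerms]
WHERE SearchTermHash = CAST(HASHBYTES('SHA1', @term) AS VARBINARY(20))
      -- must use the same algorithm as the computed column
  AND SearchTerm = @term; -- re-check: different strings can share a hash
```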

solace : quindecillion

Moving Tables / Indexes across filegroups

How to move a table / index to a new filegroup.
This can be a costly operation in terms of time and I/O.

1) Create the filegroup

ALTER DATABASE myDatabase ADD FILEGROUP NewFileGroup
GO

ALTER DATABASE myDatabase
ADD FILE
( NAME = N'NewFileGroupData'
, FILENAME = 'D:\Data\NewFileGroupData.ndf'
, SIZE = 5000MB
, FILEGROWTH = 10%)
TO FILEGROUP NewFileGroup
GO


2A) If the table has a primary key or unique constraint, drop and recreate the constraint ...

USE myDatabase  
GO  
   
ALTER TABLE my.table DROP CONSTRAINT PK_Move WITH (MAXDOP = 1, MOVE TO NewFileGroup)
GO  
   
ALTER TABLE my.table ADD CONSTRAINT PK_Move PRIMARY KEY(id) WITH (MAXDOP = 1)  
GO 

2B) For a clustered / non-clustered index (outside of a constraint), recreate the index
using the CREATE INDEX statement, with DROP_EXISTING = ON

USE myDatabase  
GO  
CREATE NONCLUSTERED INDEX [ix_movethisindex] ON [my].[table] 
(
 [column1] ASC,
 [column2] ASC,
 [column3] ASC
)WITH (DROP_EXISTING = ON, ONLINE = ON, DATA_COMPRESSION = PAGE) ON [NewFileGroup]
GO

Note : I use;
  • ONLINE = ON for an ONLINE index build (Enterprise, Developer, and Evaluation editions only)
  • DATA_COMPRESSION = PAGE (I'm on SQL 2008 Enterprise and am utilising the compression functionality)


2C) To move heaps (tables with no clustered index) across filegroups, create a clustered index on the new filegroup, and then remove it again. This has the downside of temporarily sorting the table, but the heap remains on the new filegroup after the index is dropped.

USE myDatabase  
GO  

CREATE CLUSTERED INDEX [ix_temp] ON [my].[table] 
(
 ID ASC
)WITH ( ONLINE = OFF, DATA_COMPRESSION = PAGE) ON [NewFileGroup]
GO

DROP INDEX [ix_temp] ON [my].[table] 

Friday, May 7, 2010

TSQL : Paging results

Adapted from SSC's Paging : The Holy Grail
How to page a result set with minimal server load >

DECLARE @startRow INT ; 
DECLARE @resultsSize INT ; 

SET @startRow = 50
SET @resultsSize = 50

;WITH cols
AS
(
    SELECT table_name, column_name,
        ROW_NUMBER() OVER(ORDER BY table_name, column_name) AS sequence,
        ROW_NUMBER() OVER(ORDER BY table_name DESC, column_name desc) AS totalRows
    FROM [INFORMATION_SCHEMA].columns
)
SELECT table_name, column_name, (totalRows + sequence - 1) AS totalRows
FROM cols
WHERE sequence BETWEEN @startRow AND @startRow + (@resultsSize - 1)
ORDER BY sequence

Bookmark : APPLY operator

APPLY explained really well...

SSC : Understanding and Using APPLY (Part 1)

SSC : Understanding and Using APPLY (Part 2)

Monday, May 3, 2010

Bookmark : Sorting SQL Scripts in your SSMS Project (Solution Explorer)

A cool find today. A stored procedure that can help you reorder your SQL Solution by reordering the XML project file via XQuery.

It's a clever idea (and a good example of XML manipulation), but I believe removing all my SQL query files and adding them again has the same effect! (and is quicker than editing the XML solution file)

Sorting Query files in SQL Server Management Studio (SSMS) Solution/Project

Saturday, May 1, 2010

Manipulatable results from a stored procedure

This is a HACK, i.e. a tip/trick to help fetch data.
DO NOT use it in a production environment...

Step 1
Add the local server as a linked server using the alias [.] >
(I'm setting the authentication to be the currently logged in user).

USE [master]
GO

EXEC master.dbo.sp_addlinkedserver @server = N'.', @srvproduct=N'SQL Server'
GO

EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'.', @locallogin = NULL , @useself = N'True'
GO

Step 2
Use OPENQUERY to return a results set you can manipulate...

SELECT column1, column2, column3, COUNT(*) 
FROM OPENQUERY([.],'EXEC [database].schema.sp_mysproc
  @param1 = 450000
 ,@param2=10000') 
GROUP BY column1, column2, column3
ORDER BY COUNT(*) DESC
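
An alternative worth noting that avoids the linked server entirely: capture the output with INSERT ... EXEC into a temp table, then query that. A sketch (the column names and types below are hypothetical and must match the procedure's actual result set, which is the bookkeeping OPENQUERY lets you skip):

```sql
-- Hypothetical temp table: its columns must match the
-- procedure's result set exactly.
CREATE TABLE #results
(
 column1 INT,
 column2 INT,
 column3 INT
);

INSERT INTO #results
EXEC [database].schema.sp_mysproc @param1 = 450000, @param2 = 10000;

SELECT column1, column2, column3, COUNT(*)
FROM #results
GROUP BY column1, column2, column3
ORDER BY COUNT(*) DESC;

DROP TABLE #results;
```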