SharePoint 2010 Optimization: Start with SQL

If you need more than 4 GB of storage for your SharePoint 2010 environment and BLOB linking is not for you, working with SQL Server is on your plate. SharePoint 2010 has a lot of databases, each of which has its own role and performance profile. Knowing these can help you squeeze that extra bit out of your environment.

A typical SharePoint 2010 environment (typical, at least, relative to your individual needs, budget, etc.) has the potential to look like this. If yours does not, and it almost certainly does not 🙂, I’m sure you can see the similarities.

Viewing this in a compartmentalized fashion, skewed towards where you can gain real and perceived performance improvements, you have roughly three major sections:

A: The database
B: The WFEs
C: The UI / General payload

I’m going to focus on the databases: what they are and what they do. With knowledge comes power, as they say.

So starting with the basics…


System
A normal installation of SQL Server drops a handful of databases on the disk.
Expect to see:

  1. MASTER: Records all system level information for a SQL Server instance. E.g. logins, configuration, and other databases.
  2. MSDB: Records operators and is used by SQL Server Agent to schedule jobs and alerts.
  3. MODEL: Is used as the template for all the databases created in the instance.
  4. TEMPDB: All temporary tables, temporary stored procedures, and anything else that is ‘temporary’ is stored here. Like a cache, it is recreated every time the SQL Server instance is started.

Notes:
  • Of these four, only the last one, TEMPDB, is really a candidate for its own spindle.
  • Size wise, TEMPDB is the only one you should expect any real growth from.
  • All four scale vertically, meaning you can only allow them to grow in size rather than add extra databases.
Reporting Services
Reporting Services is often used with SharePoint 2010’s Excel, Visio, and PerformancePoint Services but is not required. Access Services does require it, specifically the SQL Server 2008 R2 Reporting Services Add-in.
Expect to see:
  1. ReportServer: This stores all the metadata for reports, such as definitions, history, scheduling, and snapshots. When ReportServer is actually in use, the report documents themselves are stored in SharePoint content databases.
  2. ReportServerTempDB: When reports are running temporary snapshots are found here.

Notes:
  • Both of these databases must coexist on the same database server. 
  • They have a heavy read load.
  • The ReportServerTempDB can grow if there is heavy use of cached snapshots. Just like the System databases, both scale vertically, meaning you can only allow them to grow in size rather than add extra databases.
Overview 1
Okay, so now we have a bunch of databases sitting on our server making it look a tad like this:
Now let’s start to add in SharePoint 2010…
Microsoft SharePoint Foundation 2010
Microsoft SharePoint Foundation 2010 drops a number of databases to your disks. (If you install Search Server 2010 Express you will see some of these databases. I have put a * next to the name if it is a factor in play for that kind of scenario.)
Expect to see:

  1. Configuration*: As implied by its name, this is where information about the environment is stored. Databases, IIS sites, Web Applications, Trusted Solutions, Web Part Packages, site templates, blocked file types, quotas; they’re all in here. There can only be one configuration database per farm and you can expect it to remain relatively small.
  2. Central Administration Content*: Just like the configuration database, there can only be one, it doesn’t really grow much, and as its name implies it stores the Central Administration site’s content.
  3. Content*: All site content is stored in these guys. Site pages, documents, lists, web part properties, audit logs, user names and rights, they’re all in here. Office Web Application data is also stored in here.
  4. Usage*: The Usage and Health Data Collection Service Application uses this database. This write heavy database is unique in that it is the only database that can be (and should be) queried directly by external applications. There can only be one per farm. Health monitoring and usage data used for reporting and diagnostics is in here.
  5. Business Data Connectivity*: External Content Types and related objects are in here. It is typically a read heavy yet small sized database. 
  6. Application Registry*: You really only see this if backward compatibility with the BDC API is needed. Typically this can be deleted AFTER upgrading is complete.
  7. Subscription Settings: Features and settings for hosted customers get put in here. It is not something you normally see, as it has to be manually created using PowerShell or SQL Server. If it is present you can expect it to remain small and busy only with reads.
Note:

  • Content databases can hold multiple site collections and their content. 
  • They can get BIG. Their size is dependent on size and count of both content and users.
  • 200 GB should be your ceiling. Odds are you will see a performance hit as you pass that number. 1 TB or more is an option, but only for Records Centers and the like, i.e. non-collaborative, read-only usage and data retrieval. Add more content databases as you reach that 200 GB level.
  • Due to its nature it is wise to move the Usage database to a separate spindle.
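The 200 GB guidance above boils down to a simple capacity-planning exercise. A minimal sketch, assuming a naive first-fit strategy (the 200 GB ceiling is the only number taken from this post; the function name, the strategy, and the example sizes are my own illustration):

```python
def plan_content_databases(site_collection_sizes_gb, ceiling_gb=200):
    """First-fit sketch: assign site collections to content databases,
    keeping each database under the suggested 200 GB ceiling."""
    databases = []  # each entry is a list of site collection sizes (GB)
    for size in site_collection_sizes_gb:
        for db in databases:
            if sum(db) + size <= ceiling_gb:
                db.append(size)
                break
        else:
            # no existing database has room: add a new content database
            databases.append([size])
    return databases

# Four site collections; the 150 GB one forces a third content database.
print(plan_content_databases([120, 90, 60, 150]))
# prints [[120, 60], [90], [150]]
```

In real life the split is driven by more than raw size (user count, collaboration patterns, backup windows), but the arithmetic of staying under the ceiling is exactly this simple.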
Overview 2
Okay, so now we have Foundation running with a bunch more databases sitting on our server making it look a tad like this:

Right, so now let’s move on to the Standard edition.

Microsoft SharePoint Server 2010 Standard
Compared directly to Foundation, Microsoft SharePoint Server 2010 Standard drops a number of extra databases to your disks. (If you install Search Server 2010 Express you will see some of these databases. I have put a * next to the name if it is a factor in play for that kind of scenario.)
Expect to see:
  1. Search Administration*: Used by the Search application, the ACL and configuration information is in here. Growth can be vertical or horizontal – if you create new instances of the service application.
  2. Crawl*: Again used by the Search application, the state of the crawled content/data and the historical crawl records are in here. This database can get a bit big, and if you are crawling LARGE amounts of data it is a good idea not to have it on the same server as the next database, Property. It is also a good idea to be using the Enterprise version of SQL Server so that you can leverage its data compression functionality. This is a read heavy database.
  3. Property*: Again used by the Search application, data associated with the crawls is stored in here, such as history, crawl queues, and general properties. If you have large volumes of data, move this to its own server and you will see performance improvements with regard to search query result returns. This is a write heavy database.
  4. Web Analytics Reporting*: Used by the Web Analytics application, aggregated report tables, fact data grouped by site – date – asset, diagnostic information can be found in here. Depending on your policies this database can get massive. As it grows scale out by adding more databases. 
  5. Web Analytics Staging*: Temporary data (unaggregated) is in here. 
  6. State*: Used by the State Service application, InfoPath Forms Services, and Visio Services. Temporary state information for Exchange is also in here. More State databases can be added via PowerShell. 
  7. Profile: Used by the User Profile service application, users and their information is found and managed in here. Additional databases can be created if you create additional service instances. Expect lots of reads and medium growth.
  8. Synchronization: Again, used by the User Profile service application. It contains configuration and staging data for use when a directory service such as AD is being synched with. Scaling up or out is the same as with Profile. Its size can be influenced by not just the number of users and groups but their ratios as well. Expect fairly balanced read vs write activity.
  9. Social Tagging: Again, used by the User Profile service application. As users create social tags and notes they are stored, along with their URLs in here. Expect growth to be representative of how much your community embraces the functionality of social tagging. Scaling up or out is the same as with Profile. This is definitely more read than write heavy. 
  10. Managed Metadata Service: Syndicated content types and managed metadata is stored in here. Growth can be vertical or horizontal – if you create new instances of the Managed Metadata service application. 
  11. Secure Store*: Account names, passwords, and their mappings are stored in here. This is where you may have to collaborate with other internal groups, as it is suggested that this database be hosted on a separate database instance with limited access. Growth can be vertical or horizontal – if you create new instances of the service application.

Note:
  • Watch the Web Analytics Reporting database. It can get very big.
  • Use compression for Crawl if you have the Enterprise version of SQL.
Overview 3
Okay, so now we have SharePoint Server 2010 Standard edition running with a bunch more databases sitting on our server making it look a tad like this:

Right, so now let’s move on to the Enterprise edition…

Microsoft SharePoint Server 2010 Enterprise
Microsoft SharePoint Server 2010 Enterprise drops extra databases on top of the Standard list to handle Word and PerformancePoint.
Expect to see:
  1. Word Automation Services: Used by the Word Automation service application, information about pending and completed document conversions is found in here. This database can be expected to remain relatively small.
  2. PerformancePoint: Used by the PerformancePoint service application, settings, persisted user comments, and temporary objects are stored in here. It can be expected to be read heavy and can be scaled up or out.
Overview 4
Okay, so now we have SharePoint Server 2010 Enterprise edition running on our server making it look a tad like this:

Where else can we go from here? Well, there’s always Project Server, PowerPivot, and FAST… All are complicated products that require SharePoint Server 2010 Enterprise.

Microsoft Project Server 2010

Expect to see:

  1. Draft: Not accessible to end users, this is used to store data used by the project queue. There can only be one, as far as I know.
  2. Published: When a project is published this is where it lives. Timesheets, resources, custom fields, and all the other metadata are stored in here. This database can get large.
  3. Archive: Backup data for projects, resources, calendars, etc. is in here. Growth can be limited; if it is not limited and the archive is used, it will get large.
  4. Reporting: This is the repository of the entire project portfolio. When you publish a project plan, a copy is put in here. Read heavy and large…

Microsoft FAST Search Server 2010 for SharePoint

Expect to see:

  1. Search Administration: Stores and manages search setting groups, keywords, synonyms, promotion/demotions, best bets, and search schema metadata. It should stay on the small side and remain read heavy. 

Microsoft SQL Server PowerPivot for SharePoint

Expect to see:

  1. PowerPivot Service Application: Cached and loaded PowerPivot files are in here, along with usage data and schedules. You can expect this database to remain small.
Note:
  • PowerPivot files can be rather big, and PowerPivot stores data in content databases as well as the Central Administration database.
So where does that leave us? If you have everything above installed you should expect to see something like this…
Looking at these databases and what could/should be candidates for having their own platters, you should start to see something like this:

Note that the Secure Store should be moved primarily for security purposes. 


It must also be mentioned that each and every environment can prove to be unique, so you may not see value in moving anything other than your content databases to another platter.

So what does this look like in the wild? Viewing a screenshot of an (admittedly, deliberately slightly duplicated) environment, you can see these databases as they would appear in real life…

Their file system looks like this:

Clearly there is job security for DBAs going forward :). There is also a level of complexity that merits caution and planning when considering SharePoint 2010. The phrase ‘a stitch in time’ continues to hold value…

SQL Server 2008 R2 released to manufacturing

Microsoft’s flagship database, SQL Server 2008 R2, was completed and released to manufacturing today. With improved scalability, support for up to 256 logical processors, and simplified centralized management of multi-server deployments, it should be a boon to many. The BI aspect alone should make its adoption rate outpace that of previous versions. Master Data Services, richer reports, and improved analysis features all combine to make a solid product even better.

Personally I look forward to testing out StreamInsight with real time log analysis from SharePoint 2010 farms and RFID deployments.

A random tidbit on non random data

I was recently talking with somebody who felt that TrueCrypt hidden volumes were the bee’s knees. The scenario they used, and which I myself have read ‘musings’ about, involved a laptop carrying sensitive corporate data being seized by customs. The laptop drive gets “reviewed”, the secret container is not seen, and the laptop passes as normal and uninteresting. Big deal. The bigger deal is if you have 007-style data and that guy in the uniform is pretty certain you have it as well. My colleague’s version of the story ends with an almost Hollywood-style exhalation of breath and a cinematic zoom out to the hero walking out the door. That’s probably not how it would pan out…

TrueCrypt volumes, which are essentially files, have certain characteristics that allow programs such as TCHunt to detect them with high *probability*. The most significant, in mathematical terms, is that their file size modulo 512 is 0. Now, it is certainly true that TrueCrypt volumes do not contain known file headers and that their content is indistinguishable from random data, so it is difficult to definitively prove that certain files are TrueCrypt volumes. However, their very presence can provide reasonable suspicion that they contain encrypted data.

The actual math behind this is interesting. TrueCrypt volume files have file sizes that are evenly divisible by 512, and their content passes chi-square randomness tests. A chi-square test is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-square distribution* when the null hypothesis is true, or any in which this is asymptotically true. Specifically, this means that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-square distribution as closely as desired by making the sample size large enough.
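To make the two heuristics concrete, here is a minimal sketch of them in Python. The function names and the critical-value threshold are my own illustrative choices; TCHunt’s actual implementation may differ:

```python
import collections

def chi_square_stat(data: bytes) -> float:
    """Pearson chi-square statistic of the observed byte counts against
    a uniform distribution over the 256 possible byte values."""
    expected = len(data) / 256
    counts = collections.Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

def looks_encrypted(data: bytes, critical: float = 310.46) -> bool:
    """Flag a candidate TrueCrypt volume: size evenly divisible by 512
    and content a chi-square test cannot distinguish from random.
    310.46 is roughly the chi-square critical value for 255 degrees of
    freedom at p = 0.01 (an assumed, illustrative threshold)."""
    return len(data) % 512 == 0 and chi_square_stat(data) < critical

# Perfectly uniform bytes pass both tests; plain text fails the
# randomness test despite having a 512-aligned size.
print(looks_encrypted(bytes(range(256)) * 4))  # 1024 uniform bytes -> True
print(looks_encrypted(b'a' * 1024))            # highly non-random -> False
```

Note that ordinary compressed or already-encrypted files can also look random, which is exactly why this only yields a high *probability*, not proof.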

So what does this all mean? Really nothing for us normal people. For those for whom I have built custom STSADM containers for securing backups and exports, your data is still secure and will stay that way indefinitely. For those running across the border: a forensic analysis will reveal the presence of encrypted data, TrueCrypt volumes or otherwise, but not much more. Sometimes that’s enough to start asking questions or poking further. With the forensic tools, not the dentistry kit.

* A skewed distribution whose shape depends on the number of degrees of freedom. As the number of degrees of freedom increases, the distribution becomes more symmetrical.

http://www.truecrypt.org/
http://16systems.com/TCHunt/

ZFS project

I recently began a storage project at home. Basically I intend to build a central NAS based on FreeNAS formatted with ZFS.

If you’re not aware of it (and shame on you if you indeed are not), in 2004 Sun released the Zettabyte File System under the Common Development and Distribution License (CDDL) as a means to bring advanced features like filesystem/volume management integration, snapshots, and automated repair to its storage platforms. Since then, it has been fully integrated into OpenSolaris and Solaris 10, FreeBSD 7, and others. (Though I would steer clear of anything FUSE related for now…)

The challenge I have been facing is how to get the performance levels that are supposedly possible out of it. I have done the math on my hardware and know my goals. Getting there should be “fun.”

So far these links have been helpful.

http://wiki.freebsd.org/ZFS
http://www.freebsdnews.net/2009/01/21/setting-freebsd-zfs-only-system/
http://blogs.freebsdish.org/lulf/2008/12/16/setting-up-a-zfs-only-system/

http://www.freenas.org/

I have an old server at home that I wanted to use as a NAS (2 TB) device. For basic home NAS use I have found FreeNAS to be satisfactory. It is limited, but it has some really nice features, such as support for Dynamic DNS, which periodically updates DNS servers with the NAS’ IP address. That is a great feature for small home office users on normal residential internet service, which usually comes with a dynamic IP address…
If you have not heard of it, FreeNAS is a free NAS (Network-Attached Storage) server supporting the CIFS (Samba), FTP, NFS, AFP, RSYNC, and iSCSI protocols, S.M.A.R.T., local user authentication, and software RAID (0, 1, 5), with a full web configuration interface. FreeNAS takes less than 32 MB once installed on Compact Flash, a hard drive, or a USB key. The minimal FreeBSD distribution, web interface, PHP scripts, and documentation are based on M0n0wall. (Which is another solid solution…)

Trove info

Underlying operating system: FreeBSD 6.4
Hardware requirements: 128MB of RAM minimum
Language: Mostly PHP
License: Modified BSD