SQL 2008 R2: ‘Agent XPs’ component is turned off as part of the security configuration of this server.

To correct this, open a new query window and run the following:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE;
GO

After you run it, you should expect to see the following in the Messages window:

Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'Agent XPs' changed from 0 to 1. Run the RECONFIGURE statement to install.

If you want to disable either option, just replace the 1 with a 0.
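To confirm the change took effect, a quick sketch against sys.configurations shows both the configured and running values of the options involved:

```sql
-- Current configured and running values for the two options changed above.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('show advanced options', 'Agent XPs');
```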

SharePoint: Multiple databases for 1 Web Application

This process assumes that the SharePoint environment is in a very specific state, namely that:
• All databases other than the one currently being used for new My Sites are in an offline state, and there is only ever one Web Application available.
• There is a defined business need for a series of smaller databases rather than one large and potentially unwieldy one.

When a database in SharePoint is in an offline state it is not inaccessible; rather, it is no longer viewed as a tenable location by the process that creates new site collections. Instead, SharePoint looks at the delta between the current number of sites and the configured maximum in all other online Content Databases, and selects the one with the greatest value.

To set a content database status to offline:
1. Go to “Central Administration > Application Management > Content Databases”
2. Then click on the database of your choice and set it to “Offline”
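The same change can also be scripted. As a sketch from the SharePoint 2010 Management Shell (the database name below is a placeholder, and note that PowerShell calls the offline state “Disabled”):

```powershell
# Take a content database out of the rotation for new site collections.
# "WSS_Content_MySites2" is a hypothetical database name.
Get-SPContentDatabase -Identity "WSS_Content_MySites2" |
    Set-SPContentDatabase -Status Disabled
```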

To create a new database for every new site collection, the following steps need to be followed.
1. Create a new content database.
a. Start the SharePoint Central Administration Web site.
b. On the Application Management page, in the SharePoint Web Application Management section, click Content databases.
c. On the Manage Content Databases page, click Add a content database.
d. On the Add Content Database page:
e. Select a Web application for the new database.
f. Select a database server to host the new database.
g. Specify the authentication method the new database will use and supply an account name and password if necessary.
h. Specify both the total number of top-level sites that can be created in the database and the number at which a warning will be issued.
i. Click OK.
2. Create the new Site Collection.
3. Set the newly created content database status to Offline.
4. View the contents of the new content database to confirm that it only contains what it should. (In theory a new My Site could be in there.)

It is important to confirm that this new database is appropriately accounted for by backup and DBA maintenance routines.

If a site collection was errantly created during the period that the new content database was online, it can be moved as follows:

1. Run the following stsadm command to get a list of all the sites in the web application:

stsadm -o enumsites -url <URL> > <path/file name>.xml

Where:
• <URL> is the address of the Web application that contains the site collection that you want to move, and
• <path/file name> is the name of the XML file that you want to create with the site collection data.

(Example: stsadm -o enumsites -url http://localhost > c:\Sites.XML)

Open the XML file that you created in a text editing application. If there are any URLs for site collections that you do not want to move, be sure to delete them from the file. The only URLs that should remain in the XML file should be for the site collections that you want to move.

Note: There is no need to change the site count or any of the other site collection information in the file. Only the URLs are relevant to this procedure.
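If the enumsites file is large, trimming it by hand gets tedious. As a sketch, a small script can keep only the URLs you want to move; it assumes the output is a <Sites> root whose <Site> children carry a Url attribute (check your own file, as attribute names may vary by version):

```python
import xml.etree.ElementTree as ET

def filter_sites(xml_text, keep_urls):
    """Keep only the <Site> entries whose Url is in keep_urls."""
    root = ET.fromstring(xml_text)
    for site in list(root):
        if site.get("Url") not in keep_urls:
            root.remove(site)
    # enumsites writes a Count attribute; keep it consistent after pruning.
    root.set("Count", str(len(root)))
    return ET.tostring(root, encoding="unicode")

# Hypothetical sample in the shape enumsites produces.
sample = (
    '<Sites Count="2">'
    '<Site Url="http://localhost/sites/keepme" ContentDatabase="WSS_Content" />'
    '<Site Url="http://localhost/sites/dropme" ContentDatabase="WSS_Content" />'
    '</Sites>'
)
print(filter_sites(sample, {"http://localhost/sites/keepme"}))
```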

2. Run the following stsadm command:

stsadm -o mergecontentdbs -url <URL> -sourcedatabasename <database name> -destinationdatabasename <destination database name> -operation 3 -filename <file name>

Where:
• <URL> is the address of the Web application that contains the site collection that you want to move;
• <database name> is the name of the database that you want to move the site collection from; and
• <destination database name> is the name of the database that you want to move the site collection to;
• operation 3 is the “Read from file” operation; and
• <file name> is the name of the file that you created in step 1.

(Example: stsadm -o mergecontentdbs -url http://localhost -sourcedatabasename WSS_Content -destinationdatabasename WSS_Content2 -operation 3 -filename c:\Sites.xml)

Note: This step assumes that all of the sites in the Sites.xml file were in the source database. If the sites are located in multiple databases you may need to repeat these steps for each source Content Database.

3. Restart IIS by typing the following command, and then pressing ENTER: iisreset /noforce.

Troubleshooting problems encountered with the SharePoint 2010 Configuration Wizard

For something with so few options, the SharePoint 2010 Configuration Wizard should just work. Sadly it does not always do what it is supposed to do. When it does not, your options are pretty much limited to common sense, the event logs, and the Configuration Wizard’s own logs, which are found in the LOGS folder under C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14.

They’re not perfect but they are normally sufficient to get you started down the right path if you have a really obscure issue.

For example, let’s say that you are having some kind of connectivity issue at the “Specify Configuration Database Settings” screen after opting to create a new farm. Inside the log file you can find the parameters being set, such as the user in the standard DOMAIN\username format and their password as a series of *****, followed by mention of the functions as they are stepped through by the installer. Very helpful if you are trying to figure out where the break is occurring!

A point to take away is that knowing the local time at which you ran through the steps can be very helpful, since it makes finding the relevant log entries much easier.

SharePoint 2010 Optimization: Start with SQL

If you need more than 4 GB of storage for your SharePoint 2010 environment and BLOB linking is not for you, working with SQL Server is on your plate. 2010 has a lot of databases, each and every one of which has a role and performance profile. Knowing these can help you squeeze that extra bit of performance out of your environment.

A typical SharePoint 2010 environment, or maybe not so typical depending on your individual needs, budget, etc., has the potential to look like this. If yours does not, and almost certainly it does not 🙂 , I’m sure you can see similarities.

Viewing this in a compartmentalized fashion, skewed towards where you can gain direct and indirect performance improvements, you have roughly three major sections.

A: The database
B: The WFEs
C: The UI / General payload

I’m going to focus on the databases, what they are, and what they do. With knowledge comes power as they say.

So starting with the basics…


System
A normal installation of SQL drops a couple of databases on the disk.
Expect to see:

  1. MASTER: Records all system level information for a SQL Server instance. E.g. logins, configuration, and other databases.
  2. MSDB: Records operators and is used by SQL Server Agent to schedule jobs and alerts.
  3. MODEL: Is used as the template for all the databases created in the instance.
  4. TEMPDB: All temporary tables, temporary stored procedures, and anything else that is ‘temporary’ is stored here. Like a cache this is recreated every time the SQL Server instance is started. 

Notes:
  • Of these four, only the last one TEMPDB really should be a candidate for its own spindle. 
  • Size wise, TEMPDB is the only one that you should expect any type of growth from.
  • All four scale vertically, as in you can only allow them to grow in size rather than add in extra databases.
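Relocating TEMPDB to that dedicated spindle is a one-time ALTER DATABASE change; as a sketch (the drive path is a placeholder, and the move takes effect the next time the instance restarts):

```sql
-- tempdev and templog are the default logical file names for TEMPDB.
-- 'T:\TempDB\' is a hypothetical dedicated-spindle path.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
```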
Reporting Services
Reporting Services is often used with SharePoint 2010’s Excel, Visio, and PerformancePoint Services but is not required. Access Services does require it, specifically the SQL Server 2008 R2 Reporting Services Add-in.
Expect to see:
  1. ReportServer: This stores all the metadata for reports such as definitions, history, scheduling, and snapshots. When ReportServer is actually in use the report documents are stored in SharePoint Content databases.
  2. ReportServerTempDB: When reports are running temporary snapshots are found here.

Notes:
  • Both of these databases must coexist on the same database server. 
  • They have a heavy read load.
  • The ReportServerTempDB can grow if there is heavy use of cached snapshots. Just like the System databases, both scale vertically, as in you can only allow them to grow in size rather than add in extra databases.
Overview 1
Okay, so now we have a bunch of databases sitting on our server making it look a tad like this:
Now let’s start to add in SharePoint 2010…
Microsoft SharePoint Foundation 2010
Microsoft SharePoint Foundation 2010 drops a number of databases to your disks. (If you install Search Server 2010 Express you will see some of these databases. I have put a * next to the name if it is a factor in play for that kind of scenario.)
Expect to see:

  1. Configuration*: As its name implies, this is where information about the environment is stored. Databases, IIS sites, Web Applications, Trusted Solutions, Web Part Packages, site templates, blocked file types, quotas: they’re all in here. There can only be one configuration database per farm and you can expect it to remain relatively small.
  2. Central Administration Content*: Just like the configuration database, there can only be one, it does not really grow much, and as its name implies it stores the Central Administration site’s content.
  3. Content*: All site content is stored in these guys. Site pages, documents, lists, web part properties, audit logs, user names and rights, they’re all in here. Office Web Application data is also stored in here.
  4. Usage*: The Usage and Health Data Collection Service Application uses this database. This write heavy database is unique in that it is the only database that can be (and should be) queried directly by external applications. There can only be one per farm. Health monitoring and usage data used for reporting and diagnostics is in here.
  5. Business Data Connectivity*: External Content Types and related objects are in here. It is typically a read heavy yet small sized database. 
  6. Application Registry*: You really only see this if backward compatibility with the BDC API is needed. Typically this can be deleted AFTER upgrading is complete.
  7. Subscription Settings: Features and settings for hosted customers get put in here. It is not something you normally see as it has to be manually created using PowerShell or SQL Server. If it is present you can expect it to remain small and busy only with reads.
Note:

  • Content databases can hold multiple site collections and their content. 
  • They can get BIG. Their size is dependent on size and count of both content and users.
  • 200GB should be your ceiling. Odds are you will see a performance hit as you pass that number. 1TB or more is an option, but only for Records Centers and the like, i.e. non-collaborative, read-only usage and data retrieval. Add more content databases as you reach that 200GB level.
  • Due to its nature it is wise to move the Usage database to a separate spindle.
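To keep an eye on that 200GB ceiling, a quick sketch against sys.master_files reports each database’s allocated size:

```sql
-- size is counted in 8KB pages; convert to MB.
SELECT DB_NAME(database_id) AS database_name,
       SUM(size) * 8 / 1024 AS size_mb
FROM sys.master_files
GROUP BY database_id
ORDER BY size_mb DESC;
```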
Overview 2
Okay, so now we have Foundation running with a bunch more databases sitting on our server making it look a tad like this:

Right, so now let’s move on to the Standard edition.

Microsoft SharePoint Server 2010 Standard
Compared directly to Foundation, Microsoft SharePoint Server 2010 Standard drops a number of extra databases to your disks. (If you install Search Server 2010 Express you will see some of these databases. I have put a * next to the name if it is a factor in play for that kind of scenario.)
Expect to see:
  1. Search Administration*: Used by the Search application the ACL and configuration information is in here. Growth can be vertical or horizontal – if you create new instances of the service application. 
  2. Crawl*: Again used by the Search application, the state of the crawled content/data and the historical crawl records are in here. This database can get a bit big and if you are crawling LARGE amounts of data it is a good idea to not have it on the same server as the next database, Property. It is also a good idea to be using the Enterprise version of SQL so that you can leverage the data compression functionality. This is a read heavy database.
  3. Property*: Again used by the Search application associated data to the crawls is stored in here such as history, crawl queues, and general properties. If you have large volumes of data, move this to its own server and you will see performance improvements with regard to search query result returns. This is a write heavy database.
  4. Web Analytics Reporting*: Used by the Web Analytics application, aggregated report tables, fact data grouped by site – date – asset, diagnostic information can be found in here. Depending on your policies this database can get massive. As it grows scale out by adding more databases. 
  5. Web Analytics Staging*: Temporary data (unaggregated) is in here. 
  6. State*: Used by the State Service application, InfoPath Forms Services, and Visio Services. Temporary state information for Exchange is also in here. More State databases can be added via PowerShell. 
  7. Profile: Used by the User Profile service application, users and their information is found and managed in here. Additional databases can be created if you create additional service instances. Expect lots of reads and medium growth.
  8. Synchronization: Again, used by the User Profile service application. It contains configuration and staging data for use when a directory service such as AD is being synched with. Scaling up or out is the same as with Profile. Its size can be influenced by not just the number of users and groups but their ratios as well. Expect fairly balanced read vs write activity.
  9. Social Tagging: Again, used by the User Profile service application. As users create social tags and notes they are stored, along with their URLs in here. Expect growth to be representative of how much your community embraces the functionality of social tagging. Scaling up or out is the same as with Profile. This is definitely more read than write heavy. 
  10. Managed Metadata Service: Syndicated content types and managed metadata is stored in here. Growth can be vertical or horizontal – if you create new instances of the Managed Metadata service application. 
  11. Secure Store*: Account names, passwords, and their mappings are stored in here. This is where you may have to collaborate with other internal groups, as it is suggested that this database be hosted on a separate database instance with limited access. Growth can be vertical or horizontal – if you create new instances of the service application.

Note:
  • Watch the Web Analytics Reporting database. It can get very big.
  • Use compression for Crawl if you have the Enterprise version of SQL.
Overview 3
Okay, so now we have SharePoint Server 2010 Standard edition running with a bunch more databases sitting on our server making it look a tad like this:

Right, so now let’s move on to the Enterprise edition…

Microsoft SharePoint Server 2010 Enterprise
Microsoft SharePoint Server 2010 Enterprise drops extra databases on top of the Standard list to handle Word and PerformancePoint.
Expect to see:
  1. Word Automation Services: Used by the Word Automation service application information about impending and completed document conversions is to be found in here. This database can be expected to remain relatively small.
  2. PerformancePoint: Used by the PerformancePoint service application; settings, persisted user comments, and temporary objects are stored in here. It can be expected to be read heavy and can be scaled up or out.
Overview 4
Okay, so now we have SharePoint Server 2010 Enterprise edition running on our server making it look a tad like this:

Where else can we go from here? Well, there’s always Project Server, PowerPivot, and FAST… All are complicated products that require SharePoint Server 2010 Enterprise.

Microsoft Project Server 2010

Expect to see:

  1. Draft: Not accessible to end users, this is used to store data used by the project queue. There can only be one as far as I know.
  2. Published: When a project is published this is where it lives. Timesheets, resources, custom fields, and all the other metadata is stored in here. This database can get large.
  3. Archive: Backup data of projects, resources, calendars, etc. is in here. Growth can be limited, if it is not and is used it will get large.
  4. Reporting: This is the repository of the entire project portfolio. When you publish a project plan,  a copy will be put in here. Read heavy and large…





Microsoft FAST Search Server 2010 for SharePoint

Expect to see:

  1. Search Administration: Stores and manages search setting groups, keywords, synonyms, promotion/demotions, best bets, and search schema metadata. It should stay on the small side and remain read heavy. 




Microsoft SQL Server PowerPivot for SharePoint

Expect to see:

  1. PowerPivot Service Application: Cached and loaded PowerPivot files are in here along with usage data and schedules. You can expect this database to remain small.  
Note:
  • PowerPivot files can be rather big and PowerPivot stores data in content databases as well as the central administration database. 
So where does that leave us? If you have everything above installed you should expect to see something like this…
Looking at these databases and what could or should be candidates for having their own platters, you should start to see something like this:

Note that the Secure Store should be moved primarily for security purposes.


It has to be mentioned as well that each and every environment can prove to be unique, so you may not see value in moving anything other than your content databases to another platter.

So what does this look like in the wild? Viewing a screenshot of an admittedly somewhat duplicated environment, you can see these databases as they would appear in real life…

Their file system looks like this:

Clearly there is job security for DBAs going forward :). There is also a level of complexity that merits caution and planning when considering SharePoint 2010. The phrase “a stitch in time saves nine” continues to hold value…

Moving large numbers of records with SqlBulkCopy

Sometimes you have to move large amounts of data in, out of, or around SQL Server. There are plenty of ways to do that, but few are as fast as SqlBulkCopy. It is not just fast, it is blazingly fast. As with anything else it has its place and limitations, but it is well worth understanding at the least.
It is pretty easy to work with, as the below example shows…
static void CopyData(DataTable sourceTable, SqlConnection destConnection)
{
    // SqlBulkCopy streams the DataTable's rows straight into the destination table.
    using (SqlBulkCopy s = new SqlBulkCopy(destConnection))
    {
        s.DestinationTableName = "SalesCopy";
        s.NotifyAfter = 10000;  // raise SqlRowsCopied every 10,000 rows
        s.SqlRowsCopied += new SqlRowsCopiedEventHandler(s_SqlRowsCopied);
        s.WriteToServer(sourceTable);
    }   // the using block disposes (and closes) the SqlBulkCopy
}

When used properly it can make seemingly large data volume transactions trivial.
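A few SqlBulkCopy members are worth knowing when tuning large loads: BatchSize, BulkCopyTimeout, and the SqlBulkCopyOptions flags. A sketch (the table name is again hypothetical):

```csharp
// TableLock takes a bulk update lock for faster inserts; KeepIdentity
// preserves source identity values instead of generating new ones.
using (SqlBulkCopy s = new SqlBulkCopy(destConnection,
    SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.KeepIdentity, null))
{
    s.DestinationTableName = "SalesCopy";
    s.BatchSize = 10000;      // commit every 10,000 rows instead of one big transaction
    s.BulkCopyTimeout = 600;  // seconds; the 30-second default is easy to exceed
    s.WriteToServer(sourceTable);
}
```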

Visual Studio 2010 problem with Vault

Working with Visual Studio 2010 and SQL Server 2008 I got this message out of the blue:
Unable to cast COM object of type 'System.__ComObject' to interface type 'Microsoft.VisualStudio.OLE.Interop.IServiceProvider'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{6D5140C1-7436-11CE-8034-00AA006009FA}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

Whoah… After a reboot, some thought, a few more attempts to build, and a bit of luck, I deduced with reasonable certainty that Vault, which I use for version control, was the culprit.
So I closed Studio 2010 and removed Vault. Then I opened the same solution and did a build with no problems. Next step is to figure out what to do about getting Vault back into play. Or not.

Searching an entire database for a Guid or Unique Identifier

Searching an entire database for a Guid or Unique Identifier can be a bit of a tricky proposition. However a little bit of T-SQL can help…

DECLARE @GUIDHunted nvarchar(60)
SET @GUIDHunted = '0A24EC0C-65EE-4519-89DF-ABD3DD24F7EF'

SELECT *, 'UNION ALL SELECT ''' + s.name + '.' + ao.name + ''', count(*) FROM '
    + s.name + '.[' + ao.name + '] WHERE ' + ac.name + ' = ''' + @GUIDHunted + ''''
FROM sys.all_columns ac
JOIN sys.all_objects ao
    ON ac.[object_id] = ao.[object_id]
JOIN sys.schemas s
    ON ao.[schema_id] = s.[schema_id]
WHERE ac.user_type_id = 36 -- UniqueIdentifier
    AND s.name != 'sys'

Here is the output result for a fictional database (just copy the results into a new query window, delete the first UNION ALL and execute).

UNION ALL SELECT 'myschema.Objects', count(*) FROM myschema.Objects WHERE ObjectID = '0A24EC0C-65EE-4519-89DF-ABD3DD24F7EF'

UNION ALL SELECT 'myschema.Objects2', count(*) FROM myschema.Objects2 WHERE ObjectID = '0A24EC0C-65EE-4519-89DF-ABD3DD24F7EF'

This assumes that programmers weren’t storing GUIDs in nvarchar columns and that they didn’t create other user-defined types based on uniqueidentifier.

SQL Server 2008 R2 released to manufacturing

Microsoft’s flagship database SQL Server 2008 R2 refresh was completed and released to manufacturing today. With improved scalability, support for up to 256 logical processors, and simplified centralized management of multiserver deployments, it should be a boon to many. The BI side alone should make its adoption rate outpace that of previous versions. Master Data Services, richer reports, and improved analysis features all combine to make a solid product even better.

Personally I look forward to testing out StreamInsight with real time log analysis from SharePoint 2010 farms and RFID deployments.