SQL 2008 R2: Agent XPs component is turned off as part of the security configuration of this server.

To correct this, open a new query window and run the following:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE;
GO

After you run it, you should see the following in the messages window:

Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'Agent XPs' changed from 0 to 1. Run the RECONFIGURE statement to install.

If you want to disable either option, just replace the 1 with a 0.
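For example, a sketch of turning Agent XPs back off again (run in the same query window):

```sql
sp_configure 'Agent XPs', 0;
GO
RECONFIGURE;
GO
```

The RECONFIGURE statement is what actually applies the changed value; without it the setting only sits in the pending configuration.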

Update on ntfs-3g

Back in December I posted about setting up direct read and write access to an NTFS drive from 10.5. It all seemed to be working okay until last week, when I had to move a couple of VHD files, close to 500 GB, from a Mac running 10.5 across the wire to a Windows-based NAS. Good grief is all I can say about the sustained performance. It took days to complete. More like a week, to be honest… Why, I do not yet know, but there is definitely something “up” with either the Mac or the driver, as the NAS is fine.

ZFS project

I recently began a storage project at home. Basically I intend to build a central NAS based on FreeNAS formatted with ZFS.

If you’re not aware of it (and shame on you if you indeed are not), Sun released the Zettabyte File System in 2004 under the Common Development and Distribution License (CDDL) as a means to bring advanced features like filesystem/volume management integration, snapshots, and automated repair to its storage systems and platforms. Since then, it has been fully integrated into OpenSolaris and Solaris 10, FreeBSD 7, and others. (Though I would steer clear of anything FUSE-related for now…)

The challenge I have been facing is how to get the performance levels that are supposedly possible out of it. I have done the math on my hardware and I know my goals; getting there should be “fun.”

So far these links have been helpful.


SharePoint Disaster Recovery: A moment

Disk space is cheap. We all hear it and see it, but plenty of you out there seem to ignore this fact. Yes, there can be a cost associated with maintaining the extra volumes in your data plan, but does there really have to be?

Let’s face it, the average hard disk has a stated MTBF that is just ridiculous. Oft misinterpreted, and more generally misunderstood, the numbers range upward of 50+ years. They are sourced roughly with the following logic: if a drive has an MTBF rating of 300,000 hours and a service life of 5 years, then a group of these drives should collectively provide 300,000 hours of service before one fails. Needless to say, the unknown unknowns can interfere…

The key point here is that a drive as a standalone device is supposed to be, and typically is, rock solid and reliable. Paired with a drive of equal properties from a different manufacturer, or if from the same one, from a different production batch, your odds of failure are reduced even further. Right now an external 1 TB drive with USB or FireWire will run you less than $150. Buy two and you’re still under $300. Total cost for electricity, ~$50 a year? That’s cheap.
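The arithmetic above can be sketched out quickly (the 300,000-hour rating is the example figure from the paragraph, not a spec for any particular drive):

```python
# Back-of-envelope MTBF arithmetic for a drive rated at 300,000 hours.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Rough AFR: the fraction of a large fleet expected to fail per year."""
    return HOURS_PER_YEAR / mtbf_hours

def hours_to_first_failure(mtbf_hours: float, drive_count: int) -> float:
    """With N independent drives, expect the first failure in ~MTBF / N hours."""
    return mtbf_hours / drive_count

print(f"MTBF in years: {300_000 / HOURS_PER_YEAR:.1f}")     # ~34 years
print(f"AFR per drive: {annualized_failure_rate(300_000):.1%}")  # ~2.9%
```

Note that 300,000 hours is “only” about 34 years; the 50+ year figures come from the higher ratings (500,000 hours and up) some vendors quote. Either way, the per-drive annual failure rate is in the low single digits, which is why a cheap second copy on an independent drive buys so much safety.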

Now why don’t people just hook one of these up to a server (networked would be a bonus) and add it in as an additional backup location? Some do, but they are the exception, not the norm. More than once, though sometimes it took some “cajoling”, clients of mine have seen the merits of extra, cheap storage that STSADM can dump data onto securely, to be retrieved quickly and easily. I’m a firm believer that the more baskets you have, the fewer broken eggs you get.
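As a sketch, a scheduled site-collection backup to one of those cheap drives looks something like this (the URL and UNC path are placeholders for your own environment):

```
REM Dump a site collection onto the spare networked drive.
REM http://portal and \\backupnas\sharepoint are hypothetical names.
stsadm -o backup -url http://portal -filename \\backupnas\sharepoint\portal.bak -overwrite
```

Drop that in a Scheduled Task and the second basket fills itself.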

Needless to say you can secure these drives with something like this…