SharePoint – ‘Modified By’ does not get updated for an item

An issue was reported to me relating to a site collection where the 'Modified By' information for items in its lists would not update, leaving the name of the person who originally uploaded the file as the 'Modified By' value. This was consistent across every list within one particular site collection; all other site collections were working as expected.

After searching for a solution, I discovered that the issue I was having was the same one described by Marc D Anderson and Victor Butuza in the posts below.

Marc D Anderson – Item ‘Modified By’ Value Doesn’t change: Fixing a Damaged Column Schema

Victor Butuza – Modified By Column does not get updated

To confirm I was having the same problem described in the articles above, I used SharePoint Manager 2010 to inspect the schema.xml of the site collection and of an affected list. Please err on the side of caution when using SharePoint Manager, as you don't want to change properties with this tool without knowing exactly what you are doing. Use it against a copy/backup of the web application you are diagnosing.

What the schema should look like:

<Field ID="{d31655d1-1d5b-4511-95a1-7a09e9b75bf2}" ColName="tp_Editor" RowOrdinal="0" ReadOnly="TRUE" Type="User" List="UserInfo" Name="Editor" DisplayName="Modified By" SourceID="http://schemas.microsoft.com/sharepoint/v3" StaticName="Editor" FromBaseType="TRUE" />

What my schema.xml actually looked like:

<Field ID="{d31655d1-1d5b-4511-95a1-7a09e9b75bf2}" Name="Editor" SourceID="http://schemas.microsoft.com/sharepoint/v3" StaticName="Editor" Group="_Hidden" ColName="tp_Editor" RowOrdinal="0" Type="User" List="UserInfo" DisplayName="Modified By" ReadOnly="TRUE" Version="1" />

As you can see from the above, FromBaseType="TRUE" was missing from my schema.xml. Next, I used the excellent PowerShell scripts from Marc D Anderson's blog to diagnose how many lists were affected.
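
Note: if you run any of the scripts in this post from a plain PowerShell console rather than the SharePoint 2010 Management Shell, load the SharePoint snap-in first:

# Load the SharePoint cmdlets if they are not already available
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue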

  1. Diagnose how many Lists are affected:
$siteURL = "http://siteurl"
$site = Get-SPSite($siteURL)
$errors = 0
$thisWebNum = 0
foreach($web in $site.AllWebs) {
$thisWebNum = $thisWebNum + 1
write-host $thisWebNum " Web " $web.Url  " Created on "  $web.Created
$listCounter = $web.Lists.Count
for($i=0;$i -lt $listCounter;$i++) {
$list = $web.Lists[$i]
$thisListNum = $i + 1
write-host "(" $thisListNum "/" $listCounter ") [" $list.Title "] Created on "  $list.Created
$f = $list.Fields.GetFieldByInternalName("Editor")
if ($f.SchemaXML -NotMatch 'FromBaseType="TRUE"')
{
$errors = $errors + 1
write-host "  Issue in schema " $f.schemaxml
}
}
$web.Dispose();
}
$site.Dispose();
write-host "TOTAL ISSUES: " $errors
  2. Fix one Document Library to test the fix

Once you have identified the document libraries that are affected, you will need to update each library's schema.xml to add FromBaseType="TRUE". Doing this manually is an arduous task, so PowerShell is your friend here. Try it on one document library first and test it. This is explained in much more detail on Marc D Anderson's blog, so I won't go over the details, but you will need to update the web application URL, site name and list name before you run it.

IMPORTANT: Before you do any work on the lists, it is important to have a backup of your content databases and to run the fix on a copy of your web application so you are happy with the process. Then schedule downtime to put the fix in place on production. This is standard operating procedure, but it still has to be mentioned.

$s = get-spsite http://webappurl
$w = $s.OpenWeb("SiteName")
$l = $w.Lists["ListName"]
$f = $l.Fields.GetFieldByInternalName("Editor")
write-host "BEFORE field at " $w.Url  " List "  $l.Title  "  is " $f.schemaxml
#add at the end of the schema the needed string and update the field and list
$f.SchemaXML = $f.SchemaXML -replace '/>',' FromBaseType="TRUE" />'
$f.Update()
$l.Update()
write-host "FIXED field at " $w.Url  " List "  $l.Title  "  is " $f.schemaxml
$w.Dispose();
$s.Dispose();
  3. Update all Lists with the Fix

Now that you have tested the fix, you need to apply it to all of your lists. Run the PowerShell below against your site collection:

$siteURL = "http://siteurl"
$site = Get-SPSite($siteURL)
$errors = 0
$thisWebNum = 0
foreach($web in $site.AllWebs) {
$thisWebNum = $thisWebNum + 1
$listCounter = $web.Lists.Count
for($i=0;$i -lt $listCounter;$i++) {
$list = $web.Lists[$i]
$thisListNum = $i + 1
$f = $list.Fields.GetFieldByInternalName("Editor")
if ($f.SchemaXML -NotMatch 'FromBaseType="TRUE"')
{
$errors = $errors + 1
# fix the schema and update the field and list
$f.SchemaXML = $f.SchemaXML -replace '/>',' FromBaseType="TRUE" />'
$f.Update()
$list.Update()
write-host "FIXED field at " $w.Url  " List "  $l.Title  "  is " $f.schemaxml
}
if ($errors -gt 0)
{
write-host $thisWebNum " Web " $web.Url  " Created on "  $web.Created  " had " $errors " errors"
}
$errors = 0;
$web.Dispose();
}
$site.Dispose();
}

You can re-run the diagnostic script from step 1 to check whether the fix has updated the schema.xml for all the lists.

  4. Test to Ensure New Document Libraries don't have the same issue

A very good test is to check that when you create a new document library in the same site collection, the original issue does not re-appear for the new list. If it does, it is likely to be an issue with the site collection's schema.xml, which will need to be updated separately.
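
If you would rather script that check, the sketch below creates a throw-away document library and inspects its Editor field. The library name 'SchemaTest' is just a placeholder, and the library is deleted again at the end.

# Minimal sketch: create a temporary document library and check its Editor field schema.
# "SchemaTest" is a placeholder name; the library is removed again at the end.
$s = Get-SPSite "http://siteurl"
$w = $s.RootWeb
$id = $w.Lists.Add("SchemaTest", "Temporary schema check", [Microsoft.SharePoint.SPListTemplateType]::DocumentLibrary)
$l = $w.Lists[$id]
$f = $l.Fields.GetFieldByInternalName("Editor")
if ($f.SchemaXML -NotMatch 'FromBaseType="TRUE"') {
    write-host "New library is still missing FromBaseType - the site collection schema needs fixing"
} else {
    write-host "New library looks healthy"
}
$l.Delete()
$w.Dispose()
$s.Dispose()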

To update the schema.xml for the site collection, I found it easier to use SharePoint Manager 2010: navigate to your troublesome site collection, go to 'Fields', find 'Modified By' (it will be the second of the two entries) and add FromBaseType="TRUE" to the schema.xml.
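
If you prefer to stay in PowerShell, you can also inspect (and, if necessary, patch) the site-level field from the root web. This is only a sketch and assumes the site column is exposed through the root web's Fields collection, so test it on a copy first, exactly as with the list-level fix.

# Sketch only: check/fix the site-level 'Editor' field via the root web's Fields collection.
# Assumption: the site column is reachable with GetFieldByInternalName("Editor") on the root web.
$s = Get-SPSite "http://siteurl"
$w = $s.RootWeb
$f = $w.Fields.GetFieldByInternalName("Editor")
write-host "BEFORE: " $f.SchemaXML
if ($f.SchemaXML -NotMatch 'FromBaseType="TRUE"') {
    $f.SchemaXML = $f.SchemaXML -replace '/>',' FromBaseType="TRUE" />'
    $f.Update()
    write-host "AFTER:  " $f.SchemaXML
}
$w.Dispose()
$s.Dispose()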

SharePoint Manager 2010 – https://social.technet.microsoft.com/wiki/contents/articles/8553.sharepoint-2010-manager.aspx

Download Link – http://spm.codeplex.com/releases/view/51438

I hope this helps people who had the same issue.

FAST Search – Repair or Rebuild your Corrupt Index from FIXML?

The FAST Search environment is a robust search solution for your SharePoint environment, and once it is configured correctly and optimised it will purr away in the background surfacing rich search results for your users. However, on the rare occasion that your index gets corrupted, you as a SharePoint administrator will need to be aware of the tools and methods you can use to get it working again. If nothing else, this should be part of your recovery procedure for your SharePoint environment.

So what options do you have when your index gets corrupted? Firstly, you could completely delete the index from SharePoint in Search Administration and via PowerShell on your FAST Search admin server. Or, if your index would take too long to re-build and that would break your SLA, you can recover the index using the FIXML. When FAST Search for SharePoint indexes items, it not only stores the physical index itself in '%drive%\FASTSearch\data\data_index', it also stores each indexed item as FIXML in '%drive%\FASTSearch\data\data_fixml'. The FIXML contains all of the information that becomes that item in the index.
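
Since any repair or rebuild relies entirely on the FIXML being intact, it is worth sanity-checking that it is actually on disk (and how big it is) before you start. A minimal sketch, with the drive letter as a placeholder:

# Minimal sketch: report how much FIXML data is on disk.
# "D:" is a placeholder for the drive hosting your FASTSearch installation.
$fixml = "D:\FASTSearch\data\data_fixml"
$bytes = (Get-ChildItem $fixml -Recurse -Force | Where-Object { -not $_.PSIsContainer } | Measure-Object -Property Length -Sum).Sum
write-host "FIXML size: " ([math]::Round($bytes / 1GB, 2)) "GB"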

Now that we know we can use the FIXML, there are two options available to you that are detailed below.

Option A – Repair a corrupt Index from FIXML

This process can be performed from any FS4SP server and on any column or row. The index reset does not rebuild the column from scratch; instead, the indexer validates each item within the FS4SP column against the original FIXML. Any item that is out of sync or corrupt is updated in the column/index.

  1. Ensure all crawls are stopped in Search Administration and the FS4SP column is idle. Use the command below, run from a FAST Search shell, to show the status of FAST Search.

indexerinfo --row=0 --column=0 status

  2. Stop the web analyser and relevancy admin processes.

waadmin abortprocessing
spreladmin abortprocessing

  3. Issue an index reset. You can re-run the second command to monitor the status, or check the indexer.txt file in the logs directory (FASTSearch\var\log\indexer).

indexeradmin --row=0 --column=0 resetindex
indexeradmin --row=0 --column=0 status

  4. Once the repair is complete, you can then resume the web analyser and relevancy admin.

waadmin enqueueview
spreladmin enqueue

Your FAST Search Index will now be repaired and will be operational.

Option B – Rebuild an Index from FIXML

OK, so you attempted Option A, it didn't resolve your issue, and your index is still corrupted. The next step is to rebuild the index from your FIXML. Rebuilding the index requires a lot more disk space than Option A, as temporary files are created and released (within the FASTSearch directory), which means you are likely to consume roughly twice as much disk space. If free disk space drops to 2GB the rebuild will fail, so you will need to keep an eye on disk usage throughout. Follow the steps below to complete this task.
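
Before you start, a minimal sketch for watching free space on the FAST Search drive while the rebuild runs (the drive letter and interval are placeholders):

# Minimal sketch: report free space on the FAST Search drive every 60 seconds.
# "D:" is a placeholder for whichever drive hosts the FASTSearch directory.
while ($true) {
    $disk = Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='D:'"
    $freeGB = [math]::Round($disk.FreeSpace / 1GB, 2)
    write-host (Get-Date) " Free space on D: " $freeGB "GB"
    if ($freeGB -lt 2) { write-host "WARNING: free space is below 2GB - the rebuild may fail" }
    Start-Sleep -Seconds 60
}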

  1. Ensure all crawls are stopped in Search Administration and the FS4SP column is idle. Use the command below, run from a FAST Search shell, to show the status of FAST Search, and keep a note of the "document size" entry and the number of indexed items.

indexerinfo --row=0 --column=0 status

  2. Stop the web analyser and relevancy admin processes.

waadmin abortprocessing
spreladmin abortprocessing

  3. Rebuild the primary index column. Run the command below from the primary indexer server to stop the FAST Search processes.

nctrl stop

  4. Delete the folder 'data_index' within the directory 'FASTSearch\data' and start the services again using the nctrl command. When you start the processes, the 'data_index' folder will be re-created and populated with a rebuild of the index.

nctrl start
indexerinfo --row=0 --column=0 status

Notice the "document size" entry and check that "indexed=0" is displayed; the index starts off empty and is re-fed from the FIXML. Keep re-running the status query until the number of indexed items returns to its original value, at which point the rebuild is complete.
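
If you would rather not re-run the command by hand, a small polling sketch (run from a FAST Search shell on the indexer server; the five-minute interval is arbitrary):

# Minimal sketch: poll the indexer status every five minutes so you can watch
# the indexed count climb back to its original value.
while ($true) {
    indexerinfo --row=0 --column=0 status
    Start-Sleep -Seconds 300
}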

  5. Once the rebuild is complete, you can then resume the web analyser and relevancy admin.

waadmin enqueueview
spreladmin enqueue

Option B is now in place and complete. This will bring your FAST Search Index back online and ready for use.

SP2010 April CU stops incoming emails to a list or calendar

If you have followed my earlier post "Setting Up Incoming Mail to a List or Calendar" then you will most likely have a working incoming email configuration for your SharePoint 2010 environment. However, the powers that be at Microsoft have inadvertently introduced a bug in the April 2012 CU that affects the incoming mail process.

The symptom of this bug is that your email will have passed successfully through the Exchange process and arrived in the drop folder on your SharePoint web front end servers as an '.EML' file, ready to be picked up by the timer job called 'Microsoft SharePoint Foundation Incoming mail'. However, once the April CU is applied, the .EML file remains in the drop folder and is never picked up by the timer job. This is a frustrating bug, and if you analyse the ULS logs a little deeper you are likely to see entries with an EventID of 6871.
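
If you want to confirm the diagnosis from PowerShell rather than trawling the log files by hand, the sketch below searches recent ULS entries; it assumes the failing entries surface with an EventID of 6871 in the Get-SPLogEvent output.

# Sketch only: search the last hour of ULS entries for incoming e-mail errors.
# Assumption: the failing entries carry EventID 6871; widen the time window as needed.
Get-SPLogEvent -StartTime (Get-Date).AddHours(-1) |
    Where-Object { $_.EventID -eq "6871" } |
    Select-Object Timestamp, Category, Message |
    Format-List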

The error is down to the incoming mail timer job somehow now being dependent on your site collection having a quota set. If you leave the site collection with the default quota of '0', the emails fail to be picked up by SharePoint. However, this is not always consistent: I have a working SharePoint farm on the same farm version level with no quotas set on the site collection, and incoming mail works perfectly (very weird). So if you are unlucky enough to encounter this issue, help is at hand; there are two options you can run with, detailed below.

Option 1: Configure your site collection with a site quota to your required specification. Once you make the change, the incoming emails sitting in your drop folder will be picked up the next time the timer job runs.

Go to Central Administration > Application Management > Configure quotas and locks > select the relevant site collection from the drop-down menu > in the Site Quota Information section, set the limit to your desired number (e.g. 10000 MB). Note: this will be the new storage limit of your site collection, so make sure you have catered for future growth.
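
If you would rather script the change, the same quota can be applied with Set-SPSite. A minimal sketch, with the URL and sizes as placeholders:

# Minimal sketch: set a 10000 MB quota with an 8000 MB warning level.
# The site URL and sizes are placeholders - size them for your own growth plans.
Set-SPSite -Identity "http://siteurl" -MaxSize 10000MB -WarningSize 8000MB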

Option 2: Download and apply the Microsoft hotfix KB2598348, which includes the fix for this bug. One thing to take into consideration with this resolution is that it will likely bring your farm version level up to July 2012, so you will need to test the new farm version in your development or staging environment to ensure it is stable for your site collections.

European SharePoint Conference Copenhagen 2013 – Forensics for IT Pros

On the Tuesday of the SharePoint conference I attended an interesting session by Jason Kaczor on 'Forensics for IT pros and administrators'. This session had a beware sign stamped all over it for developers, as it was mainly aimed at giving IT pros the knowledge and tools to check custom code (thoroughly!) before it is deployed.

In the majority of cases, issues with SharePoint are caused by customisations (custom code), and the issue you are most likely to discover from bad custom code is a memory leak. Below are some of the tips and recommendations from the session that I found useful:

  • Only accept .wsp files (cabinet files) for deployment to your environment. Reject .exe, .msi and .bat files.
  • If you have to deploy an .msi (e.g. from a vendor), unpack the file using msiexec or 7-Zip and inspect the files. Deploy to your test environment first and look out for a licence file.
  • The wsp should contain a manifest.xml, which will list the solutions and features.
  • The solutions have no version numbers, but the features do, as feature versions are used to update or roll back a solution. Ensure the version number is an increment on the previous version.
  • If the .wsp contains no DLLs then the deployment will generally be safe. If your deployment contains multiple DLLs then it is a potential risk to your environment.
  • Run SPDisposeCheck against the compiled code (DLLs); it checks for potential memory leaks in the code (the disposal pattern it looks for is sketched after this list).
  • If you already have custom code deployed to the GAC in your farm, you can use tools like WinDiff or WinMerge to extract the structure of the files to a safe place for future reference.
  • The solution 'Lapointe.SharePoint2010.Automation.wsp' by Gary Lapointe is a must-have tool for any SharePoint environment. It runs a full audit of your SharePoint farm, detailing all custom code deployed to it.
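
For context, the kind of leak SPDisposeCheck flags is an SPSite or SPWeb that is opened but never disposed. The safe pattern, sketched here in PowerShell rather than the C# the tool actually inspects, looks like this:

# Sketch of the disposal pattern SPDisposeCheck looks for: SPSite/SPWeb objects hold
# content database connections and must be disposed even if an error occurs part-way through.
$s = Get-SPSite "http://siteurl"
try {
    $w = $s.OpenWeb()
    try {
        write-host "Working against" $w.Url
    }
    finally {
        $w.Dispose()   # always release the SPWeb
    }
}
finally {
    $s.Dispose()       # always release the SPSite
}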

There are some very useful static analysis tools available to help you troubleshoot and analyse custom code, which I have listed below:

  • FxCop 10 – Performs static code analysis of .NET code.
  • Gendarme – Extensible rule-based tool to find problems in .NET applications.
  • CAT.NET – A binary code analysis tool for finding security vulnerabilities.
  • Dependency Walker – Finds DLL dependencies.
  • Perfmon – Use it to find memory leaks.
  • ILSpy – Open-source .NET assembly browser and decompiler.