SDL Tridion 2009: Creating components through TOM API (via Interop) fails - tridion

I am facing a problem while creating components through the TOM API using .NET/COM Interop.
Actual issue:
I have 550 components to be created through a custom page. I am able to create between 400 and 470 components, but after that the process fails with an error message saying:
Error: Thread was being aborted.
Any idea or suggestion why it is failing?
OR
Is there any restriction in Tridion 2009?
UPDATE 1:
As per @user978511's request, below is the error from the Application event log:
Event code: 3001
Event message: The request has been aborted.
...
...
Process information:
Process ID: 1016
Process name: w3wp.exe
Account name: NT AUTHORITY\NETWORK SERVICE
Exception information:
Exception type: HttpException
Exception message: Request timed out.
...
...
...
UPDATE 2:
@Chris: This is my common function, which is called in a loop, passing a list of parameters. Here I am using the Interop DLLs.
public static bool CreateFareComponent(.... list of params ...)
{
    TDSE mTDSE = null;
    Folder mFolder = null;
    Component mComponent = null;
    bool flag = false;
    try
    {
        mTDSE = TDSEInitialize();
        mComponent = (Component)mTDSE.GetNewObject(ItemType.ItemTypeComponent, folderID, null);
        mComponent.Schema = (Schema)mTDSE.GetObject(constants.SCHEMA_ID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadAll);
        mComponent.Title = compTitle;
        ...
        ...
        ...
        ...
        mComponent.Save(true);
        flag = true;
    }
    catch (Exception ex)
    {
        CustomLogger.Error(String.Format("Logged User: {0} \r\n Error: {1}", GetRemoteUser(), ex.Message));
    }
    return flag;
}
Thanks in advance.

Sounds like a timeout, most likely in IIS which is hosting your custom page.
Are you creating them all in one synchronous request? Because that is indeed likely to time out.
You could instead create them in batches - or make sure your operations are done asynchronously and then poll the status regularly.
The easiest would just be to only create say 10 Components in one request, wait for it to finish, and then create another 10 (perhaps with a nice progress bar? :))
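Purely as an illustration of that idea (not from the original answer), here is a minimal sketch that processes one batch per request. It assumes the question's CreateFareComponent helper, a hypothetical FareParams type wrapping its parameters, and a using directive for System.Linq:
// Hypothetical sketch: process only one batch of 10 per request; the caller
// invokes this repeatedly with an increasing batchIndex and shows progress.
public static int CreateBatch(List<FareParams> allParams, int batchIndex)
{
    const int batchSize = 10;
    int created = 0;
    foreach (FareParams p in allParams.Skip(batchIndex * batchSize).Take(batchSize))
    {
        if (CreateFareComponent(p))   // the question's helper, called per item
            created++;
    }
    return created; // number created in this batch; 0 means nothing left to do
}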

How do you create the TDSE object? I would like to mention the "Marshal.ReleaseComObject" procedure here. Not releasing COM objects can lead to enormous memory leaks.
Here is code for creating a component:
private Component NewComponent(string componentName, string publicationID, string parentID, string schemaID)
{
    Publication publication = (Publication)mTdse.GetObject(publicationID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Folder folder = (Folder)mTdse.GetObject(parentID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Schema schema = (Schema)mTdse.GetObject(schemaID, EnumOpenMode.OpenModeView, publicationID, XMLReadFilter.XMLReadContext);
    Component component = (Component)mTdse.GetNewObject(ItemType.ItemTypeComponent, folder, publication);
    component.Title = componentName;
    component.Schema = schema;
    return component;
}
After that, please don't forget to release mTdse (in my case, the previously created TDSE object). Releasing the Component objects after you have finished working with them can also be useful.
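As a minimal sketch of that cleanup (not part of the original answer), assuming the NewComponent method above, its parameters, and a previously created mTdse instance:
// Requires: using System.Runtime.InteropServices;
Component component = null;
try
{
    component = NewComponent(componentName, publicationID, parentID, schemaID);
    component.Save(true);
}
finally
{
    // Release the COM wrappers explicitly instead of waiting for the finalizer.
    if (component != null) Marshal.ReleaseComObject(component);
    if (mTdse != null) Marshal.ReleaseComObject(mTdse);
}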

For large Tridion batch operations I always use a Console Application and run it directly on the server.
Use Console.WriteLine to write to the output window and Console.ReadLine as the last line of code in the app (so the window stays open). I also use Log4Net as the logger.
This is by far the best approach if you have access to a remote session on the server - or can ask an admin to run it for you and give you access to the log folder via a network share.
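A rough skeleton of such a console app (just a sketch, assuming the question's CreateFareComponent helper, a hypothetical FareParams type, and a hypothetical LoadFareParams method that returns the 550 parameter sets):
using System;
using System.Collections.Generic;

class BulkCreateComponents
{
    static void Main()
    {
        List<FareParams> allParams = LoadFareParams(); // hypothetical: read the input data
        int done = 0;
        foreach (FareParams p in allParams)
        {
            bool ok = CreateFareComponent(p); // the question's helper
            done++;
            Console.WriteLine("{0}/{1} {2}", done, allParams.Count, ok ? "created" : "FAILED");
        }
        Console.WriteLine("Finished. Press Enter to close.");
        Console.ReadLine(); // keep the window open
    }
}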

As per @Chris's suggestion, and as an immediate fix, I have changed my web.config execution timeout to 8000 seconds.
<httpRuntime executionTimeout="8000"/>
With this change, the custom page is able to handle the load for now.
If there are any better suggestions, please post them.

Related

Realm doesn’t work with xUnit and .NET Core

I’m having issues running Realm with xUnit and .NET Core. Here is a very simple test that I want to run:
public class UnitTest1
{
    [Scenario]
    public void Test1()
    {
        var realm = Realm.GetInstance(new InMemoryConfiguration("Test123"));
        realm.Write(() =>
        {
            realm.Add(new Product());
        });
        var test = realm.All<Product>().First();
        realm.Write(() => realm.RemoveAll());
    }
}
I get different exceptions on different machines (Windows & Mac) on the line where I try to create a Realm instance with InMemoryConfiguration.
On Mac I get the following exception
libc++abi.dylib: terminating with uncaught exception of type realm::IncorrectThreadException: Realm accessed from incorrect thread.
On Windows I get the following exception when running
ERROR Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. at
System.Net.Sockets.NetworkStream.Read(Span`1 destination) at
System.Net.Sockets.NetworkStream.ReadByte() at
System.IO.BinaryReader.ReadByte() at
System.IO.BinaryReader.Read7BitEncodedInt() at
System.IO.BinaryReader.ReadString() at
Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.LengthPrefixCommunicationChannel.NotifyDataAvailable() at
Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.TcpClientExtensions.MessageLoopAsync(TcpClient client, ICommunicationChannel channel, Action`1 errorHandler, CancellationToken cancellationToken)
Source: System.Net.Sockets HResult: -2146232800
Inner Exception: An existing connection was forcibly closed by the remote host HResult: -2147467259
I’m using Realm 3.3.0 and xUnit 2.4.1
I’ve tried downgrading to Realm 2.2.0, and it didn’t work either.
The solution to this problem was found in this GitHub post.
Here is the piece of code from that post that helped me solve the issue:
Realm GetInstanceWithoutCapturingContext(RealmConfiguration config)
{
    var context = SynchronizationContext.Current;
    SynchronizationContext.SetSynchronizationContext(null);
    Realm realm = null;
    try
    {
        realm = Realm.GetInstance(config);
    }
    finally
    {
        SynchronizationContext.SetSynchronizationContext(context);
    }
    return realm;
}
Though it took a while for me to apply this to my solution.
First and foremost, instead of just setting the context to null, I am using Nito.AsyncEx.AsyncContext, because otherwise automatic changes will not be propagated across threads; Realm needs a non-null SynchronizationContext for that feature to work. So, in my case, the method looks something like this:
public class MockRealmFactory : IRealmFactory
{
    private readonly SynchronizationContext _synchronizationContext;
    private readonly string _defaultDatabaseId;

    public MockRealmFactory()
    {
        _synchronizationContext = new AsyncContext().SynchronizationContext;
        _defaultDatabaseId = Guid.NewGuid().ToString();
    }

    public Realm GetRealmWithPath(string realmDbPath)
    {
        var context = SynchronizationContext.Current;
        SynchronizationContext.SetSynchronizationContext(_synchronizationContext);
        Realm realm;
        try
        {
            realm = Realm.GetInstance(new InMemoryConfiguration(realmDbPath));
        }
        finally
        {
            SynchronizationContext.SetSynchronizationContext(context);
        }
        return realm;
    }
}
Further, this fixed a lot of failing unit tests. But I was still receiving that same exception - Realm accessed from incorrect thread. And I had no clue why, because everything was set correctly. Then I found that the failing tests were related to methods where I was using the async Realm API, in particular realm.WriteAsync. After some more digging I found the following lines in the Realm documentation:
It is not a problem if you have set SynchronisationContext.Current but
it will cause WriteAsync to dispatch again on the thread pool, which
may create another worker thread. So, if you are using Current in your
threads, consider calling just Write instead of WriteAsync.
In my code there was no direct need to use the async API. I removed it, replaced it with the synchronous Write, and all the tests became green again! I guess if I find myself in a situation where I do need the async API because of some kind of bulk insertion, I'd either mock that specific API or replace it with my own background thread using Task.Run instead of Realm's version.
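Just to illustrate that last point (my own sketch, not code from the thread), a background-thread bulk insert using the synchronous Write, assuming a Product class, using directives for Realms and System.Threading.Tasks, and remembering that Realm instances are thread-confined, so the task opens its own instance:
// Sketch: bulk insert on a background thread with the synchronous Write API.
Task BulkInsertAsync(RealmConfigurationBase config, IReadOnlyCollection<Product> products)
{
    return Task.Run(() =>
    {
        using (var realm = Realm.GetInstance(config))   // new instance for this thread
        {
            realm.Write(() =>
            {
                foreach (var product in products)
                    realm.Add(product);
            });
        }
    });
}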

Monitor the "active state" of Biztalk send port service instance

Team,
My BizTalk send port instance gets hung and stays in the active state for long periods of time. I would like to monitor that send port's active instance with the help of C#.
I intend to run code that will check whether the send port (passed as a parameter) is still in a running state. Can anyone help me with that piece of code?
Use the WMI MSBTS_ServiceInstance.ServiceStatus property:
// Requires a reference to System.Management and "using System.Management;"
public static int GetRunningServiceInstanceCount()
{
    int countofServiceInstances = 0;
    try
    {
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(
            "root\\MicrosoftBizTalkServer",
            "SELECT * FROM MSBTS_ServiceInstance WHERE ServiceStatus = 1 or ServiceStatus = 2");
        countofServiceInstances = searcher.Get().Count;
        return countofServiceInstances;
    }
    catch (ManagementException exWmi)
    {
        throw new System.Exception("An error occurred while querying for WMI data: " + exWmi.Message);
    }
}
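Not from the original answer, but as a rough sketch of narrowing that query to a single send port, assuming the MSBTS_ServiceInstance class's ServiceName property matches your send port's name on your environment (worth verifying first):
// Hypothetical variant: same ServiceStatus filter as above, but limited to one send port.
// Requires a reference to System.Management.
public static int GetRunningServiceInstanceCount(string sendPortName)
{
    string query = string.Format(
        "SELECT * FROM MSBTS_ServiceInstance WHERE (ServiceStatus = 1 OR ServiceStatus = 2) AND ServiceName = '{0}'",
        sendPortName.Replace("'", "''")); // naive escaping of the WQL string literal
    using (ManagementObjectSearcher searcher = new ManagementObjectSearcher("root\\MicrosoftBizTalkServer", query))
    {
        return searcher.Get().Count;
    }
}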
To get to your actual problem: The SFTP adapter in BizTalk 2016 has a great way of using the most recent version of the FTP code. This might solve stability issues.
Judging from your BizTalk 2013 tag, you're probably not using the 2016 version; in that case, double-check that you are at least on CU3, since that one fixes a few critical SFTP bugs.

NSFileProtectionComplete doesn't encrypt the core data file

I am using Xcode 7.3 for iOS 9.3 to try and encrypt a Core Data file. I am trying to use NSPersistentStoreFileProtectionKey and set it to NSFileProtectionComplete to enable the encryption. It is not working for some reason and I can always see the .sqlite file generated by the app and browse through the content in sqlitebrowser or iexplorer. Here is my code :
lazy var persistentStoreCoordinator: NSPersistentStoreCoordinator = {
    // The persistent store coordinator for the application. This implementation creates and returns a coordinator, having added the store for the application to it. This property is optional since there are legitimate error conditions that could cause the creation of the store to fail.
    // Create the coordinator and store
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: self.managedObjectModel)
    let url = self.applicationDocumentsDirectory.URLByAppendingPathComponent("SingleViewCoreData.sqlite")
    var failureReason = "There was an error creating or loading the application's saved data."
    let dict: [NSObject : AnyObject] = [
        NSPersistentStoreFileProtectionKey : NSFileProtectionComplete
    ]
    do {
        try coordinator.addPersistentStoreWithType(NSSQLiteStoreType, configuration: nil, URL: url, options: dict)
    } catch {
        // Report any error we got.
        var dict = [String: AnyObject]()
        dict[NSLocalizedDescriptionKey] = "Failed to initialize the application's saved data"
        dict[NSLocalizedFailureReasonErrorKey] = failureReason
        dict[NSUnderlyingErrorKey] = error as NSError
        let wrappedError = NSError(domain: "YOUR_ERROR_DOMAIN", code: 9999, userInfo: dict)
        // Replace this with code to handle the error appropriately.
        // abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
        NSLog("Unresolved error \(wrappedError), \(wrappedError.userInfo)")
        abort()
    }
    do {
        let url = self.applicationDocumentsDirectory.URLByAppendingPathComponent("SingleViewCoreData.sqlite")
        try NSFileManager.defaultManager().setAttributes([NSFileProtectionKey : NSFileProtectionComplete], ofItemAtPath: url.path!)
    } catch {
    }
    do {
        let url = self.applicationDocumentsDirectory.URLByAppendingPathComponent("SingleViewCoreData.sqlite-wal")
        try NSFileManager.defaultManager().setAttributes([NSFileProtectionKey : NSFileProtectionComplete], ofItemAtPath: url.path!)
        // try print(NSFileManager.defaultManager().attributesOfFileSystemForPath(String(url)))
    } catch {
    }
    do {
        let url = self.applicationDocumentsDirectory.URLByAppendingPathComponent("SingleViewCoreData.sqlite-shm")
        try NSFileManager.defaultManager().setAttributes([NSFileProtectionKey : NSFileProtectionComplete], ofItemAtPath: url.path!)
        // try print(NSFileManager.defaultManager().attributesOfFileSystemForPath(String(url)))
    } catch {
    }
    return coordinator
}()
I have also enabled Data Protection for my target under "Capabilities". I have regenerated the provisioning profile from the Apple Developer portal and am using that with Data Protection enabled.
I am also using the following code to check the file attributes of the .sqlite, .sqlite-wal and .sqlite-shm files. NSFileProtectionKey is correctly set for all 3 of them.
func checkProtectionForLocalDb(atDir : String) {
    let fileManager = NSFileManager.defaultManager()
    let enumerator: NSDirectoryEnumerator = fileManager.enumeratorAtPath(atDir)!
    for path in enumerator {
        let attr : NSDictionary = enumerator.fileAttributes!
        print(attr)
    }
}
I also tried disabling journal mode to prevent the -wal and -shm files from being created, but I can still read the .sqlite file, even though the attributes read NSFileProtectionComplete.
As described in the Apple documentation under "Protecting Data Using On-Disk Encryption", I tried to check whether the value of the protectedDataAvailable variable changes, as shown in the code below:
public func applicationDidEnterBackground(application: UIApplication) {
    // Use this method to release shared resources, save user data, invalidate timers, and store enough application state information to restore your application to its current state in case it is terminated later.
    // If your application supports background execution, this method is called instead of applicationWillTerminate: when the user quits.
    NSThread.sleepForTimeInterval(10)
    sleep(10)
    let dataAvailable : Bool = UIApplication.sharedApplication().protectedDataAvailable
    print("Protected Data Available : " + String(dataAvailable))
}
If I check the value without the delay it is set to true, but after adding the delay it is set to false. This is kind of encouraging; however, right after that, when I download the container to inspect the content, it still has the .sqlite file, which still shows the content when opened in sqlitebrowser.
Ok, I finally understand this.
Using Xcode 7.3.1
Enabling File Protection
Enable File Protection using the Capabilities tab on your app target
If you do not want the default NSFileProtectionComplete, change this setting in the developer portal under your app id
Make sure Xcode has the new provisioning profile this creates.
For protecting files your app creates, that's it.
To protect Core Data, you need to add the NSPersistentStoreFileProtectionKey: NSFileProtectionComplete option to your persistent store.
Example:
var options: [NSObject : AnyObject] = [NSMigratePersistentStoresAutomaticallyOption: true,
                                       NSPersistentStoreFileProtectionKey: NSFileProtectionComplete,
                                       NSInferMappingModelAutomaticallyOption: true]
do {
    try coordinator!.addPersistentStoreWithType(NSSQLiteStoreType, configuration: nil, URL: url, options: options)
Testing File Protection
I am not able to test this using a non-jailbroken device connected to a computer. Every attempt to access the device this way requires that I "trust" the computer, and I believe that trusted computers are always able to read the phone's data ("Trusted computers can sync with your iOS device, create backups, and access your device's photos, videos, contacts, and other content" - https://support.apple.com/en-us/HT202778). I think the other answers on SO referencing this technique are no longer valid with more recent versions of iOS. Indeed, I am always able to download the container using Xcode and view the app's data using iPhone Explorer. So how to test...
1 - Create an archive and ensure that it has the proper entitlements by running the following on the .app file from the command line:
codesign -d --entitlements :- <path_to_app_binary>
You should see a key/value pair that represents your Data Protection level. In this example, NSFileProtectionComplete:
<key>com.apple.developer.default-data-protection</key>
<string>NSFileProtectionComplete</string>
In addition, I used the following two techniques to satisfy myself that the data protection is indeed working. They both require code changes.
2 - Add some code to verify that the proper NSFileProtectionKey is being set on your files and/or core data store:
NSFileManager.defaultManager().attributesOfItemAtPath(dbPath.path!)
If I print this out on one of my files I get:
["NSFileCreationDate": 2016-10-14 02:06:39 +0000, "NSFileGroupOwnerAccountName": mobile, "NSFileType": NSFileTypeRegular, "NSFileSystemNumber": 16777218, "NSFileOwnerAccountName": mobile, "NSFileReferenceCount": 1, "NSFileModificationDate": 2016-10-14 02:06:39 +0000, "NSFileExtensionHidden": 0, "NSFileSize": 81920, "NSFileGroupOwnerAccountID": 501, "NSFileOwnerAccountID": 501, "NSFilePosixPermissions": 420, "NSFileProtectionKey": NSFileProtectionComplete, "NSFileSystemFileNumber": 270902]
Note the "NSFileProtectionKey": "NSFileProtectionComplete" pair.
3 - Modify the following code and hook it up to some button in your app.
@IBAction func settingButtonTouch(sender: AnyObject) {
    updateTimer = NSTimer.scheduledTimerWithTimeInterval(0.5, target: self,
        selector: #selector(TabbedOverviewViewController.runTest), userInfo: nil, repeats: true)
    registerBackgroundTask()
}
var backgroundTask: UIBackgroundTaskIdentifier = UIBackgroundTaskInvalid
var updateTimer: NSTimer?
func registerBackgroundTask() {
    backgroundTask = UIApplication.sharedApplication().beginBackgroundTaskWithExpirationHandler {
        [unowned self] in
        self.endBackgroundTask()
    }
    assert(backgroundTask != UIBackgroundTaskInvalid)
}
func endBackgroundTask() {
    NSLog("Background task ended.")
    UIApplication.sharedApplication().endBackgroundTask(backgroundTask)
    backgroundTask = UIBackgroundTaskInvalid
}
func runTest() {
    switch UIApplication.sharedApplication().applicationState {
    case .Active:
        NSLog("App is active.")
        checkFiles()
    case .Background:
        NSLog("App is backgrounded.")
        checkFiles()
    case .Inactive:
        break
    }
}
func checkFiles() {
    // attempt to access a protected resource, i.e. a core data store or file
}
When you tap the button this code begins executing the checkFiles method every 0.5 seconds. This should run indefinitely with the app in the foreground or background - until you lock your phone. At that point it should reliably fail after roughly 10 seconds - exactly as described for NSFileProtectionComplete.
We need to understand how Data Protection works.
Actually, you don't even need to enable it. Starting with iOS 7, the default protection level is “File Protection Complete until first user authentication.”
This means that the files are not accessible until the user unlocks the device for the first time. After that, the files remain accessible even when the device is locked and until it shuts down or reboots.
The other thing is that you're going to see the app's data on a trusted computer always - regardless of the Data Protection level setting.
However, the data can’t be accessed if somebody tries to read them from the flash drive directly. The purpose of Data Protection is to ensure that sensitive data can’t be extracted from a password-protected device’s storage.
After running this code, I could still access and read the contents written to protectedFileURL, even after locking the device.
do {
    try data.write(to: protectedFileURL, options: .completeFileProtectionUnlessOpen)
} catch {
    print(error)
}
But that's normal since I ran iExplorer on a trusted computer.
And for the same reason, it's fine if you see your sqlite file.
The situation is different if your device gets lost or stolen. A hacker won't be able to read the sqlite file since it's encrypted. Well, unless he guesses your passcode somehow.
Swift 5.0 & Xcode 11:
Enable "Data Protection" in "Capabilities".
Use the following code to protect a file or folder at a specific path:
// Protects a file or folder + excludes it from backup.
// - parameter path: Path component of the file.
// - parameter fileProtectionType: `FileProtectionType`.
// - returns: True, when protected successful.
static func protectFileOrFolderAtPath(_ path: String, fileProtectionType: FileProtectionType) -> Bool {
    guard FileManager.default.fileExists(atPath: path) else { return false }
    let fileProtectionAttrs = [FileAttributeKey.protectionKey: fileProtectionType]
    do {
        try FileManager.default.setAttributes(fileProtectionAttrs, ofItemAtPath: path)
        return true
    } catch {
        assertionFailure("Failed protecting path with error: \(error).")
        return false
    }
}
(Optional) Use the following code to check whether the file or folder at the specific path is protected (note: This only works on physical devices):
/// Returns true, when the file at the provided path is protected.
/// - parameter path: Path of the file to check.
/// - note: Returns true, for simulators. Simulators do not have hardware file encryption. This feature is only available for real devices.
static func isFileProtectedAtPath(_ path: String) -> Bool {
    guard !Environment.isSimulator else { return true } // file protection does not work on the simulator!
    do {
        let attributes = try FileManager.default.attributesOfItem(atPath: path)
        if attributes.contains(where: { $0.key == .protectionKey }) {
            return true
        } else {
            return false
        }
    } catch {
        assertionFailure(String(describing: error))
        return false
    }
}
Rather than encrypting a file at the local level, I set NSFileProtectionComplete for the app as a whole.
Create the file 'entitlements.plist' in your app's root folder with the following content.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>DataProtectionClass</key>
<string>NSFileProtectionComplete</string>
</dict>
</plist>
Then, if you haven't done so already (this could be the problem with your file-level encryption), enable Data Protection in your app's capabilities.

Getting App_Start Code First Migrations to work with Miniprofiler

I am running code first migrations. (EF 4.3.1)
I am also running Miniprofiler.
I run my code first migrations through code on App_Start.
My code looks like this:
public static int IsMigrating = 0;

private static void UpdateDatabase()
{
    try
    {
        if (0 == System.Threading.Interlocked.Exchange(ref IsMigrating, 1))
        {
            try
            {
                // Automatically migrate database to catch up.
                Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(new Exception("Checking db for pending migrations.")));
                var dbMigrator = new DbMigrator(new Ninja.Data.Migrations.Configuration());
                var pendingMigrations = string.Join(", ", dbMigrator.GetPendingMigrations().ToArray());
                Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(new Exception("The database needs these code updates: " + pendingMigrations)));
                dbMigrator.Update();
                Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(new Exception("Done upgrading database.")));
            }
            finally
            {
                System.Threading.Interlocked.Exchange(ref IsMigrating, 0);
            }
        }
    }
    catch (System.Data.Entity.Migrations.Infrastructure.AutomaticDataLossException ex)
    {
        Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(ex));
    }
    catch (Exception ex)
    {
        Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(ex));
    }
}
The problem is that just as my dbMigrator.Update() is about to be called, my app throws an exception, which I think comes from the first web page request,
saying:
Unable to update database to match the current model because there are pending changes and automatic migration is disabled. Either write the pending model changes to a code-based migration or enable automatic migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to true to enable automatic migration.
The problem is that I think my homepage is spinning up the DbContext, and triggering this error, before my database update has finished.
How would you go about solving this?
Should I make the context wait using locks, etc., or is there an easier way?
More interestingly, if I start and stop the app a few times, the DB changes are pushed and the error goes away...
So I need to find a way to have the first request to the database on App_Start wait for the migrations to happen.
Thoughts?

ASP.Net Page for file upload stops processing in middle of log statement

We have a very simple ASP.Net page for uploading a file to our webserver. The page has no controls - a client uses it to automatically send us a file each night.
On occasion, the file seems to not get to us, but the client reports that they have sent it.
We added some logging statements to the page, and discovered something quite odd. The page ceases to execute right in the middle of a log statement. No exceptions, just up and dies.
Here is the code-behind:
protected void Page_Load(object sender, EventArgs e) {
    try {
        // record that request came in at all
        log.Debug("Update Inventory page requested through HTTP {2} on {0} {1}", DateTime.Now.ToShortDateString(), DateTime.Now.ToLongTimeString(), IsPostBack ? "POST" : "GET");
        // make sure directory exists
        string basePath = Server.MapPath("~/admin/uploads/");
        log.Debug("Saving to folder {0}", basePath);
        if (!Directory.Exists(basePath)) {
            log.Debug("Creating folder {0}", basePath);
            Directory.CreateDirectory(basePath);
        }
        // generate a unique file name
        string fileName = DateTime.Now.Ticks.ToString() + ".dat";
        string path = basePath + fileName;
        log.Debug("Filename to save is {0}", fileName);
        // record initial bytes of stream/file
        StreamReader reader = new StreamReader(stream);
        string fileContents = reader.ReadToEnd();
        log.Debug("File received by GET is " + fileContents.Length + " characters long and begins with: "
            + Environment.NewLine + fileContents.Substring(0, Math.Min(fileContents.Length, 1000)));
        // write out file
        File.WriteAllText(path, fileContents);
        log.Debug("Update Inventory page processing finished.");
        // trap for and record any and all exceptions
    }
    catch (Exception ex) {
        log.Debug(ex);
    }
}
The processing seems to die in the middle of the log statement that outputs the length and first portion of the fileContents variable. The logging that occurs when the process fails looks like this:
2010-08-02 02:46:01.7342|DEBUG|UpdateInventory|Update Inventory page requested through HTTP GET on 8/2/2010 2:46:01 AM
2010-08-02 02:46:01.7655|DEBUG|UpdateInventory|Saving to folder c:\hosting\sites\musicgoround.com\wwwroot\admin\uploads\
2010-08-02 02:46:01.7811|DEBUG|UpdateInventory|Filename to save is 634163139617811250.dat
2010-08-02 02:48:02.3905|DEBUG|UpdateInventory|
I really don't understand what to make of this.
I assume that if there was an error in the transmission of the file, an exception would be thrown from the reader.ReadToEnd() line. And if not an exception, I would expect the page processing to continue, but that I might only receive part of the file (in which case it should still log something).
The logging statement is only accessing a string variable, and it's inside a try-catch. NLog is the logging component we use, and we access that through the facade provided by the Simple Logging Facade project on Codeplex. So, we trust the logging component to be more or less bulletproof - we certainly don't see anything in our usage of it here that should be causing problems.
So, what's the deal? Why on earth could this page just up and stop processing like this?
The fact that we get a half-finished logging statement seems to point towards an error being swallowed in the logging system - but that just seems so unlikely - and we have NLog's internal logging on and it is not reporting any problems.
The most likely candidate is that this line:
2010-08-02 02:48:02.3905|DEBUG|UpdateInventory|
is caused by this:
log.Debug(ex);
That is, it is throwing an exception, but the logger is not recording anything useful. Why don't you try switching the log levels around a bit, e.g. change the exception logging level to Error:
log.Error(ex);
That way you can see if it is actually throwing an exception and it is just the logger not recording the exception string properly.
