We are currently trying to handle a rather large file through our BizTalk process and are constantly getting OutOfMemoryExceptions when processing. I have a custom disassembler that processes each record individually so as not to read the entire message into memory. The environment is currently in dev, so SQL and BizTalk run on the same machine. The machine has 16 GB of memory, but BizTalk is only 32-bit.
Are there any Host settings I can change to allow the file to be processed from start to finish?
Any reason you can't run 64-bit BizTalk? There are Host Settings you can fiddle with, but they're related to Throttling. An OutOfMemoryException is a hard error that BizTalk really has little to no control over.
What you're describing is definitely doable. I've done it.
First look into the basics of your component. Are your message instances being properly de-referenced and all?
How are you submitting to the MessageBox? If it's through the normal Disassembler API, then those messages are going to sit in memory for a while. Using a VirtualStream can help with that.
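For illustration, here is a minimal C# sketch (not your actual disassembler; the helper method and part name are made up) of wrapping each record's stream in a VirtualStream before handing it back from GetNext, assuming the Microsoft.BizTalk.Streaming and pipeline interop assemblies:

using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;
using Microsoft.BizTalk.Streaming;

// Hypothetical helper called from GetNext() for each record.
private IBaseMessage BuildRecordMessage(IPipelineContext context, Stream recordStream)
{
    // VirtualStream keeps small payloads in memory and overflows larger ones
    // to a temp file, so pending messages don't all stay resident in RAM.
    var buffered = new VirtualStream();
    var chunk = new byte[4096];
    int read;
    while ((read = recordStream.Read(chunk, 0, chunk.Length)) > 0)
        buffered.Write(chunk, 0, read);
    buffered.Position = 0;

    IBaseMessageFactory factory = context.GetMessageFactory();
    IBaseMessage message = factory.CreateMessage();
    IBaseMessagePart bodyPart = factory.CreateMessagePart();
    bodyPart.Data = buffered;
    message.AddPart("Body", bodyPart, true);

    // Let the pipeline dispose the temp-backed stream once it is done with the message.
    context.ResourceTracker.AddResource(buffered);
    return message;
}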
When I develop and update files on production server with PHP I just copy the files on the fly and everything seems to work without interrupting the server.
But to update the code of the Go server and application, I would need to kill the server, copy the source files to the server, run go install, and then start the server again. This interrupts the service, and if I do it quite often it is going to look very bad to the users of my service.
How can I update files without the downtime when using Go with Go's http server?
PHP is an interpreted language, which means you provide your code in source format and the PHP interpreter will read it and execute it (it may create a more compact binary form so that it doesn't have to analyze the source again when needed).
Go is a compiled language; it compiles into a native executable binary. Going further, it is statically linked, which means all the code and libraries your app refers to are compiled and linked in when the executable is created. This implies you can't just "drop in" new Go modules into a running application.
You have to stop your running application and start the new version. You can however minimize the downtime: only stop the running application when the new version of the executable is already created and ready to be run. You may choose to compile it on a remote machine and upload the binary to the server, or upload the source and compile it on the server, it doesn't matter.
With this you could decrease the downtime to a maximum of a few seconds, which your users won't notice. Also, you shouldn't update every hour - you can't really achieve significant updates in just an hour of coding. You could schedule updates daily (or even less frequently), and you could schedule them for hours when your traffic is low.
If even a few seconds downtime is not acceptable to you, then you should look for platforms which handle this for you automatically without any downtime. Check out Google App Engine - Go for example.
The grace library will allow you to do graceful restarts without annoyance for your users: https://github.com/facebookgo/grace
Yet in my experience restarting Go applications is so quick that, unless you have a high-traffic website, it won't cause any trouble.
First of all, don't do it in that order. Copy and install first. Then you could stop the old process and run the new one.
If you run multiple instances of your app, then you can do a rolling update, so that when you bounce one server, the other ones are still serving. A similar approach is to do blue-green deployments, which has the advantage that the code your active cluster is running is always homogeneous (whereas during a rolling deploy, you'll have a mixture until they've all rolled), and you can also do a blue-green deployment where you normally have only one instance of your app (whereas rolling requires more than one). It does however require you to have double the instances during the blue-green switch.
One thing you'll want to take into consideration is in-flight requests - you may want to make sure that in-flight requests continue to go to old-code servers until they're finished.
You can also look into Platform-as-a-Service solutions, that can automate a lot of this stuff for you, plus a whole lot more. That way you're not ssh'ing into production servers and copying files around manually. The 12 Factor App principles are always a good place to start when thinking about ops.
Is there any way to recover the physical send & receive ports once deleted without exporting the bindings?
Even if you export the bindings the ports will not exist and so will not be exported. I think Bilal is correct - this data resides in the management database, so you would need to restore from a backup.
Your best option is to always have a binding file configuration for your application. Once your BizTalk applications are configured with the correct send and receive ports, orchestration hosts, etc., make sure you back up your binding configuration using the "Export Bindings" option in the BizTalk admin console.
In fact, in bigger projects binding files are normally owned by the deployment teams for the various environments, and you never create ports using the admin console in real environments.
It would be a bit of overkill to restore your BizTalk databases just to get your ports back. I would rather create them manually again, even if the number of ports is high. Restoring databases requires special attention; if you do it wrong you will end up reconfiguring the whole environment, which will be time consuming.
I agree with Saravana.
My 5c: you can import existing binding files, and that will not remove ports that are not in the binding files. The import process works in "addition" mode.
I have the following scenario:
I publish a page which contains multiple binaries which is then received by an HTTP Receiver and Deployed using an in-process Deployer all hosted in IIS in a dedicated application pool running as the Local Service user.
The Page is stored in the Broker Database, and the binaries are published to the local file system using a path like "D:\Binaries\Preview".
The preview folder is shared to a domain user as a read-only share at something like \\machinename\PreviewBinaries so that the binaries can be displayed using the web application.
Nine times out of ten everything works fine, but occasionally publishing fails, and it seems to be because the binaries cannot be overwritten due to being locked by another process. I have used Process Monitor and other tools to try to establish what might be locking these files (to no avail). Sometimes I can manually delete the images, and then publishing works again. If I restart IIS on the server I can always delete the files and publish.
Does anyone have any suggestions on what processes could be locking these images? Has anyone seen this issue before? Could there be any issues caused by publishing to a share? Or could SiteEdit 2009 possibly be locking these files? It only seems to occur on our preview server; live (which has no SiteEdit) seems fine.
Thanks in advance
If you're on Windows 2008, you can try and delete the file from disk. It will then tell you what process has locked the file. But given that restarting IIS unlocks the file, it seems quite likely that it is IIS that keeps a lock on them.
I don't see how SiteEdit 2009 could cause a lock on these files. Given that you can have your preview server on another box, SiteEdit only talks to that server through HTTP. It never accesses the files on the preview server directly and not even through a CD API. Just regular requests to your web server, just like a visitor would.
Again, not a direct answer but I wanted to share this anyway:
I've seen a similar situation where I published Pages to the Broker Database and Binaries to the file system. When I changed the Identity of the Application Pool to Network Service this problem disappeared, and I haven't looked into it further.
OK, well it seems the offending code was in the Presentation Framework we are using. The framework used Response.TransmitFile(binaryPath) to asynchronously transmit the binaries to the clients. It seems that this puts a temporary lock handle on the binaries (even when they are on a read only share).
We have removed this line of code and modified the application to serve binaries in another way (we now rewrite the path so that IIS can transmit the files directly). This seems to have solved the issue and improved site performance.
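To illustrate the kind of change described above (not the actual framework code; the URL prefix and virtual share path are made up), the idea is roughly to replace the managed Response.TransmitFile call with a path rewrite so the IIS static file handler serves the binary itself:

using System;
using System.Web;

public class BinaryRewriteModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext context = ((HttpApplication)sender).Context;
            string path = context.Request.Path;

            // Old approach (held a handle on the file for the duration of the transfer):
            // context.Response.TransmitFile(@"D:\Binaries\Preview" + path);

            // New approach: hand the request to IIS's static file handling.
            if (path.StartsWith("/Preview/Binaries/", StringComparison.OrdinalIgnoreCase))
                context.RewritePath("/BinariesShare" + path.Substring("/Preview/Binaries".Length));
        };
    }

    public void Dispose() { }
}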
Thanks for all your suggestions, it helped me rule out all the things that were not causing the issue, so I was able to find the root cause.
Are there any anti-virus or indexing services running? These tend to take very short-lived locks at just the moment you don't want them to. Particularly with anti-virus, this is typically just as one process relinquishes its lock and just before your other process tries to take one. If this is the issue, then setting up some exclusion directories should help.
I see you have used Process Monitor, but have you tried Sysinternals Process Explorer? "Find->Find Handle or DLL" is pretty useful for this kind of thing. Or if you prefer a command-line tool, Sysinternals also makes handle.exe, which dumps everything out for you.
Is there any way to edit an Excel sheet that contains macros through an ASP.NET page? I have tried to open the Excel sheet and it seems to just hang rather than load. Testing on a page without macros works perfectly fine.
Disclaimer: I don't know the Excel license agreement and I don't know if utilizing Excel in a server process violates it or not. This is purely a technical description of how to get it working. The reader is advised to check the license agreement to see if it's allowed to do so or not. Different Office versions may have different license agreements. I used this method at several Fortune 100/500 companies and they didn't seem to care. Go figure.
This solution works, but it has some limitations and requires a fair amount of control over the server where it runs. The server also needs to have lots of memory.
To start, make sure that you perform a complete installation of every single Office feature on the server so that Excel won't try to install something if you attempt to use a feature that's not present.
You also need to create a dedicated user account on the server that has the right privileges. I can't tell you what exactly because in my case we controlled the server and we gave admin rights to this user.
When you have the user account, you need to log in as that user and run Excel (preferably all Office applications) at least once so that it can create its settings.
You also need to configure Excel to run under this user account when it's created as a COM object. For this, you need to go into DCOM Config on the server and configure Launch and Activation Permissions for the Excel.Application object to use your new user account. I'm not sure if I remember correctly, but I think after this step, running Excel as an interactive user was slightly problematic.
By default, Office applications try to display various messages on the screen: warnings, questions, etc. These must be turned off because when you utilize an Office application from a web application, it runs on the server so a human user won't be around to dismiss these messages - the Office program will just sit around indefinitely, waiting for the message to be dismissed.
You need to set (at the minimum) these properties:
DisplayAlerts = false
AskToUpdateLinks = false
AlertBeforeOverwriting = false
Interactive = false
Visible = false
FeatureInstall = 0 'msoFeatureInstallNone
to disable UI messages from Excel. If you use Excel 2010, there may be more, but I'm not familiar with that.
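As a minimal C# interop sketch (assuming the Microsoft.Office.Interop.Excel primary interop assembly; adjust member names to your Office version), the settings above translate to something like:

using Microsoft.Office.Core;
using Excel = Microsoft.Office.Interop.Excel;

static class ExcelServerSetup
{
    // Returns an Excel instance with all server-side UI prompts suppressed.
    public static Excel.Application CreateQuietExcel()
    {
        var excel = new Excel.Application();
        excel.DisplayAlerts = false;          // no "Do you want to save..." dialogs
        excel.AskToUpdateLinks = false;       // never ask about refreshing external links
        excel.AlertBeforeOverwriting = false; // never warn before overwriting cells
        excel.Interactive = false;            // ignore keyboard and mouse input
        excel.Visible = false;                // never show a window on the server
        excel.FeatureInstall = MsoFeatureInstall.msoFeatureInstallNone; // never launch the installer
        return excel;
    }
}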
If you have Excel files with macros in them, you may have to disable macro security in Excel - that can't be done programmatically, for obvious reasons.
To access Excel services, implement a manager object that will actually hold the Excel reference - don't try to hold the Excel.Application object in the page because your page code will become very complicated and you may not be able to properly clean things up.
The object that holds the Excel reference may be a separate DLL or an out-of-process server. You must make sure, however, that when you acquire an instance of Excel on a given thread you always create a new Excel instance. The default is that an already running Excel instance will also serve other requests, but that won't work for you because the same Excel instance cannot be shared among multiple threads. Each request-processing thread in IIS must have its own Excel instance - if you share instances, you'll have all kinds of problems. This means that your server will need to have quite a bit of memory to have many instances of Excel running. This was not an issue for me because we controlled the server.
If you can, try to create an out-of-proc (.exe) COM server because this way you can hold the Excel reference in a separate process. It's possible to get it working using an in-proc (.dll) COM object but it'll be more risky to your application pool - if Excel crashes, it'll crash your app pool as well.
When you have an .exe server, you can pass parameters in several possible ways:
Make your manager object a COM object and pass parameters as properties.
Pass parameters as command-line parameters to the .exe as it starts up.
Pass parameters in a text/binary file; pass the name of the file on the command-line.
I used all these and found the COM object option the cleanest.
In your manager object, follow these guidelines (a combined sketch follows this list):
Wrap every single function that uses Excel in a try..catch block to capture any possible exception.
Always explicitly release all Excel objects by calling Marshal.ReleaseComObject() and then setting your variables to null as soon as you don't need them. Always release these objects in a finally block to make sure that a failed Excel method call won't result in a dangling COM object.
If you try to use any formatting features in Excel (page header, margins, etc.) you must have a printer installed and accessible to the user account that you use to run Excel. If you don't have an active printer (preferably attached to the server), formatting-related features may not work.
When an error happens, close the Excel instance that you're using. It's not likely that you can recover from Excel-related errors and the longer you keep the instance, the longer it uses resources.
When you quit Excel, make sure that you guard that code against recursive calls - if your exception handlers try to shut down Excel while your code is already in the process of shutting down Excel, you'll end up with a dead Excel instance.
Call GC.Collect() and GC.WaitForPendingFinalizers() right after calling the Application.Quit() method to make sure that the .NET Framework releases all Excel COM objects immediately.
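Putting the last few guidelines together, a minimal sketch of the release pattern might look like the following (the class and method names are illustrative; a real manager object will have more structure):

using System;
using System.Runtime.InteropServices;
using Excel = Microsoft.Office.Interop.Excel;

static class ExcelWorker
{
    // Illustrative only: open a workbook, do some work, and tear Excel down cleanly.
    public static void ProcessWorkbook(string path)
    {
        Excel.Application excel = null;
        Excel.Workbooks books = null;
        Excel.Workbook book = null;
        try
        {
            excel = new Excel.Application { DisplayAlerts = false, Visible = false };
            books = excel.Workbooks;
            book = books.Open(path);

            // ... real work with the workbook goes here ...

            book.Save();
        }
        finally
        {
            // Release every COM object explicitly, innermost first, even after a failure.
            if (book != null) { Marshal.ReleaseComObject(book); book = null; }
            if (books != null) { Marshal.ReleaseComObject(books); books = null; }
            if (excel != null)
            {
                excel.Quit();
                Marshal.ReleaseComObject(excel);
                excel = null;
            }
            // Force finalizers to run so any remaining RCWs are released immediately.
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
    }
}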
Edit: John Saunders may have a point regarding the license agreement - I can't advise about that. The projects that I did using Word/Excel were all intranet applications at large clients and the requirement to use Word/Excel was a given.
The link he provided also has some tools that may be useful, although those libraries won't have full Excel functionality and if that's what you need, you don't have a choice. If you don't need full Excel functionality, check out those libraries - they may be much simpler to use.
A few links that may be useful to people trying to implement this approach:
StackOverflow question
Possible alternate products
COM server activation and window stations
The story changed a little while ago, with HPC Services for Excel.
With that, you can do Office Automation on a web server. I'm still trying to determine how it fits my situation, but you may want to check it out.
I have an ASP.NET web application. When it needs to produce a file on a remote system, it makes a call (over TCP/IP) to a process on the same machine that creates the file. It creates it on a network share. When that process is finished writing out the file, it sends an "OK" response back to my ASP.NET application. I'm sure you see where this is headed. When the ASP.NET application checks whether the file exists (using File.Exists()), it can sometimes take as long as 8 seconds for it to be "found". Could there be some kind of directory info being cached across Windows networking? And would it be per process?
In summary, one process creates the file, and it takes up to 8 seconds before the other process can see it. How can I overcome this? Again, this is standard Windows networking, Server 2008 to Server 2008.
I have some updates from experimenting: another program running on the desktop can see the file a lot sooner than the ASP.NET application can. The difference can be 7-10 seconds. Why would the ASP.NET/IIS process take so long to see the remote file?
Thanks,
Brian
Is the share perhaps a DFS share? If the answer is yes, then the file is there, but only on one machine, and the second time you connect to that share you may be connecting to another machine that does not have the file yet, as it is still replicating.