I want to create a receive location in BizTalk 2010 that will poll for a file just once a day.
Once one file has been moved, it should stop polling, because another application can create a new file in that directory just one millisecond after the first file is moved, and that new file should not be moved until the next day.
I agree with Filburt: scheduling is not the answer here. You may be able to create an orchestration that only processes once a day and queues up the other files. However, if the existence of that file is somehow 'gating' the other system, then this is a bad design to begin with.
You can put your receive location on a schedule so that it receives only within a given time frame. When you are dealing with milliseconds, however, it is a bad idea to try to control your receive location with timing.
Depending on your requirements, I would configure the receive location to pick up only a file with a specific name (not a wildcard like *.txt), or have the other application create its file in a different location altogether.
Open the receive location properties and click "Configure" next to the FILE type dropdown.
Click Advanced settings, change the Polling interval to 3600000 (one hour). Click OK.
Click the "Batching" tab. Change "Number of messages in a batch:" to 1. Click OK.
Go to the "Schedule" pane. Check "Enable service window."
Set the service window to whenever you want this to run. Make sure you make the window less than 1 hour.
This should do what you want. You can use less than one hour or any other time range, but the service window must be shorter than the polling interval, so that at most one poll happens inside the window each day.
I have a process that adds tabs to Vivaldi (or any browser): one to an external URL and one to a local HTML file. I am able to identify the process IDs associated with each tab.
I want to be able to close the tabs. I have tried kill <id>. That clears the page of the local file, but the tab is still there and the page can be reloaded if I refresh. kill has no effect on the tab associated with the external URL.
Is there a way to do this?
Killing processes is the wrong approach here anyway: apart from causing an unexpected termination rather than an orderly close, nothing guarantees that each tab lives in its own process. Both of them may live in the same process, or share a process with other, unrelated tabs. Bottom line: it's not going to work, or at best it will work only sometimes and cause collateral damage. (Others have asked for such a way before.)
My suggestion would be a browser extension that uses native messaging. You could then ask it via the native messaging function to close certain tabs for you, using the officially supported tabs API that the browser exposes to extensions.
(These links are to the Chrome extension docs, but Vivaldi is Chromium-based as well and supports the same APIs.)
Alternative idea that works without an extension:
Tabs opened through the command line behave as if they were opened by a script of the same origin, insofar as the website in them is able to call window.close(). So depending on your use case, maybe you can arrange for the website in the tab to close the tab by itself.
If one of them is "external" in the sense that you can't control its contents, you could instead have one tab open the other one through JavaScript, because then the first tab can close the second one using close() as well.
If you need a way to communicate to the website running in your tab(s) that you want it to close itself, you could also do something like starting a local server at a random unused port and passing the port into the website via a URL parameter [1], and stopping the server when you want to close the tab. Then, inside your website, you would regularly poll the local server URL using AJAX and close the tab when the request fails [2]. (Remember to return CORS headers for this to work.)
This is just one of several possible ways, and yes it is a bit "hacky" - so I'm open to suggestions on how to improve on this idea.
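Purely to illustrate the flow of that idea, here is a rough sketch of the "manager" side in C# (the page itself would poll http://localhost:<port>/ with AJAX and call window.close() once a request fails). The port-probing loop, the browser command and the file path are all my assumptions for the sketch, not anything Vivaldi-specific:

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Threading;

    class TabCloser
    {
        static void Main()
        {
            var rng = new Random();
            HttpListener listener = null;
            int port = 0;

            // Probe random high ports until one is free (HttpListener cannot ask for port 0).
            while (listener == null)
            {
                port = rng.Next(20000, 60000);
                var candidate = new HttpListener();
                candidate.Prefixes.Add("http://localhost:" + port + "/");
                try { candidate.Start(); listener = candidate; }
                catch (HttpListenerException) { /* port in use, try the next one */ }
            }

            // Answer the page's polling requests (with CORS headers) on a background thread.
            new Thread(() =>
            {
                while (listener.IsListening)
                {
                    try
                    {
                        var ctx = listener.GetContext();
                        ctx.Response.AddHeader("Access-Control-Allow-Origin", "*");
                        ctx.Response.Close();
                    }
                    catch (HttpListenerException) { break; } // thrown once the listener is stopped
                }
            }) { IsBackground = true }.Start();

            // Open the tab; the page reads the port from the query string and starts polling.
            Process.Start("vivaldi", "file:///home/user/page.html?alivePort=" + port);

            Console.WriteLine("Press Enter to close the tab...");
            Console.ReadLine();

            // Stopping the server makes the page's next poll fail, and it closes itself.
            listener.Stop();
        }
    }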
Another alternative (which may or may not fit your use case): instead of opening a tab, you could open a separate popup window for each website by putting --app before the URL on the command line. You could then find the corresponding window by checking which is the newest window with a matching title, and close it programmatically (check out xdotool and xwininfo).
[1]: Why not a fixed port number? Because you can't control whether something else is already listening on that port on the user's machine.
[2]: Why not the other way round, starting the server in order to close the tab? Because then you would have to wait to ensure that the website had noticed you started the server, and if you stopped the server too early the tab would never close. That is extra effort and an extra possible point of failure, for example if there is high CPU usage at that moment or Vivaldi has put the tab to sleep in the background. Additionally, with my method, killing your "manager" process also causes the tab to close instead of leaving it sticking around. And finally, you don't want another process to interfere with your communication by opening a server on your chosen port before you do, so it is best to open the server right away and not only once you want the tab to close.
I have a receive location in my BizTalk 2010 project, and sometimes that receive location will receive an empty file. The receive pipeline is PassThruReceive. We then have a Send Port that has a filter for that Receive Port Name. So all we are doing is moving the file from the receive location to the send location.
The issue that I'm running into is that in the event that we get an empty file in the receive location, my client wants the file to still be moved to the send port. I know that out of the box, the FILE adapter discards empty files and writes an event to the Event Log stating that it was deleted.
I have followed articles that show a custom FILE adapter accomplishing this task, and I have had some success with it: the file is picked up, received by BizTalk, and the Send Port successfully sends the file. However, even with this solution I'm running into an issue on the receive side where the file is locked and cannot be deleted. I have followed various articles on this subject, and I get the same issue every time.
My question is: even though batchMessage.Message.BodyPart.Data.Close(); is being called, the stream is still locked. Is there any way for me to find where else BizTalk may be locking the file? Is there any other way of handling this?
One of the articles that I followed is located here: http://biztalkwithshashikant.blogspot.com/2011/04/processing-empty-files-in-biztalk.html
It seems to me that you are running into issues when running your custom FILE adapter multi-server. I bet you are running more than 1 server in the BizTalk Group?
I haven't done this myself, but I heard that getting an adapter to run smoothly multi-server is one of the hardest things to do in BizTalk. The trick is to find a way to be able to share the load between multiple instances of the same BizTalk host.
Do you still have the same problem when running the instance only on 1 server instead of 2?
In the custom pipeline component, in the method

    IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)

you should return pInMsg (not null), with the BodyPart stream positioned at its end. If you return null, BizTalk silently discards the message. You don't need to close the stream, but you do need to move its position to the end so that BizTalk knows you read and processed all of it.
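To make that concrete, here is a minimal sketch of such a component's Execute method (my own illustration, not code from the linked article; the class name is made up, and the other pipeline-component interfaces a real component must implement are omitted):

    using System.IO;
    using Microsoft.BizTalk.Component.Interop;
    using Microsoft.BizTalk.Message.Interop;

    // Hypothetical pass-through component; only the IComponent.Execute part is shown.
    // A deployable component also needs IBaseComponent, IComponentUI and IPersistPropertyBag.
    public class EmptyFileFriendlyComponent : IComponent
    {
        public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
        {
            IBaseMessagePart bodyPart = pInMsg.BodyPart;
            if (bodyPart != null)
            {
                Stream stream = bodyPart.GetOriginalDataStream();
                if (stream != null && stream.CanSeek)
                {
                    // Position the stream at its end so BizTalk knows it was fully consumed.
                    // Do not close it here; BizTalk still owns the stream.
                    stream.Seek(0, SeekOrigin.End);
                }
            }

            // Never return null: that makes BizTalk silently discard the message.
            return pInMsg;
        }
    }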
A way around this is to use the FTP adapter to pick up the files instead; the FTP adapter does not discard empty files.
It is quite possible that, to the system creating the files, it is still just a file location, while also being accessible via FTP.
My application allows the user to upload a CSV file that is processed, and its records are written to the database. This file can contain a very large number of records, for example 300,000, and in that case it may take up to half an hour to process them all. I would like my application not to freeze the page for that period, but to show progress and possibly errors. Even better, the user should be able to move to other pages and come back from time to time to check on the process.
By what means can I achieve that?
The approach we took to resolve a similar issue was as follows:
Upload the file using normal HTTP methods.
Save the file locally.
Submit the file to an asynchronous web service (.asmx). This process inserts a record that stores the status of the import and then starts importing the records. Once all records have been processed, it sets the status accordingly.
This all happens in a single flow. Because the WebMethod is asynchronous, it returns without waiting for the import to complete, and the import happens in the background.
You then redirect the user to a page that periodically checks the status of the asynchronous import until it is finished. You can also add extra information to this process, such as progress, by batching the records and updating further status fields accordingly.
This has worked well for us for many years now. I have not added any real detail as that will be specific to your implementation.
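For what it's worth, a very rough sketch of that shape could look like the following; the status table, the helper classes and the one-way SOAP attribute are my assumptions, not details from the original implementation:

    using System;
    using System.Web.Services;
    using System.Web.Services.Protocols;

    // Hypothetical ImportService.asmx code-behind.
    [WebService(Namespace = "http://example.com/import")]
    public class ImportService : WebService
    {
        // OneWay makes the call return to the caller immediately, which is one way
        // to get the fire-and-forget behaviour described above.
        [WebMethod]
        [SoapDocumentMethod(OneWay = true)]
        public void StartImport(Guid importId, string savedFilePath)
        {
            ImportStatusTable.Insert(importId, "Running");                // hypothetical data-access helper

            int processed = 0;
            foreach (var record in CsvImport.ReadRecords(savedFilePath))  // hypothetical CSV reader
            {
                RecordWriter.Write(record);                               // hypothetical DB insert
                if (++processed % 1000 == 0)                              // update progress in batches
                    ImportStatusTable.UpdateProgress(importId, processed);
            }

            ImportStatusTable.Update(importId, "Completed", processed);
        }

        // Polled by the status page.
        [WebMethod]
        public string GetStatus(Guid importId)
        {
            return ImportStatusTable.GetStatus(importId);                 // e.g. "Running, 12000 rows"
        }
    }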
I know we need a better solution, but we need to get this done this way for now. We have a long import process that is fired when you click the start-import button on an .aspx web page. It takes a long time, sometimes several hours. I changed the timeout and that's fine, but I keep getting a server connection reset error after about an hour. I'm thinking it's the ASP.NET lifecycle, and I'd like to know if there are settings in IIS I can change to make this lifecycle last longer.
You should almost certainly do the long-running work in a separate process (not just a separate thread).
Write a standalone program to do the import. Have it set a flag somewhere (a column in a database, for example) when it's done, and put lines into a logfile or database table to show progress.
That way your page just gets the job started. Afterwards, it can self-refresh once every few minutes, until the 'completed' flag is set. You can display the log table if you want to be sure it's still running and hasn't died.
This is pretty straightforward stuff, but if you need code examples they can be supplied.
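As a bare-bones illustration of that shape (the table names, columns and connection string are placeholders, and the actual record processing is elided):

    using System;
    using System.Data.SqlClient;

    // Hypothetical standalone importer (runs outside IIS). It logs progress rows
    // and sets a "completed" flag that the ASP.NET page polls for.
    class Importer
    {
        const string ConnStr = "Server=.;Database=Imports;Integrated Security=true"; // placeholder

        static void Main(string[] args)
        {
            int jobId = int.Parse(args[0]);   // the page creates the job row and passes its id

            Log(jobId, "Import started");
            // ... read the uploaded file and insert its records here ...
            Log(jobId, "Import finished");

            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "UPDATE ImportJobs SET Completed = 1 WHERE JobId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", jobId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        static void Log(int jobId, string message)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "INSERT INTO ImportLog (JobId, LoggedAt, Message) VALUES (@id, GETDATE(), @msg)", conn))
            {
                cmd.Parameters.AddWithValue("@id", jobId);
                cmd.Parameters.AddWithValue("@msg", message);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }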
One other point to consider, which might explain the behaviour, is that aspnet_wp.exe recycles if it consumes too much memory (do not confuse this with the page life cycle).
If your long process is taking up too much memory, ASP.NET will launch a new worker process and reassign all existing requests. I would suggest checking for this. You can do so by watching aspnet_wp in Task Manager and checking how much memory it is using; if the size suddenly drops back down, it has recycled.
You can change the memory limit in machine.config:
    <system.web>
        <processModel autoConfig="true"/>
    </system.web>
Use memoryLimit to specify the maximum allowed memory size, as a percentage of total system memory that the worker process can consume before ASP.NET launches a new process and reassigns existing requests. (The default is 60.)
    <system.web>
        <processModel autoConfig="true" memoryLimit="10"/>
    </system.web>
If this is what is causing the problem for you, the only solution might be to have a separate process for your long operation. You will need to set up IIS accordingly to give your other EXE the relevant permissions.
You can try running the process in a new thread. This means that the page will start the task and then finish its own processing, while the separate thread keeps running in the background. You won't be able to have any visual feedback this way, so you may want to log progress to a database and display that on a separate page instead.
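A minimal sketch of that idea, assuming a hypothetical RunImport routine that writes its own progress to a database table (note that a background thread dies if the worker process recycles, as described above):

    using System.Threading;

    // In the page's code-behind: start the work on a background thread and return.
    protected void btnStartImport_Click(object sender, System.EventArgs e)
    {
        string path = SaveUploadedFile();                        // hypothetical helper
        new Thread(() => ImportEngine.RunImport(path))           // hypothetical import routine
        {
            IsBackground = true
        }.Start();

        // Send the user to a page that reads the progress table.
        Response.Redirect("ImportProgress.aspx");                // hypothetical status page
    }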
You can also try running this as an AJAX call instead of a postback, which has different limitations...
Since you recognize this is not the way to do it, I won't list alternatives. I'm sure you know what they are :)
Extending the timeout is definitely not the way to do it. Response times should be kept to an absolute minimum. If at all possible, I would try to shift this long-running task out of the ASP.NET application entirely and have it run as a separate process.
After that it's up to you how you want to proceed. You might want the process to dump its results into a file that the ASP application can poll for, either via AJAX or having the user hit F5.
If it's taking hours, I would suggest a separate thread for this, and perhaps emailing a notification when the result is ready to download from the server (i.e. send a link to the finished result).
Or, if it is important to have a UI in the client's browser (if they are going to be hanging around for n hours), you could have a WebMethod that is called from the client (JavaScript) using setInterval to periodically check whether it's done.
I have an application that scans an input directory every five seconds; when a job (i.e. a file) is placed in the directory, the application reads it, processes it, and outputs one or more files in an output directory.
My question is, If I wanted to put a web based front end on this application, how would I wait for the processing to be complete?
User submits job
job is placed in input directory
......What am I doing on the web page here?
processing occurs
output file is generated
......How do I know that the job is finished?
The two solutions I came up with were:
poll the output directory every x seconds from the web page
use AJAX to poll a web service or page that reports back whether the output file is present
Is there a better design for the server? In other words, how would TCP or Named Pipes help in this situation? (Cannot use remoting due to a DCOM object.)
A solution we have used commercially in the past: the daemon writes to a log (typically a DB), with a date/time stamp, describing what it is doing, and the web frontend just shows the latest X entries from the log, with a little toggle to hide all of the "Looked in directory, no file found" messages. It worked fairly well, and we later upgraded it with AJAX (a timer that reloaded every 20 seconds).
I don't think Named Pipes are going to make it any easier to get the web client to poll automatically, but they might make the server better able to notify another process that the conversion has completed, and ultimately to queue a message to the web browser.
You could have the web client poll every few seconds to see whether the file processing has completed; alternatively, you could have something like Juggernaut "push" commands out to the page. Juggernaut uses Flash to open a socket in the web browser that is continually fed JavaScript from the server. It could be responsible for sending a command to alert the browser that the file has completed and then issue a redirect.
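To make the polling option concrete, here is a small sketch of a status endpoint the page could hit with AJAX every few seconds; the handler name, directory and file-name convention are all assumptions for the sketch:

    using System.IO;
    using System.Web;

    // Hypothetical JobStatus.ashx: reports whether the output file for a job exists yet.
    public class JobStatus : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // e.g. JobStatus.ashx?job=invoice123 (input validation omitted for brevity)
            string jobId = context.Request.QueryString["job"];
            string outputPath = Path.Combine(@"C:\Jobs\Output", jobId + ".out");

            context.Response.ContentType = "text/plain";
            context.Response.Write(File.Exists(outputPath) ? "done" : "pending");
        }

        public bool IsReusable { get { return true; } }
    }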