Meteor.logout() and Meteor.call() too slow - meteor

I have a web app and I'm having a problem with how long Meteor.logout() and Meteor.call() take. When I call Meteor.logout(), it takes about 30-40 seconds, and the same goes for Meteor.call(). About 200-250 clients use this system at the same time.
If a client has about 100-200 items on their screen, the delay is very long; with 10-20 items it is a little better. Each item receives new data every 5-10 seconds, at different times, so the screen is effectively live.
I don't get this problem when I run the same code against the same database on a different port where I am the only user.
I can't figure it out. What could the reason be? I need your ideas and help.

The logout function waits for a callback from the server, so there is probably something wrong with the way you have configured your server.
Run the same code on another machine; it should not happen there.

You can use this.unblock() in every method and publication.
By default, Meteor processes a client's requests one at a time: if one request is still being processed, it queues all the requests that come in behind it.
Your delay may be because some functions do heavier work and take more time, so every other request to the server has to wait until they finish.
Simply place this.unblock() at the start of every method and publication and they will stop blocking your other requests; a sketch is below.
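A minimal sketch of where the call goes, assuming a standard Meteor server; the method name items.recalculate, the publication name items.live, and the Items collection are hypothetical stand-ins for your own:

```js
import { Meteor } from 'meteor/meteor';
import { Items } from '/imports/api/items'; // hypothetical collection module

Meteor.methods({
  'items.recalculate'(itemId) {
    // Let later requests from this client start before this one finishes.
    this.unblock();
    // ...heavy work here...
    return Items.update(itemId, { $set: { recalculatedAt: new Date() } });
  },
});

// In newer Meteor releases this.unblock() is also available inside
// publications; older releases can get it from the meteorhacks:unblock package.
Meteor.publish('items.live', function () {
  this.unblock();
  return Items.find({}, { limit: 200 });
});
```

Note that the publication uses a regular function rather than an arrow function, so that this refers to the subscription.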
Thanks

I solved my problem.
Collection updates were happening on one side while Meteor was publishing data to clients on the other; as the number of clients increased, the server became unresponsive. Enabling MongoDB's oplog tailing solved it.
Thank you for your interest.
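A sketch of the relevant configuration, assuming MongoDB runs as a replica set (a requirement for oplog tailing); the database name and connection strings are placeholders:

```sh
# Point Meteor at the application database and at the replica set's
# "local" database, which is where the oplog lives.
export MONGO_URL="mongodb://localhost:27017/myapp"
export MONGO_OPLOG_URL="mongodb://localhost:27017/local"
meteor run   # or start the bundled app with these variables set
```

With oplog tailing enabled, the server learns about changes by reading the oplog instead of repeatedly re-running and diffing the queries behind every publication, which is what tends to collapse as the number of connected clients grows.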

There could be multiple reasons.
There could be unsubscription from collections, in which case the client and the server exchange the list of ids being unsubscribed.
You may have a reactive UI that suddenly gets overwhelmed by the amount of data being transferred and has to update itself (for example, the Angular digest cycle always runs after a Meteor sub/unsub).
The Chrome inspector's Network tab (WebSocket frames) is your best tool for understanding how soon Meteor.logout() fires and whether any messages are passed back and forth before the server returns the result of the logout request.
You may also use this.unblock() in your publications; that way your subscriptions run in parallel and don't block each other (see the sketch above).

Related

Trigger a series of SMS alerts over time using Twilio/ASP.NET

I didn't see a situation quite like mine, so here goes:
Scenario highlights: The user wants a system that includes custom SMS alerts. A component of the functionality is a way to identify a starting point based on user input, then send SMS messages with a personalized message at a pre-defined interval after the trigger. I've never used Twilio before and am noodling around with the implementation.
First Pass Solution: Using my Twilio account, I designated the .aspx page that receives the inbound triggering alert/SMS via GET. The receiving page declares and instantiates my SMSAlerter object within page load, which responds immediately with a first SMS and kicks off a System.Timers.Timer. Elementary, and functional to a point.
Problem: The alerts continue to be sent if the interval for the timer is a short time span. I tested it at a one-minute interval and it was successful. When I went to 10 minutes, the immediate SMS is sent and the first message 10 minutes later is sent, but nothing after that.
My Observation: Since there is no interaction with the resource after the inbound text, the Session times out if left at the default 20 minutes. Increasing the Session timeout doesn't work, and even if it did, it would not seem correct, since the interval will be on the order of hours, not minutes.
Using Cache to store each new SMSAlerter might be the way to go. For any SMSAlerter that is created, the schedule is used for roughly 12 hours and is replaced with a new SMSAlerter object when the same user notifies the system the following day. Is there a better way? Am I over/under-simplifying? I am not anticipating heavy traffic now (tens of users), but the user is thinking big.
Thank you for comments, suggestions. I didn't include the code, because the question is about design, not syntax.
I think your timer is going out of scope about 20 minutes after the original request, which kills the timer. I have a feeling that if you kept refreshing the aspx page it wouldn't happen - but obviously that doesn't help much.
You could launch a new thread that owns the System.Timers.Timer object so it stays alive and doesn't go out of scope when there are no follow-up requests to the server. But this isn't a great idea, to be honest - although it might help with understanding the issue.
Ultimately, you'll need some sort of continuously running service - you don't want to depend on the app pool for this - so I'd suggest a Windows Service running in the background to handle it, which is the suitable long-term solution; a sketch is below.
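A minimal sketch of that service, assuming .NET Framework with a reference to System.ServiceProcess; SmsAlertService and SendDueAlerts are hypothetical names, and the actual Twilio call is left as a stub:

```csharp
using System;
using System.ServiceProcess;
using System.Timers;

public class SmsAlertService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // The timer lives as long as the service, so it cannot be torn down
        // by a Session timeout or an app-pool recycle.
        _timer = new Timer(TimeSpan.FromMinutes(10).TotalMilliseconds);
        _timer.Elapsed += (s, e) => SendDueAlerts();
        _timer.AutoReset = true;
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void SendDueAlerts()
    {
        // Look up alerts whose next send time has passed, send them through
        // the Twilio REST API, and record when each one is next due.
    }

    public static void Main()
    {
        ServiceBase.Run(new SmsAlertService());
    }
}
```

The web page then only has to record the trigger (for example in a database row the service reads), rather than keeping any timer alive itself.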
Hope this helps!
(Edited slightly to make the windows service aspect clearer)

Multiple postbacks of ASPX page

I have an aspx page with a simple form to send emails to pre-defined lists of users. On the longer lists the page usually times out before the emails finish sending but this has never been an issue.
Today something weird happened and each user got four emails. In the log I could see three new threads crank up one at a time and start over sending from the beginning of the list.
Any ideas? I absolutely know I didn't intentionally refresh the Web page myself, and certainly not three times. But could the browser (IE8) have done it? Would it post again trying to re-establish a connection when it timed out? Or when I switched back to the browser window from another app? I have never seen behavior like this before.
First question would be whether there is any reason to do a long-running task synchronously, i.e. lock up a thread that should be serving web requests for something that could be done in the background, while the browser sits and waits for a response that it's probably not going to get. I'd look into running this asynchronously unless there's a very deliberate reason not to.
Secondly, have you looked into creating some kind of locking mechanism so that the process can't be started more than once? I have processes where I add a token to the application cache (and remove it when I'm done) so that if the token exists the process won't run again (the call to the async task isn't made), and that does the job. That way it doesn't matter how many clients call your code; you prevent things happening more often than they should. A sketch of that token is below.
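A minimal sketch of the token idea, assuming a WebForms code-behind; the cache key, button handler, and SendBulkEmail are hypothetical:

```csharp
using System;
using System.Threading;
using System.Web;
using System.Web.Caching;

public partial class SendEmails : System.Web.UI.Page
{
    private const string LockKey = "BulkEmailRunning";

    protected void SendButton_Click(object sender, EventArgs e)
    {
        // Cache.Add is atomic: it returns the existing entry if the key is
        // already present, or null if our token was just inserted.
        object existing = Cache.Add(LockKey, true, null,
            System.Web.Caching.Cache.NoAbsoluteExpiration,
            System.Web.Caching.Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, null);
        if (existing != null)
            return; // a send is already in progress; ignore the duplicate request

        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { SendBulkEmail(); }                       // hypothetical long-running task
            finally { HttpRuntime.Cache.Remove(LockKey); } // always release the token
        });
    }

    private void SendBulkEmail() { /* ... */ }
}
```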

Dealing with Long running processes in ASP.NET

I would like to know the best way to deal with long-running processes started on demand from an ASP.NET web page.
The process may consist of various steps (like uploading files to the server, running SSIS packages on them, executing stored procedures, etc.) and sometimes it could take up to a couple of hours to finish.
If I go for asynchronous execution using a WCF service, what happens if the user closes the browser while the process is running? How should the success or failure of the process be shown to the user? To handle this I chose one-way WCF service calls, but then I need to create a process table and store the result in it (along with the error messages if it fails at any step, and which steps completed successfully). That is additional overhead, because there are many such processes with various steps that users can invoke from the web page, the user needs to be kept aware of progress (in the simplest case, a status such as "process xyz running"), and once a process is done its output needs to be displayed to the user (for example, by running a stored procedure).
What is the best way to design the solution for this?
As I see it, you have three options:
1. Have a long-running page where the user waits for the response. If this takes several hours, you're going to have many usability problems, so I wouldn't even consider it.
2. Create a process table to store the results of operations. Run the service functions asynchronously and delegate logging the results to the service. There can be a page that the user refreshes which shows the latest results from this table.
3. If you really don't want to create a table, store all the current process details in the user's session state and have a "current processes" page as above. The possible issue is that the session might time out, or the web app might restart, and you'll lose all of this.
I can't see that number 2 is such a great hardship. You could make the table fairly generic to cover all types of processes: the process details could just be encoded as binary or XML and interpreted by the web application. You then have the most robust solution; a sketch follows.
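A minimal sketch of option 2, assuming a SQL Server table such as ProcessLog(ProcessId, Name, Status, Detail, UpdatedAt); the table, column names, and connection string are hypothetical:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

public static class ProcessRunner
{
    private const string ConnStr = "Server=.;Database=App;Integrated Security=true"; // placeholder

    // Called from the page: start the work in the background and hand back an
    // id that the "current processes" page can poll ProcessLog with.
    public static Guid Start(string name, Action work)
    {
        Guid id = Guid.NewGuid();
        Log(id, name, "Running", null);

        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { work(); Log(id, name, "Succeeded", null); }
            catch (Exception ex) { Log(id, name, "Failed", ex.Message); }
        });
        return id;
    }

    private static void Log(Guid id, string name, string status, string detail)
    {
        using (var cn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            @"MERGE ProcessLog AS t
              USING (SELECT @id AS ProcessId) AS s ON t.ProcessId = s.ProcessId
              WHEN MATCHED THEN UPDATE SET Status = @status, Detail = @detail, UpdatedAt = GETUTCDATE()
              WHEN NOT MATCHED THEN INSERT (ProcessId, Name, Status, Detail, UpdatedAt)
                   VALUES (@id, @name, @status, @detail, GETUTCDATE());", cn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@status", status);
            cmd.Parameters.AddWithValue("@detail", (object)detail ?? DBNull.Value);
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

In a real deployment the worker would normally live in the service rather than the web process (so an app-pool recycle can't kill it), but the table contract stays the same.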
I can't say what the best way would be, but using Windows Workflow Foundation for such long-running processes is definitely one way to go about it.
You can track the process to see what stage it is at, and even persist it if you have steps where it is awaiting user input, etc.
WF provides a lot of features out of the box (especially if your storage medium is SQL Server) and may be a good option to consider.
http://www.codeproject.com/KB/WF/WF4Extensions.aspx might give you some more insight into it.
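A minimal sketch of what that can look like, assuming .NET 4+ with a reference to System.Activities; RunSsisPackage and the package path are hypothetical, and a real long-running workflow would also configure an instance store (for example SqlWorkflowInstanceStore) so it can be persisted and tracked:

```csharp
using System;
using System.Activities;
using System.Activities.Statements;

// One hypothetical step of the process, written as a code activity.
public sealed class RunSsisPackage : CodeActivity
{
    public InArgument<string> PackagePath { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        string path = context.GetValue(PackagePath);
        // ...launch the SSIS package here and write progress to a tracking table...
        Console.WriteLine("Running package: " + path);
    }
}

public static class WorkflowHost
{
    public static void Start()
    {
        Activity process = new Sequence
        {
            Activities =
            {
                new RunSsisPackage { PackagePath = @"C:\etl\load.dtsx" }, // placeholder path
                // ...further steps: stored procedures, notifications, etc.
            }
        };

        var app = new WorkflowApplication(process);
        app.Completed = e => Console.WriteLine("Workflow finished: " + e.CompletionState);
        app.Run(); // returns immediately; the workflow runs on its own thread
    }
}
```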
I think you are on the right track. You should run the process asynchronously, store the execution somewhere (a table), and keep the status of the running process there.
Your user should see a "pending" label while the process is executing, and a "finished" label with the result once the process has completed. If the user closes the browser, she will see the result of her running process the next time she logs in.

How useful is Response.IsClientConnected?

I was wondering if anyone had experience they could share using the Response.IsClientConnected property as a performance optimization for asp.net websites.
The reason I ask is that I am a bit skeptical about how effective it would be in real-life scenarios. I understand the concept of checking the value before performing a large task, but I just can't see how useful this would be, since clients can disconnect at any point in time.
I think the main usage would be for optimizing the delivery of long processes. For example, if you had to generate a huge report or something, you might run the report in a separate thread and then periodically check whether the user is still connected. If not, you could kill the long-running process so that it is not running needlessly, since the user is no longer expecting a response.
This helps prevent users from starting long processes and then making the same request over and over because they think the page is slow. If you were not doing this kind of check, you could tax your server with all those requests even though only one of them still matters. You could handle that scenario by allowing only one user to run one long-running task, but the check also helps in a multi-user environment, by making sure you only spend time serving requests where the user is still connected and waiting for the response.
Note: I have never actually used this before, this is just based on my very basic understanding of what I have read.
I have used this extensively in my applications and it can give you a huge saving on resources.
Try this: create a page that needs some time to complete and try refreshing it many times before it completes. You will see that the requests are queued up for execution. Now imagine a user with a slow connection who refreshes the page again and again, thinking that will make it load (a very common situation); a site can run out of resources when all its users are connected and, for some reason, it becomes slow.
Now change it so that at the start of each page load (or earlier, at page init) you check HttpContext.Current.Response.IsClientConnected, and if the client is no longer connected you abort the request (for example with Response.End(), which throws a ThreadAbortException). You will see that your site responds much sooner.
I actually check whether the client is connected before any heavy action on the page, to prevent needless execution. In production environments I have seen that this check helps a lot, especially when the system becomes slow; a sketch is below.
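A minimal sketch of that check, assuming a WebForms page whose Page_Load does the heavy work; BuildReportChunk is hypothetical. The first pass through the loop covers the "check at page init" case, and the later passes cover the long-running part:

```csharp
using System;
using System.Web.UI;

public partial class ReportPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        for (int i = 0; i < 10000; i++)   // stand-in for the heavy report loop
        {
            // Every so often, see whether the browser is still waiting.
            if (i % 100 == 0 && !Response.IsClientConnected)
            {
                // Response.End() throws a ThreadAbortException and stops the
                // page, as described above; CompleteRequest() on the
                // application instance is a gentler alternative.
                Response.End();
                return;
            }
            BuildReportChunk(i);          // hypothetical heavy step
        }
    }

    private void BuildReportChunk(int i) { /* ... */ }
}
```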

browser timeouts while asp.net application keeps running

I'm encountering a situation where it takes a long time for ASP.NET to generate the reply with the web page (more than 2 hours), because the code-behind runs a very long, slow loop.
The browser (both IE and Firefox) stops waiting for the reply (after about an hour) and gives a generic "cannot display webpage" error (similar to what you would see if you tried to navigate to a non-existent server).
At the same time, the ASP.NET app keeps going (I can see it in the debugger) and eventually completes.
Why does this happen? Are there any settings in web.config that influence this? I'm hoping there's a timeout setting I'm missing that's causing this.
Maybe a setting in IE or Firefox? But I think they keep waiting while the server keeps the connection alive.
I'm experiencing this even when I launch app in debug mode (with compilation debug="true") on my local machine from VS (so it's not running on IIS, but on ASP.NET Dev Server).
I know it's bad that it takes so long to generate the page, but it doesn't matter at this stage. Speeding it up would take a lot of extra work and the delay doesn't really matter. This is used internally.
I realize I can redesign around this issue running logic to a background process and getting notified when it's done through AJAX, or pull it to a desktop app or service or whatever. Something along those lines will be done eventually, but that's not what I'm asking about right now.
Sounds like you're using IE and it is timing out while waiting for a response from the server.
You can find a Microsoft Knowledge Base article describing how to adjust this limit:
http://support.microsoft.com/kb/181050
CAUSE
By design, Internet Explorer imposes a time-out limit for the server to return data. The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for versions 5.x, 6, and 7. As a result, Internet Explorer does not wait endlessly for the server to come back with data when the server has a problem.
RESOLUTION
In general, if a page does not return within a few minutes, many users perceive that a problem has occurred and stop the process. Therefore, design your server processes to return data within 5 minutes so that users do not have to wait for an extensive period of time.
The entire paradigm of the Web is of request/response. Not request, wait two hours, response!
If the work takes that long to do, then have the page request trigger the work, and then not wait for it. Put the long-running code into a Windows service, and have the service listen to an MSMQ queue (or use WCF with an MSMQ endpoint). Have the page send work requests to this queue. The service will read a request, perhaps start up a new thread to process it, then write a response to another queue, a file, or whatever.
The same page, or a different "progress" page, can poll the response queue or file for responses and update the user, assuming the user still cares after two hours. A sketch of the queue hand-off is below.
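A minimal sketch of the queue hand-off, assuming MSMQ is installed and the project references System.Messaging; the queue path and WorkRequest type are hypothetical:

```csharp
using System;
using System.Messaging;

public class WorkRequest
{
    public Guid Id { get; set; }
    public string Payload { get; set; }
}

public static class WorkQueue
{
    private const string Path = @".\Private$\LongRunningWork"; // placeholder queue

    // Called from the ASP.NET page: enqueue the request and return immediately.
    public static Guid Enqueue(string payload)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        var request = new WorkRequest { Id = Guid.NewGuid(), Payload = payload };
        using (var queue = new MessageQueue(Path))
            queue.Send(request, request.Id.ToString());
        return request.Id; // the progress page can look results up by this id
    }

    // Called from the Windows service: block until work arrives, process it,
    // then record the outcome somewhere the progress page can read.
    public static void ProcessNext()
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkRequest) });
            var message = queue.Receive();               // blocks until a message arrives
            var request = (WorkRequest)message.Body;
            // ...do the two-hour job, then write the result to a table or file...
        }
    }
}
```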
For something that takes this long, I would figure out a way to kick it off via AJAX and then periodically check on its status. The background process should update some status variable on a regular basis and store its data in the cache or session when complete. When it completes and the browser detects this (via AJAX), have the browser do a real postback (or a GET by changing location.href), pick up the saved data, and generate the page. A sketch of a polling endpoint is below.
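A minimal sketch of the polling endpoint, assuming a generic handler (for example Status.ashx) and that the background job writes its progress into HttpRuntime.Cache under a job id; all names are hypothetical:

```csharp
using System.Web;

public class StatusHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string jobId = context.Request.QueryString["id"];
        object status = HttpRuntime.Cache["job-status-" + jobId]; // e.g. "running", "done"

        context.Response.ContentType = "text/plain";
        context.Response.Write(status ?? "unknown");
        // The browser polls this handler via AJAX; when it sees "done" it does
        // a real postback (or sets location.href) to render the finished page.
    }
}
```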
I have a process that can take a few minutes, so I spin off a separate thread and send the result via FTP. If an error occurs in the process, I send myself an error message including the stack trace. You may want to consider sending the results via email or some other channel than the browser, and using a separate thread as well.
