I have created an application to send bulk emails in ASP.NET.
You can check my articles on the same topic:
http://www.c-sharpcorner.com/UploadFile/cd7c2e/send-bulk-mails-using-smtp-configuration-part-2/
http://www.c-sharpcorner.com/UploadFile/cd7c2e/send-bulk-email-from-yahoo-and-hotmail-using-Asp-Net/
The problem is that these emails are taking a long time to send: nearly 800 emails in an hour. I want these mails to go out at a much faster rate. Could anyone help me by showing an example or telling me what I can do to achieve this?
You can optimize your code to use async (best) or multiple threads (easier for some). You won't want to try to do too many at once, as you can theoretically overload the SMTP server. You can also look at using a different SMTP server that may provide faster performance, or set up a pool of SMTP servers and use many to send in parallel.
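For example, here's a rough sketch of throttled parallel sending in C# (the host name and the concurrency cap of 8 are placeholder assumptions you'd tune against your server's limits):

using System.Collections.Generic;
using System.Net.Mail;
using System.Threading;
using System.Threading.Tasks;

public static class BulkSender
{
    public static async Task SendAllAsync(IEnumerable<MailMessage> messages)
    {
        // Cap concurrency so we don't overload the SMTP server (8 is a guess; tune it).
        var throttle = new SemaphoreSlim(8);
        var tasks = new List<Task>();
        foreach (var message in messages)
        {
            await throttle.WaitAsync();
            tasks.Add(Task.Run(async () =>
            {
                try
                {
                    // SmtpClient is not safe for concurrent sends, so use one per task.
                    using (var client = new SmtpClient("smtp.example.com")) // placeholder host
                        await client.SendMailAsync(message);
                }
                finally
                {
                    throttle.Release();
                }
            }));
        }
        await Task.WhenAll(tasks);
    }
}

Even with a conservative cap this should push throughput well past 800 per hour, because the SMTP round-trips overlap instead of queueing one behind another.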
Another option is to use a more optimized third-party component. MailBee.NET says it is fast (no specifics, though), and from the description it certainly sounds like it has a lot of optimizations. It supports queued/pooled messages and can send messages directly (no SMTP server needed). Threaded code combined with direct message sending should be really fast.
http://www.afterlogic.com/mailbee-net/smtp-component
You should be able to get far more than 800 messages per hour. That's very slow. Back when I used to program in ColdFusion (many years ago) I remember one of the main features in their 6.1 release was improved mail handling that in their tests was able to send 1.2 million messages per hour.
NOT SPAM
For those people who think bulk mail is only for spam, think about companies and organizations that have a lot of members who do want real mail. Even a small bank would easily have 10,000 customers. Send them each their monthly statement, and at 800 emails per hour it will take more than half a day just to send the mail.
In my particular case, we offer a product that includes an LMS for large universities. It's important for the notifications to be delivered quickly, since many students will want to sign up for classes as soon as they get the notification that sign-ups are available. If we sent notifications to 2,000 students at 800 an hour, the people who happened to be first would have a huge advantage in course selection over the people who happened to be later in the list. That would not be acceptable.
I used to work for a large non-profit organization that had 4,000,000 members and we sent a monthly newsletter to members (not spam, subscribed newsletter). At that time we were using ColdFusion.
Is it possible to limit the speed at which Google Firestore pushes writes made in an app to the online database?
I'm investigating the feasibility of using Firestore to store a data stream from an IoT device via a mobile device/bluetooth.
The main concern is battery cost. A new data packet arrives roughly every two minutes, and I'm concerned about the additional battery drain that an internet round-trip every two minutes, 24 hours a day, will cost. I would also want to limit updates to wifi connections only.
It's not important for the data to be available online in real time. However, it is possible for multiple sources to add to the same data stream in a two-way sync (i.e., the online DB and all devices end up with the merged data).
I'm currently handling that myself, but when I saw the offline capability of Firestore I hoped I could get all that functionality for free.
I know we can't directly control offline-mode in Firestore, but is there any way to prevent it from always and immediately pushing write changes online?
The only technical question I can see here has to do with how batch writes operate and, more importantly, cost. Simply put, a batch write of 100 writes costs the same as 100 individual writes. The batch API is not a way to avoid Firestore's write costs. The same goes for transactions, and the same for editing a document (that's a write). If you really want to avoid those costs, then you could store the values for the thirty minutes and let the client send the aggregated data in a single document. Though you mentioned you need the data to be immediate, so I'm not sure that's an option for you. Of course, this depends on what one interprets "immediate" as, relative to the timespan involved. In my opinion (I know those aren't really allowed here, but it's kind of part of the question), if the data is stored over months or years, 30 minutes is fairly immediate. Either way, batch writes aren't quite the solution I think you're looking for.
EDIT: You've updated your question, so I'll update my answer. You can do a local cache system and choose how you update it however you wish. That's completely up to you and your own code; writes aren't really automatic. So if you want to only send a data packet every hour, then you'd send it at that time. You'll likely want to do this in a transaction if multiple devices will write to the same stream, so one doesn't overwrite the other if they're sending at the same time. Other than that, I don't see Firestore being a problem for you.
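Here's a rough sketch of that aggregate-then-write idea using the Google.Cloud.Firestore C# SDK (the collection and field names and the per-window buffering are assumptions on my part; the mobile SDKs have the same transaction shape):

using System.Collections.Generic;
using System.Threading.Tasks;
using Google.Cloud.Firestore;

public static class StreamFlusher
{
    // Buffer readings locally, then flush the whole window as one document write
    // inside a transaction so two devices can't clobber each other's data.
    public static async Task FlushWindowAsync(
        FirestoreDb db, string deviceId, List<double> bufferedReadings)
    {
        DocumentReference doc = db.Collection("streams").Document(deviceId); // assumed schema
        await db.RunTransactionAsync(async tx =>
        {
            DocumentSnapshot snapshot = await tx.GetSnapshotAsync(doc);
            var merged = snapshot.Exists && snapshot.ContainsField("readings")
                ? new List<double>(snapshot.GetValue<List<double>>("readings"))
                : new List<double>();
            merged.AddRange(bufferedReadings);
            // One document write per window instead of one write per packet.
            tx.Set(doc, new Dictionary<string, object> { ["readings"] = merged },
                SetOptions.MergeAll);
        });
    }
}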
I'd like to create a system that 'appends' mails to each other.
Situation: every time an entity is changed, I'd like to send a mail to the subscribers of that entity.
But when the entity is changed 10 times within a short window (like 5-10 minutes), the subscribers don't need to be spammed with emails.
So I was thinking of creating a 'queue', and to be more precise, I was thinking about using the Azure Service Bus.
After searching through the documentation, I found two interesting properties:
SessionId => would be the Id of the entity
BatchFlushInterval (Client-side batching)
'If the client sends additional messages during this time period, it transmits the messages in a single batch'
This sounded perfect.
In this way I receive all the changes to the entity in a single batch and can construct a single e-mail to send.
But I can't seem to find this option anymore in the new "Azure Service Bus" NuGet package.
Having searched for alternatives, I have a feeling this is not a 'normal' practice.
Does someone have some experience in this field?
I would like to avoid having to use a cron job, but if this is the best solution, please let me know.
I know this is a really broad question and more of a 'need for information', so even comments with links would make me really happy.
Thanks in advance
Brecht
I don't think Message Sessions or BatchFlushInterval is the approach to take here. What you're looking for is to buffer messages in order to create a single notification rather than multiple ones. I'd personally go with receiving a batch from the Azure Service Bus and processing the batch to "append" notifications.
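Here's a rough sketch of that with the current Azure.Messaging.ServiceBus package (the queue name, the EntityId application property, and the digest helper are assumptions):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ChangeDigester
{
    public static async Task DrainAndNotifyAsync(ServiceBusClient client)
    {
        ServiceBusReceiver receiver = client.CreateReceiver("entity-changes"); // assumed queue
        // Pull whatever has accumulated; waits at most 30s for the batch to fill.
        IReadOnlyList<ServiceBusReceivedMessage> batch = await receiver.ReceiveMessagesAsync(
            maxMessages: 100, maxWaitTime: TimeSpan.FromSeconds(30));
        // Collapse all changes to the same entity into one notification.
        foreach (var group in batch.GroupBy(m => (string)m.ApplicationProperties["EntityId"]))
        {
            SendDigestEmail(group.Key, group.Count()); // assumed helper: one mail per entity
            foreach (var message in group)
                await receiver.CompleteMessageAsync(message);
        }
    }

    static void SendDigestEmail(string entityId, int changeCount) { /* compose one mail */ }
}

Run it from a timer or on demand and you get the "append" behaviour without sessions.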
I crawl some data from the web because there is no API. Unfortunately, it's quite a lot of data from several different sites, and I quickly learned that I can't just make thousands of requests to the same site in a short while... I want to fetch the data as fast as possible, but I don't want to cause a DOS attack :)
The problem is, every server has different capabilities and I don't know them in advance. The sites belong to my clients, so my intention is to prevent any possible downtime caused by my script. So no policy like "I'll try a million requests first, and if that fails, I'll try half a million, and if that fails..." :)
Is there any best practice for this? How does Google's crawler know how many requests it can make to the same site at the same time? Maybe they "shuffle their playlist", so there are not as many concurrent requests to a single site. Could I detect this stuff somehow via HTTP? Wait for a single request, measure the response time, roughly guess how well balanced the server is, and then somehow work out a maximum number of concurrent requests?
I use a Python script, but this doesn't matter much for the answer - just to let you know in which language I'd prefer your potential code snippets.
The Google spider is pretty damn smart. On my small site it hits me at one page per minute, to the second. They obviously have a page queue that is filled with timing and sites in mind. I also wonder if they are smart enough to avoid hitting multiple domains on the same server -- so some recognition of IP ranges as well as URLs.
Separating the job of queueing up the URLs to be spidered at a specific time from the actual spidering would be a good architecture for any spider. All of your spiders could use the urlToSpiderService.getNextUrl() method, which would block (if necessary) until the next URL is due to be spidered.
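A minimal sketch of that blocking hand-off, in C# (BlockingCollection does the waiting; the scheduler that decides when a URL is due and calls Enqueue is assumed to live elsewhere):

using System.Collections.Concurrent;

public class UrlToSpiderService
{
    readonly BlockingCollection<string> _dueUrls = new BlockingCollection<string>();

    // The scheduler calls this once a URL's delay has elapsed.
    public void Enqueue(string url) => _dueUrls.Add(url);

    // Spider workers call this; it blocks until a URL is due.
    public string GetNextUrl() => _dueUrls.Take();
}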
I believe that Google looks at the number of pages on a site to determine the spider speed. The more pages you have to refresh in a given time, the faster they need to hit that particular server. You certainly should be able to use that as a metric, although before you've done an initial crawl it would be hard to determine.
You could start out at one page every minute and then as the pages-to-be-spidered for a particular site increases, you would decrease the delay. Some sort of function like the following would be needed:
import java.time.Duration;

public Duration delayBetweenPages(String domain) {
    // pagesQueuedFor, refreshWindow and minimumDelay are assumed fields/helpers on the spider
    int queued = Math.max(pagesQueuedFor(domain), 1);
    Duration delay = refreshWindow.dividedBy(queued);  // spread the queued pages over the window
    if (delay.compareTo(Duration.ofMinutes(1)) > 0) return Duration.ofMinutes(1); // cap at one page/minute
    if (delay.compareTo(minimumDelay) < 0) return minimumDelay; // never go below the floor
    return delay;
}
Could I detect this stuff somehow via HTTP?
With the modern internet, I don't see how you can. Certainly, if the server is taking a couple of seconds to respond or returning 500 errors, then you should throttle way back, but a typical connection and download is sub-second these days for a large percentage of servers, and I'm not sure there is much to be learned from any stats in that area.
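If you do want to react to those two signals, a rough sketch of that defensive throttling might look like this in C# (the thresholds and bounds are arbitrary assumptions; you mentioned Python, and the same logic ports directly):

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

public class AdaptiveThrottle
{
    TimeSpan _delay = TimeSpan.FromMinutes(1); // start politely at one page per minute

    public async Task<string> FetchAsync(HttpClient http, string url)
    {
        await Task.Delay(_delay);
        var watch = Stopwatch.StartNew();
        HttpResponseMessage response = await http.GetAsync(url);
        watch.Stop();
        // Back way off on server errors or slow responses; otherwise creep faster.
        if ((int)response.StatusCode >= 500 || watch.Elapsed > TimeSpan.FromSeconds(2))
            _delay = TimeSpan.FromTicks(Math.Min(_delay.Ticks * 2, TimeSpan.FromMinutes(5).Ticks));
        else
            _delay = TimeSpan.FromTicks(Math.Max(_delay.Ticks / 2, TimeSpan.FromSeconds(1).Ticks));
        return await response.Content.ReadAsStringAsync();
    }
}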
We want to launch a vehicle tracking service: remote monitoring of assets through GPRS/SMS. This covers development, integration, and maintenance of GPS tracking software / a remote monitoring system (GSM/GPRS based) with the Google Maps API, MapInfo, .img, or the possibility to integrate any other map service; geo-fencing; geo-coding; reverse geo-coding; alerts on events; a user-friendly GUI; a dashboard; per-user billing; scrolling; fuel meter display; etc. For reference, have a look at gpsgate.com (tracking server solution).
How would we develop this, and how much time is needed? Any ideas?
First of all, you will need some sort of gateway. It must handle TCP connections from devices (use async sockets! =)), parse their data, and send it to storage.
The next big thing is the storage itself. If you want to support different devices, I would suggest using something like Apache Cassandra with keys based on date (only date, not time) and device UID.
The third part of the puzzle is how you're going to present the data to users. This is pretty simple: I'd suggest REST services.
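For the gateway piece, here's a bare-bones sketch of the async accept loop in C# (the line-based wire format, the Parse call, and the storage hand-off are stubbed assumptions):

using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public class TrackerGateway
{
    public async Task RunAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            TcpClient device = await listener.AcceptTcpClientAsync();
            _ = HandleDeviceAsync(device); // one async pump per device, no thread per socket
        }
    }

    async Task HandleDeviceAsync(TcpClient device)
    {
        using (device)
        using (var reader = new StreamReader(device.GetStream()))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                // Parse the device's wire format and hand off to storage (both assumed).
                // var fix = Parse(line); await storage.SaveAsync(fix);
            }
        }
    }
}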
This is from my own experience: on my last job I was the architect/lead on a very similar project.
It is now live and successfully handling 30k+ devices online, on one server for the apps (IIS), two for data, and two for TCP gateways.
If you want more specific info, feel free to ask=)
Honestly, it all depends on your skills and expertise.
A team that is well versed in designing complex systems like that could finish the task in 4-6 months.
The fact that you are asking such a question, rather than already having a ballpark estimate, suggests that you would probably be learning as you go. This could easily stretch to over a year, especially without prior experience managing such an overarching project.
I'm writing an application where the user will create an appointment, and instantly get an email confirming their appointment. I'd also like to send an email the day of their appointment, to remind them to actually show up.
I'm in ASP.NET (2.0) on MS SQL. The immediate email is no problem, but I'm not sure about the best way to handle the reminder email. Basically, I can think of three approaches:
1. Set up a SQL job that runs every night, kicking off SQL emails to people that have appointments that day.
2. Somehow send the email with a "do not deliver before" flag, although this seems like something I might be inventing.
3. Write another application that runs at a certain time every night.
Am I missing something obvious? How can I accomplish this?
Choice #1 would be the best option: create a table of emails to send, and update the table as you send each email. It's also best not to delete the entry but to mark it as sent; you never know when you'll have a problem one day and want to resend emails. I've seen this happen many times in similar setups.
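Here's a minimal sketch of that table sweep (the EmailQueue schema and the SendEmail helper are made up for illustration):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class ReminderSweep
{
    public static void SendDueEmails(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Assumed schema: EmailQueue(Id, Recipient, Subject, Body, SendOn, SentOn NULL)
            var due = new List<(int Id, string To, string Subject, string Body)>();
            var select = new SqlCommand(
                "SELECT Id, Recipient, Subject, Body FROM EmailQueue " +
                "WHERE SentOn IS NULL AND SendOn <= GETDATE()", conn);
            using (var rows = select.ExecuteReader())
                while (rows.Read())
                    due.Add(((int)rows["Id"], (string)rows["Recipient"],
                             (string)rows["Subject"], (string)rows["Body"]));

            foreach (var email in due)
            {
                SendEmail(email.To, email.Subject, email.Body); // assumed SmtpClient wrapper
                // Mark as sent rather than deleting, so failed runs can be found and resent.
                var mark = new SqlCommand(
                    "UPDATE EmailQueue SET SentOn = GETDATE() WHERE Id = @id", conn);
                mark.Parameters.AddWithValue("@id", email.Id);
                mark.ExecuteNonQuery();
            }
        }
    }

    static void SendEmail(string to, string subject, string body) { /* SmtpClient call */ }
}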
One caution: tightly coupling the transmission of the initial email to the web application can result in a brittle architecture (e.g., when the SMTP server is not available) and lost messages.
You can introduce an abstraction layer via MSMQ for both the initial and the reminder email, and have a service sweep the queue on a scheduled basis. The initial message can be flagged with an attribute that means "SEND NOW"; the reminder message can be flagged as "SCHEDULED". The sweeper simply needs to send any messages it finds that are "SEND NOW", or that are "SCHEDULED" and have a toBeSentDate <= the current date. Once a message is successfully sent, the unit of work can be concluded by deleting the message from the queue.
This approach ensures messages are not lost - and enables the distribution of load to off-peak hours by adjusting the service polling interval.
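A rough sketch of that sweeper against System.Messaging (the queue path, the label conventions, and the EmailJob body type are all assumptions):

using System;
using System.Messaging;

public class EmailSweeper
{
    public class EmailJob   // assumed serializable body type
    {
        public string To, Subject, Body;
        public DateTime ToBeSentDate;
    }

    public void Sweep()
    {
        using (var queue = new MessageQueue(@".\private$\emails"))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(EmailJob) });
            foreach (Message message in queue.GetAllMessages())
            {
                message.Formatter = queue.Formatter;
                var job = (EmailJob)message.Body;
                // "SEND NOW" goes immediately; "SCHEDULED" goes once its date arrives.
                if (message.Label == "SEND NOW" ||
                    (message.Label == "SCHEDULED" && job.ToBeSentDate <= DateTime.Now))
                {
                    Send(job);                      // assumed SmtpClient wrapper
                    queue.ReceiveById(message.Id);  // conclude the unit of work
                }
            }
        }
    }

    void Send(EmailJob job) { /* SmtpClient call */ }
}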
As Rob Williams points out, my suggestion of MSMQ is a bit of overkill for this specific question... but it is a viable approach to keep in mind when you start looking at problems of scale and you want (or need) to minimize or reduce database read/write activity (especially during peak processing periods).
Hat tip to Rob.
For every larger project, I usually also create a service that performs regular or periodic tasks.
The service updates its status and time of last execution somewhere in the database, so that the information is available for applications.
For example, the application posts commands to a command queue, and the service processes them at the scheduled time.
I find this solution easier to handle than SQL Server Tasks or Jobs, since it's only a single service that you need to install, rather than ensuring all required Jobs are set up correctly.
Also, as the service is written in C#, I have a more powerful programming language (plus libraries) at hand than T-SQL.
If it's really pure T-SQL stuff that needs to be handled, there will be an Execute_Daily stored procedure that the service calls on date change.
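The date-change hook can be as small as this (connection string handling is elided; Execute_Daily follows the naming convention above):

using System;
using System.Data;
using System.Data.SqlClient;

public class DailyTaskRunner
{
    DateTime _lastRunDate = DateTime.MinValue;

    // Called from the service's timer; runs the T-SQL work once per calendar day.
    public void Tick(string connectionString)
    {
        if (DateTime.Today == _lastRunDate) return;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("Execute_Daily", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        _lastRunDate = DateTime.Today;
    }
}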
Create a separate batch service, as others have suggested, but use it to send ALL of the emails.
The web app should record the need to send notifications in a database table, both for the immediate notice and for the reminder notice, with both records annotated with the desired send date/time.
Using MSMQ is overkill--you already have a database and a simple application. As the complexity grows, MSMQ or something similar might help with that complexity and scalability.
The service should periodically (every few minutes to a few hours) scan the database table for notifications (emails) to send in the near future, send them, and mark them as sent if successful. You could eventually leverage this to also send text messages (SMS) or instant messages (IMs), etc.
While you are at it, you should consider using the Command design pattern, and implement this service as a reusable Command executor. I have done this recently with a web application that needs to keep real estate listing (MLS) data synchronized with a third-party provider.
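A skeletal version of that reusable executor (the interface shape and the example command names are illustrative, not from the MLS project):

using System;
using System.Collections.Generic;

public interface ICommand
{
    DateTime DueAt { get; }
    void Execute();
}

public class CommandExecutor
{
    readonly List<ICommand> _pending = new List<ICommand>();

    public void Enqueue(ICommand command) => _pending.Add(command);

    // Called on each service wake-up; runs whatever has come due.
    public void RunDue()
    {
        foreach (var command in _pending.FindAll(c => c.DueAt <= DateTime.Now))
        {
            command.Execute(); // e.g. a SendReminderEmailCommand or a SyncListingsCommand
            _pending.Remove(command);
        }
    }
}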
Your option 2 certainly seems like something you are inventing. I know that my mail system won't hold messages for future delivery if you were to send me something like that.
I don't think you're missing anything obvious. You will need something that runs the day of the appointment to send emails. Whether that might be better as a SQL job or as a separate application would be up to your application architecture.
I would recommend the first option: use a SQL job or another application that runs automatically every day to send the e-mails. It's simple, and it works.
Microsoft Office has a delivery delay feature, but I think that is an Outlook thing rather than an Exchange/Mail Server thing, so you're going to have to go with option 1 or 3. Or option 4 would be to write a service. That way you won't have to worry about scheduled tasks to get the option 3 application to run.
If you are planning on having this app hosted at a cheap hosting service (like GoDaddy), then what I'd recommend is to spin off a worker thread in Global.asax at Application_Start and have it sleep, wake up, send emails, sleep...
Because you won't be able to run something on the SQL Server machine, and you won't be able to install your own service.
I do this, and it works fine.
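That approach looks roughly like this (the 15-minute interval and the SendDueEmails helper are assumptions; be aware that an app-pool recycle kills the thread until the next request restarts the app):

using System;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        var worker = new Thread(() =>
        {
            while (true)
            {
                SendDueEmails(); // assumed helper: query the table, send, mark as sent
                Thread.Sleep(TimeSpan.FromMinutes(15)); // arbitrary polling interval
            }
        });
        worker.IsBackground = true; // don't keep the app alive on shutdown
        worker.Start();
    }

    static void SendDueEmails() { /* SmtpClient + table scan */ }
}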