How to bill a SIP call session when concurrent calls are made - Asterisk

I am building a billing server for conference calls using Asterisk and A2Billing. Everything works when a single calling card makes one call, but when one calling card makes concurrent calls, the billing server does not perform well because of how A2Billing is programmed.
When a call is generated through A2Billing, A2B checks the credit once, matches it against the tariff plan, and converts the balance into a number of seconds. It never checks whether we are making a single call or another concurrent call from the same card.
If anyone knows how to do real-time billing with an Asterisk server, please help.

In that case, it might be best to use a billing system capable of true real-time billing. NibbleBill, a billing module available for FreeSWITCH, can do this. A2Billing will not cut it for you because it does things differently: it checks the account balance only at the beginning of the call and updates the account balance at the end of the call. Let's make the following assumptions:
The account's credit is $10
The average session time is 5 minutes
There are 10 participants in the conference
The following WILL happen:
If all the participants get connected in less than 5 minutes (the average session duration), then they will all hear (if audio prompts are activated): "You have $10"
At the end of the call, each participant can potentially consume a maximum of $10
If all the participants use up the $10, the final account balance will be a big fat negative -$90, which is the initial $10 minus $10 x 10 participants ($100).
A true real-time billing system will have a daemon running in the background, monitoring the lines. It will be able to disconnect any call when the total credit used by all the instances of a given account reaches ~0.
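A minimal sketch of such a daemon in Python. Everything here is hypothetical: `get_active_channels`, `rate_per_second`, and `hangup` stand in for whatever wraps your Asterisk AMI/ESL connection; they are not real A2Billing or NibbleBill APIs.

```python
import time

POLL_INTERVAL = 1.0  # seconds between balance checks

def billing_daemon(card_id, balance, get_active_channels, rate_per_second, hangup):
    """Charge every concurrent call on one card against a single shared balance."""
    while balance > 0:
        channels = get_active_channels(card_id)  # all live calls on this card
        # Each tick, every concurrent call draws from the same balance, so ten
        # participants burn the credit ten times as fast as one participant.
        for ch in channels:
            balance -= rate_per_second(ch) * POLL_INTERVAL
        time.sleep(POLL_INTERVAL)
    # Balance exhausted: disconnect every remaining call on the card.
    for ch in get_active_channels(card_id):
        hangup(ch)
```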

Related

Redis streams - free stuck messages in a consumer group without claiming

Let's say there are messages in a Redis consumer group that have not been processed for N seconds. I am trying to understand whether it is possible to free them and put them back for other members of the consumer group to see. I don't want to claim/process these stuck messages; I just want to make them accessible to other active members of the consumer group. Is this possible?
From what I have understood from the documentation, the options are XAUTOCLAIM or a combination of XPENDING and XCLAIM, and neither of these meets my requirements.
Essentially, I am trying to create a standalone process that can act as a monitor and make those messages visible to active consumers in the consumer group, and I am planning to use this standalone process to perform a similar activity for multiple consumer groups (around 30). So I don't want this standalone process to take other actions.
Please suggest how this can be designed.
Thanks!
Pending messages are removed from Redis' PEL (Pending Entries List) only when they are acknowledged: this is by design, and it allows the message re-distribution process to scale out to each individual consumer while avoiding the single point of failure of having one monitoring process like the one you described.
So, in short, what you are looking for can't be done, and I would suggest using XAUTOCLAIM or XPENDING / XCLAIM in your consumer processes instead.
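For completeness, a minimal sketch of that consumer-side alternative with redis-py; the stream, group, and consumer names are placeholders, and `handle` stands in for your processing logic:

```python
import redis

r = redis.Redis()
STREAM, GROUP, CONSUMER = "mystream", "mygroup", "consumer-1"
IDLE_MS = 30_000  # treat a message as stuck after 30 seconds

def handle(fields):
    ...  # your actual processing logic

def claim_and_process():
    """Take over long-idle pending messages and process them in this consumer."""
    start = "0-0"
    while True:
        # XAUTOCLAIM transfers ownership of messages idle > IDLE_MS to us
        # and returns a cursor for scanning the rest of the PEL.
        reply = r.xautoclaim(STREAM, GROUP, CONSUMER,
                             min_idle_time=IDLE_MS, start_id=start)
        next_cursor, messages = reply[0], reply[1]
        for msg_id, fields in messages:
            handle(fields)
            r.xack(STREAM, GROUP, msg_id)  # only the ack removes it from the PEL
        if next_cursor in (b"0-0", "0-0"):  # full pass over the PEL completed
            return
        start = next_cursor
```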

Ensure In-Process Records are Unique in ActiveMQ

I'm working on a system where clients enter data into a program, and the save action posts a message to ActiveMQ for more time-intensive processing.
We are running into rare occasions where a record is updated by a client twice in a row and a consumer on that ActiveMQ queue processes the two records at the same time. I'm looking for a way to ensure that messages containing records with the same identity are processed in order and only one at a time. To be clear: if records with IDs 1, 1, and 2 (in that order) are sent to ActiveMQ, the first 1 would process, then 2 (while the first 1 was still in process), and finally the second 1.
Another requirement (due to volume) is that the consumer be multi-threaded, so there may be 16 threads accessing that queue. This would have to be taken into consideration.
If you have multiple threads reading that queue and you want the solution to stay close to ActiveMQ, you have to think about how ordering constrains your scaling.
If you have multiple consumers, they may operate at different speeds, and you can never be sure which consumer goes before the other. The only way to guarantee global order is to have a single consumer (you can still achieve high availability by using exclusive consumers).
You can, however, segment the load in other ways. How depends a lot on your application. If you can create, say, 16 "worker" queues (or whatever your max consumer count would be) and distribute load to these queues while guaranteeing that requests from a single user always go to the same worker queue, message order will be preserved per user.
If you have no good way to divide users into groups, simply taking the userID mod MAX_CONSUMER_THREADS is a simple solution (sketched below).
There may be better ways to deal with this problem in the consumer logic itself, such as keeping track of a sequence number and postponing updates that arrive out of order (a scheduled delay can be used for that).
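A sketch of that producer-side routing in Python, using the stomp.py client (ActiveMQ speaks STOMP on port 61613 by default); the queue names, credentials, and record shape are assumptions:

```python
import json
import stomp  # pip install stomp.py

NUM_WORKER_QUEUES = 16  # match your maximum consumer thread count

conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)

def enqueue(record):
    """Route a record so all messages for one user land on the same queue."""
    # userID mod NUM_WORKER_QUEUES: a given user always maps to the same
    # worker queue, and one consumer per queue preserves per-user order.
    queue = "/queue/worker.%d" % (record["userID"] % NUM_WORKER_QUEUES)
    conn.send(destination=queue, body=json.dumps(record))

enqueue({"userID": 42, "recordID": 1, "payload": "..."})
```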

Overcome Marketo's quota limits

As far as I know, Marketo limits the number of REST API requests to 10,000 per day. Is there a way to overcome this limit? Can I pay and get more of those?
I found out that REST API requests and SOAP API requests are counted separately, but I'm trying to find a solution that is limited to the REST API.
Moreover, in order to get an access token I need to sacrifice a request, so I need to know how long this access token stays alive in order to save as many requests as possible.
You can increase your limit just by asking your account manager. It costs about $15K per year to increase your limit by 10K API calls.
Here are the default limits in case you don't have them yet:
Default Daily API Quota: 10,000 API calls (counter resets daily at 12:00 AM CST)
Rate Limit: 100 API calls in a 20 second window
Documentation: REST API
You'll want to ask your Marketo account manager about this.
I thought I would update this with some more information since I get this question a lot:
http://developers.marketo.com/rest-api/
Daily Quota: Most subscriptions are allocated 10,000 API calls per day (which resets daily at 12:00 AM CST). You can increase your daily quota through your account manager.
Rate Limit: API access per instance limited to 100 calls per 20 seconds.
Concurrency Limit: Maximum of 10 concurrent API calls.
For the Daily limit:
Option 1: Call your account manager. This will cost you $'s. For a client I work for we have negotiated a much higher limit.
Option 2: Store and batch your records. For example, you can send a batch of 300 leads in a single lead insert/update call, which means you can insert/update 3,000,000 leads per day.
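A sketch of that batching in Python against Marketo's lead endpoint (POST /rest/v1/leads.json, which accepts up to 300 records per call); the Munchkin ID is a placeholder and error handling is minimal:

```python
import requests

MUNCHKIN = "123-ABC-456"  # placeholder Marketo instance ID
BASE = f"https://{MUNCHKIN}.mktorest.com"
BATCH_SIZE = 300  # maximum leads per insert/update call

def sync_leads(leads, token):
    """Push N leads in ceil(N / 300) API calls instead of N calls."""
    for i in range(0, len(leads), BATCH_SIZE):
        payload = {"action": "createOrUpdate",
                   "input": leads[i:i + BATCH_SIZE]}
        resp = requests.post(f"{BASE}/rest/v1/leads.json",
                             params={"access_token": token},
                             json=payload)
        resp.raise_for_status()
```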
For the Rate limit:
Option 1 will probably not work; your account manager will be reluctant to change this unless you are a very large company.
Option 2: You need to add some governance to your code. There are several ways to do this, including queues, timers with a counter, etc. If you make multi-threaded calls, you will need to take concurrency into account as well (a sketch combining both limits follows below).
Concurrent call limit:
You have to limit your concurrent threads to 10.
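A sketch of that governance in Python: a sliding-window limiter for the 100-calls-per-20-seconds rate limit plus a semaphore for the 10-call concurrency limit (thresholds taken from the limits quoted above):

```python
import threading
import time
from collections import deque

MAX_CALLS, WINDOW = 100, 20.0                  # 100 calls per 20-second window
_concurrency = threading.BoundedSemaphore(10)  # at most 10 calls in flight
_timestamps, _lock = deque(), threading.Lock()

def throttled(call, *args, **kwargs):
    """Run `call` only when both the rate and concurrency limits allow it."""
    while True:
        with _lock:
            now = time.monotonic()
            # Drop timestamps that have aged out of the 20-second window.
            while _timestamps and now - _timestamps[0] > WINDOW:
                _timestamps.popleft()
            if len(_timestamps) < MAX_CALLS:
                _timestamps.append(now)
                break
        time.sleep(0.1)  # window is full: wait for capacity
    with _concurrency:
        return call(*args, **kwargs)
```

Wrap every Marketo call in it, e.g. `throttled(requests.get, url, params=...)`.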
There are multiple ways to handle API Quota limits.
If you want to avoid hitting the API limit altogether, try to achieve your functionality through Marketo webhooks. Marketo webhooks are not subject to the API limits, but they have their own cons; please research them.
You may use the REST API, but design your strategy to batch the maximum number of records into a single payload instead of smaller chunks: e.g. instead of sending 10 different API calls with 20 records each, accumulate the maximum allowed payload and call the Marketo API once.
The access token is valid for 1 hour after authenticating.
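Given that 1-hour lifetime, cache the token and refresh only when it nears expiry. A sketch against Marketo's identity endpoint (GET /identity/oauth/token), whose response reports the remaining lifetime as `expires_in` seconds; the Munchkin ID and credentials are placeholders:

```python
import time
import requests

MUNCHKIN = "123-ABC-456"                                  # placeholder instance ID
CLIENT_ID, CLIENT_SECRET = "client-id", "client-secret"   # placeholders

_token, _expires_at = None, 0.0

def get_token():
    """Return a cached access token, refreshing 60 s before it expires."""
    global _token, _expires_at
    if _token is None or time.monotonic() > _expires_at - 60:
        resp = requests.get(
            f"https://{MUNCHKIN}.mktorest.com/identity/oauth/token",
            params={"grant_type": "client_credentials",
                    "client_id": CLIENT_ID,
                    "client_secret": CLIENT_SECRET})
        resp.raise_for_status()
        data = resp.json()
        _token = data["access_token"]
        _expires_at = time.monotonic() + data["expires_in"]
    return _token
```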
Marketo's Bulk API can be helpful with regard to rate limiting: once you have the raw activities, the updates etc. on the lead object can be done without pinging Marketo for each lead: http://developers.marketo.com/rest-api/bulk-extract/. However, be aware of the export limits you may run into when bulk-exporting leads + activities. Currently, Marketo only counts the size of an export against the limit once the job has completed, which means that as a workaround you can launch a maximum of 2 concurrent export jobs (which together sum to more than the limit) at the same time. Marketo will not kill a running job once the limit has been reached, as long as the job was launched before the limit was reached.
Marketo has recently upgraded the maximum limits:
Daily Quota: Subscriptions are allocated 50,000 API calls per day (which resets daily at 12:00AM CST). You can increase your daily quota through your account manager.
Rate Limit: API access per instance limited to 100 calls per 20 seconds.
Concurrency Limit: Maximum of 10 concurrent API calls.
https://developers.marketo.com/rest-api/

Send Email at a faster rate in ASP.NET

I have created an application to send bulk emails in ASP.NET.
You can check these articles of mine on the same topic:
http://www.c-sharpcorner.com/UploadFile/cd7c2e/send-bulk-mails-using-smtp-configuration-part-2/
http://www.c-sharpcorner.com/UploadFile/cd7c2e/send-bulk-email-from-yahoo-and-hotmail-using-Asp-Net/
The problem is that these emails are taking a long time to send: nearly 800 emails in 1 hour. I want these mails to go out at a much faster rate. Could anyone help me by showing an example or telling me what I can do to achieve this?
You can optimize your code to use async (best) or multiple threads (easier for some). You won't want to do too many at once, as you can theoretically overload the SMTP server. You can also look at using a different SMTP server that may provide faster performance, or set up a pool of SMTP servers and use many of them to send in parallel.
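The question is about ASP.NET, but the pattern is language-agnostic; here is a sketch of a bounded worker pool in Python, with the SMTP host and credentials as placeholders. Each worker opens its own SMTP session, since sessions are not safe to share across threads (reusing one connection per worker would be a further optimization):

```python
import smtplib
from concurrent.futures import ThreadPoolExecutor
from email.message import EmailMessage

SMTP_HOST, SMTP_PORT = "smtp.example.com", 587  # placeholder server
POOL_SIZE = 8  # bounded: too many parallel sessions can overload the server

def send_one(recipient, subject, body):
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.login("user", "password")  # placeholder credentials
        smtp.send_message(msg)

def send_bulk(recipients, subject, body):
    # POOL_SIZE messages are in flight at any time instead of one.
    with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
        for r in recipients:
            pool.submit(send_one, r, subject, body)
```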
Another option is to use a more optimized third-party component. MailBee.NET says it is fast (no specifics, though), and from the description it certainly sounds like it has a lot of optimizations. It supports queued/pooled messages and can send messages directly (no SMTP server needed). Threaded code combined with direct message sending should be really fast.
http://www.afterlogic.com/mailbee-net/smtp-component
You should be able to get far more than 800 messages per hour; that's very slow. Back when I programmed in ColdFusion (many years ago), I remember one of the main features of their 6.1 release was improved mail handling that, in their tests, was able to send 1.2 million messages per hour.
NOT SPAM
For those people who think bulk mail is only for spam, think about companies and organizations that have a lot of members who do want real mail. Even a small bank would easily have 10,000 customers. Send them each their monthly statement, and at 800 emails per hour it'll take half a day just to send the mail.
In my particular case, we offer a product that includes an LMS for large universities. It's important for notifications to be delivered quickly, since many students will want to sign up for classes as soon as they get the notification that sign-ups are available. If we sent notifications to 2,000 students at 800 an hour, the people who happened to be first would have a huge advantage in course selection over the people who happened to be later in the list. That would not be acceptable.
I used to work for a large non-profit organization that had 4,000,000 members and we sent a monthly newsletter to members (not spam, subscribed newsletter). At that time we were using ColdFusion.

What are the Best Practices for Large-Scale SQL Inserts, in Reference to Ad Impressions?

I am working on a site where I will need to be able to track ad impressions. My environment is ASP.NET with IIS, using a SQL Server DBMS and potentially Memcached so that there are not as many trips to the database. I must also think about scalability, as I am hoping that this application becomes a global phenom (keeping my fingers crossed and working my ass off)! So here is the situation:
My Customers will pay X amount for Y Ad impressions
These ad impressions (right now, only text ads) will then be shown on a specific page.
The page is served from Memcached, lessening the trips to the DB
When the ad is shown, a "+1" tick needs to be added to the impression count in the database
So the dilemma is this: I need to be able to add that "+1" tick mark to each ad impression counter BUT I cannot run that SQL statement every time that ad is loaded. I need to somehow store that "+1" impression count in the session (or elsewhere) and then run a batch every X minutes, hours, or day.
Please keep in mind that scalability is a huge factor here. Any advice that you all have would be greatly appreciated.
I've seen projects deal with this by deploying SQL Server Express Edition on each web-farm server and relying on Service Broker to deliver the tracking audit to the central server. Because each server in the farm updates a local SQL instance, they can scale as high as the sky. Service Broker assures near-real-time, reliable delivery. I've seen web farms handle 300-400 requests per second on average, 24x7 over a long time, with the queued nature of Service Broker able to absorb spikes of 5,000-7,500 hits per second for hours on end and recover in reasonable time, without audit loss and with the backlog staying under control.
If you really expect to scale and become the next MySpace, then you should learn from how they do it, and queue based asynchronous, decoupled processing is the name of the game.
Something you can do is increment the counts in a less permanent store and periodically (every 1 minute, 5 minutes, hour...) sync them into your more reliable database.
If your temporary store goes down, you lose the hit counts for some short period of time; the net effect of this is that in a rare event of a malfunction, the people paying for ads get some free impressions.
You can send an "atomically increment this value by 1" command to memcached. You could also do something like write a line to a flat file every time an ad is displayed, and have your "every 5 minutes" sync job rotate the log and then count all the lines in the file that was just rotated out.
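A sketch of the memcached approach in Python with pymemcache; the key scheme and SQL are assumptions, and the `?` placeholder style depends on your database driver. `incr` is atomic on the server, and decrementing by exactly the amount just read means increments that arrive during the sync are not lost:

```python
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))

def record_impression(ad_id):
    """Called on every ad render: one atomic increment, no database trip."""
    key = f"impressions:{ad_id}"
    if mc.incr(key, 1) is None:  # incr returns None if the key doesn't exist
        mc.add(key, 1)           # first impression (slight race, fine for a sketch)

def flush_to_db(ad_ids, db_conn):
    """Periodic job (every few minutes): fold the counts into the database."""
    for ad_id in ad_ids:
        key = f"impressions:{ad_id}"
        count = mc.get(key)
        if count:
            mc.decr(key, int(count))  # keep any hits recorded since the get
            db_conn.execute(
                "UPDATE ads SET impressions = impressions + ? WHERE id = ?",
                (int(count), ad_id))
```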
