I am a developer at Across Cultures - we provide online EAL (English as an Additional Language) support for learners in schools.
I've been looking at your Speech Services API and have something working for our requirements; however, we will need support for more than 20 concurrent connections to the API - currently we are seeing as many as 100+ concurrent users.
Can you tell me whether it is possible to increase the concurrent connections, how it affects the price, and whether it can auto-scale or we need to specify the number in advance?
Thanks,
Simon
The FAQ page https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/faq-stt explains the information we need in order to increase concurrency. Please don't post this information publicly (especially subscription keys etc.).
There is no additional cost to increase concurrency. The concurrency setting defines the upper limit; the service scales dynamically up to that limit.
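If it helps, below is a minimal sketch of how a client can queue its own requests so it stays under whatever limit has been granted, rather than letting the service throttle the excess. The cap of 20 and the raw HttpClient usage are illustrative assumptions, not the Speech SDK contract.

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static class CappedSpeechClient
{
    // Illustrative cap: the concurrency currently granted on the key.
    // Raise it once a higher limit has been approved.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(20);
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request)
    {
        await Gate.WaitAsync(); // queue locally rather than exceed the limit
        try
        {
            return await Http.SendAsync(request);
        }
        finally
        {
            Gate.Release();
        }
    }
}
```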
thx
Wolfgang
I am investigating an issue related to rate limiting on my server. The previous developer set up a spike arrest policy and a quota policy in Apigee. I have read the documentation, but I am unable to understand how the two policies work in parallel.
For example:
A client (web, mobile) calls the API, and more than 100 concurrent users access it. So which policy is applied: spike arrest or quota?
If anyone has real-world experience with this, please provide some insight.
Thanks
The particular behavior of the API proxy will depend on the placement of the two policies within its flows, but assuming a standard request flow with serial policies, generally the spike arrest policy will protect your back-end services in aggregate, while the quota policy will enforce rate limits on some chosen client-specific criteria. Thus one is a general overall safety protection for your business-logic back end (spike arrest), and the other is more for enforcing client-specific constraints as dictated by your end-to-end application design and expected use-case interactions (quota). Both are configurable, though, so the details of those configurations matter in the final analysis.
Comparison docs are here: https://docs.apigee.com/api-platform/develop/comparing-quota-spike-arrest-and-concurrent-rate-limit-policies
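As a toy sketch of the two semantics only (this is not Apigee's implementation): spike arrest smooths traffic by turning a rate such as 100ps into a minimum 10 ms gap between accepted requests, while quota counts calls per client over a longer window. The 100ps rate, the 10,000-per-day allowance, and the clientId key below are made-up configuration values.

```csharp
using System;
using System.Collections.Concurrent;

static class PolicySemantics
{
    // Spike arrest: enforce a minimum spacing between requests overall,
    // so a burst is rejected even if every client is within its quota.
    private static readonly TimeSpan MinGap = TimeSpan.FromMilliseconds(10); // ~100ps
    private static DateTime _lastAccepted = DateTime.MinValue;
    private static readonly object Sync = new object();

    public static bool SpikeArrestAllows(DateTime now)
    {
        lock (Sync)
        {
            if (now - _lastAccepted < MinGap) return false; // Apigee would return 429
            _lastAccepted = now;
            return true;
        }
    }

    // Quota: count calls per client identifier over a long window
    // (window reset is omitted to keep the sketch short).
    private const int DailyAllowance = 10000;
    private static readonly ConcurrentDictionary<string, int> Counts =
        new ConcurrentDictionary<string, int>();

    public static bool QuotaAllows(string clientId) =>
        Counts.AddOrUpdate(clientId, 1, (_, n) => n + 1) <= DailyAllowance;

    // In a serial flow the request must pass both checks, e.g.:
    // bool allowed = SpikeArrestAllows(DateTime.UtcNow) && QuotaAllows(clientId);
}
```

So with 100+ concurrent users, both can fire: the burst as a whole is shaped by spike arrest, while each individual client is still held to its own quota.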
Does anyone know how many concurrent requests per second can be made to the Sabre developer Bargain Finder Max API?
I have searched everywhere and all I can find is contact your rep to have it increased.
Our setup included 50 sessions, but that was not a hard limit. We were also able to extend it to 150 at no cost. As far as I know there is no limit on concurrent requests, but depending on what calls you make, you should keep your "look to book ratio" (usually 500:1) in mind.
I believe they are usually sold in bundles of 50 sessions per EPR but can be increased. I do not believe there is any way of viewing that information about a given EPR in any kind of tool, so your rep is the best place to go for that kind of information. Pricing depends on all sorts of contractual items, I believe.
There is a certain limit associated with the number of concurrent requests for the API. If it is exceeded, you will get the following error: USG_CONNECTOR_IS_BUSY. This means that the maximum number of concurrent requests for the API has been exceeded. Please contact your Sabre account manager to determine or increase your allocated concurrent request limit for this API. When this happens, wait at least 500 milliseconds and resend the request.
The allocations are usually increased in bundles of 50. Your Sabre account manager would be the best person who can provide specific details.
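As an illustration of the "wait at least 500 milliseconds and resend" guidance above, a rough retry wrapper might look like the following; the generic call delegate and the message-based error check are placeholders for whatever your SOAP client actually throws.

```csharp
using System;
using System.Threading.Tasks;

static class SabreRetry
{
    // Wrap any call that may fail with USG_CONNECTOR_IS_BUSY.
    public static async Task<TResponse> WithBusyRetryAsync<TResponse>(
        Func<Task<TResponse>> call, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await call();
            }
            catch (Exception ex) when (
                ex.Message.Contains("USG_CONNECTOR_IS_BUSY") && attempt < maxAttempts)
            {
                // Wait at least 500 ms before resending; back off on repeats.
                await Task.Delay(TimeSpan.FromMilliseconds(500 * attempt));
            }
        }
    }
}
```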
#PCM7,
There is no specific limit, as this depends on the commercial agreement between the travel agency and Sabre; the token generated for BFM is managed directly at Sabre.
At the agency where I work we have several tourism companies, and each one has a specific TPS allocation for BFM.
This information can be obtained directly from the Sabre account executive for the travel agency in question.
https://developer.sabre.com/docs/soap_apis/air/search/bargain_finder_max
There are several options for storing user info when dealing with ASP.NET Membership providers. I would like to ask whether they are comparable in terms of performance, especially ActiveDirectoryMembershipProvider and SqlMembershipProvider when there will be e.g. 100,000 users recorded.
Both providers can handle the workload. The question is whether the infrastructure underneath can handle it. An AD server with 100,000 accounts should be big enough to handle it.
So, the real question in my eyes is, do you write the app for an intranet and want to provide SSO functionality? Then, by all means, go with ActiveDirectory!
Your question is unanswerable, as "performance" depends greatly upon many factors: for instance, network speed, network latency, network saturation, the power of your AD server vs. your SQL Server, the disk subsystems in use in either, etc.
There is no way to say one way or the other without thoroughly evaluating each environment, and even at that point, you should just benchmark each and determine what works best for you.
In most cases, though, the decision between SQL and AD has nothing to do with performance and everything to do with the features offered by each. I would strongly doubt you have 100,000 users in your Active Directory, as that would cost millions of dollars in licensing.
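As a sketch of what "benchmark each" might look like in practice, the loop below times ValidateUser against whichever membership provider web.config wires up; the credentials are placeholders, and a realistic test would use a mix of accounts and some concurrency.

```csharp
using System;
using System.Diagnostics;
using System.Web.Security; // .NET Framework; provider comes from web.config

static class ProviderBenchmark
{
    public static void TimeValidateUser(int iterations = 1000)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Placeholder credentials; swap the AD and SQL providers in
            // config and compare the numbers for each environment.
            Membership.Provider.ValidateUser("testuser", "Pa55w0rd!");
        }
        sw.Stop();
        Console.WriteLine("{0} logins in {1} ms ({2:F2} ms each)",
            iterations, sw.ElapsedMilliseconds,
            (double)sw.ElapsedMilliseconds / iterations);
    }
}
```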
When an ASP.NET website has about 1,000 active users, it works well.
What should I do if the website has about 100,000 active users?
How do I upgrade my ASP.NET app to support a larger number of users?
Change the web app's architecture?
Or buy more web servers?
I just wonder how, in the real world, other people build ASP.NET websites that support millions of users. What's the app architecture of a website that supports that?
Any suggestion will be welcome.
First, make sure you're with a first-rate hosting provider.
Second, download a performance profiler (I always suggest Red Gate Performance Profiler) and profile your app. Find the bottlenecks and eliminate them. Repeat until you get your desired performance metric.
If your application queries a database or other web services, try to use asynchronous methods. Using async methods frees up the web server to handle many more client requests while it waits for a response from the database server or web service.
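For example, here is a minimal sketch of that pattern against SQL Server (the Orders table and the connection string are hypothetical): while the query is in flight, the request thread returns to the pool instead of blocking.

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

static class OrdersQuery
{
    // Async ADO.NET call: the thread is released while SQL Server works.
    public static async Task<int> CountOrdersAsync(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            await conn.OpenAsync();
            return (int)await cmd.ExecuteScalarAsync();
        }
    }
}
```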
You say it "works good" at the moment. It's impossible to know the point at which this may change without knowing a whole lot more about the nature of your traffic, your current setup, what else runs on the server, etc. It could be that it continues to "work good" with a million users as it is.
When you need to make changes (and slowly degrading performance will alert you), that's when you need to worry. And then, as Justin says, knowing the potential bottlenecks will give you pointers as to what solution you need.
Buying more servers is one strategy. So is changing the architecture. The easiest and most cost-effective is throwing more servers at it. It does depend a little on the current application architecture, but nothing that can't be easily overcome.
What I suggest is to load test your application. See what happens as you increase the number of active users. Who knows, it might handle 100k active users; maybe it won't, but at least you will know the tipping point.
In regards to what you should do, that really depends on your business needs. If your company has the $$ and this is a core product, then it makes sense to architect a robust application. If it's not, maybe throwing hardware at the problem is good enough.
It would also help if you could define an active user. Is it someone who is visiting your site and has a session? Is it 100k concurrent requests to the server...?
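A dedicated load-testing tool is the right way to measure this, but even a crude ramp like the sketch below (the URL and user counts are placeholders) will show where response times start to degrade.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

static class LoadRamp
{
    private static readonly HttpClient Http = new HttpClient();

    // Fire "users" simultaneous GETs and report the slowest response;
    // step the count up per run until the worst time blows out.
    public static async Task ProbeAsync(string url, int users)
    {
        var tasks = Enumerable.Range(0, users).Select(async _ =>
        {
            var sw = Stopwatch.StartNew();
            using (await Http.GetAsync(url)) { }
            return sw.Elapsed;
        });
        var times = await Task.WhenAll(tasks);
        Console.WriteLine("{0} users: worst {1:F0} ms", users,
            times.Max().TotalMilliseconds);
    }
}
```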
In terms of hardware scaling: scaling up or scaling out.
In terms of software scaling: profile your app.
I've used ASP.NET profiles (using the AspNetSqlProfileProvider) for holding small bits of information about my users. I started to wonder how it would handle a robust profile for a large number of users. Does anyone have experience using this on a large website with a large number of simultaneous users? What are the performance implications? How about maintenance?
Querying this via SQL is, I have found, a bit tricky, but I have worked with clients that have scaled it up to a few hundred properties and 10K+ users without difficulty. Granted, that is not a lot of users, but it is working thus far.
I think it really depends on the specific project and your exact needs when it comes to working with the profile information. Do you need to query it regularly via SQL? Do you just need it for user display only? These types of things might help provide a more solid answer for your needs.
The SQL provider's performance is closely tied to big-iron throughput: it is more or less directly proportional to a single SQL Server's ability to handle the query volume. Scale-up is the only option, so as such it's not really five-nines robust out of the box.
You'll have to figure out whether you need scale-out performance and availability, e.g. through partitioning, replication, redundancy, etc., and at what cost to performance. Some of these capabilities are possible as is; the current implementation is more aimed at the mid-market and enterprise.
The good thing is you can plug in your own implementation of the profile provider and then attach it to services and systems with the capabilities outlined above.
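As a rough skeleton of what that involves (the in-memory store is a stand-in for whatever partitioned/replicated backend you attach, and the delete/search members are stubbed to keep the sketch short):

```csharp
using System;
using System.Collections.Concurrent;
using System.Configuration;
using System.Web.Profile;

public class CustomStoreProfileProvider : ProfileProvider
{
    // Stand-in store; a real provider would talk to a scaled-out backend.
    private static readonly ConcurrentDictionary<string, string> Store =
        new ConcurrentDictionary<string, string>();

    public override string ApplicationName { get; set; }

    public override SettingsPropertyValueCollection GetPropertyValues(
        SettingsContext context, SettingsPropertyCollection collection)
    {
        var user = (string)context["UserName"];
        var values = new SettingsPropertyValueCollection();
        foreach (SettingsProperty prop in collection)
        {
            var value = new SettingsPropertyValue(prop);
            string stored;
            if (Store.TryGetValue(user + ":" + prop.Name, out stored))
                value.SerializedValue = stored;
            values.Add(value);
        }
        return values;
    }

    public override void SetPropertyValues(
        SettingsContext context, SettingsPropertyValueCollection collection)
    {
        var user = (string)context["UserName"];
        foreach (SettingsPropertyValue value in collection)
        {
            if (value.IsDirty)
                Store[user + ":" + value.Name] = (string)value.SerializedValue;
        }
    }

    // Remaining abstract members stubbed for brevity; a production provider
    // must implement delete, search, and inactive-profile housekeeping.
    public override int DeleteProfiles(ProfileInfoCollection profiles) => throw new NotImplementedException();
    public override int DeleteProfiles(string[] usernames) => throw new NotImplementedException();
    public override int DeleteInactiveProfiles(ProfileAuthenticationOption option, DateTime since) => throw new NotImplementedException();
    public override int GetNumberOfInactiveProfiles(ProfileAuthenticationOption option, DateTime since) => throw new NotImplementedException();
    public override ProfileInfoCollection GetAllProfiles(ProfileAuthenticationOption option, int pageIndex, int pageSize, out int totalRecords) => throw new NotImplementedException();
    public override ProfileInfoCollection GetAllInactiveProfiles(ProfileAuthenticationOption option, DateTime since, int pageIndex, int pageSize, out int totalRecords) => throw new NotImplementedException();
    public override ProfileInfoCollection FindProfilesByUserName(ProfileAuthenticationOption option, string usernameToMatch, int pageIndex, int pageSize, out int totalRecords) => throw new NotImplementedException();
    public override ProfileInfoCollection FindInactiveProfilesByUserName(ProfileAuthenticationOption option, string usernameToMatch, DateTime since, int pageIndex, int pageSize, out int totalRecords) => throw new NotImplementedException();
}
```

Register it under the profile/providers section in web.config, just as you would the built-in SQL provider.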
We wrote a custom authn/authz and profile provider and strapped it to a large AD LDS LDAP cluster across 3 datacenters. We're in the Comscore Top 10, so you could say we deal with a good slice of the internet every day: thousands of profile queries per second and hundreds of millions of profiles. It can scale with good planning, engineering, and operations.