About Firebase Remote Config limitations

We're starting a new project using Firebase.
First of all, I want to put a static value (like the app version) into Remote Config so the app can check on the client side whether it is running a lower version.
I've tried to find the limits/quotas of Firebase Remote Config, such as traffic, connections, concurrent connections per month and so on, but I can't find any documentation about them.
Can anybody help me?

There aren't really any official public numbers, partly because the team reserves the right to change them in the future to better suit the needs of the service.
That said, Remote Config is designed to be free for your apps, no matter how popular they become. You shouldn't ever have to worry about concurrent connections or connections per month or anything like that, as long as your client behaves reasonably.
What does that mean? Well, personally, my recommendation would be not to set your cache expiration to anything less than 3 hours. If you need updates faster than that, then you should really start looking into the Realtime Database. Otherwise, you should be fine with Remote Config.
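For reference, here's a minimal sketch of what that looks like on Android; the parameter key "min_app_version" and the 3-hour cache expiration are just assumptions for illustration:

import com.google.firebase.remoteconfig.FirebaseRemoteConfig;

public class VersionCheck {
    // Fetch the minimum supported version from Remote Config, caching for 3 hours.
    public static void checkMinimumVersion() {
        FirebaseRemoteConfig remoteConfig = FirebaseRemoteConfig.getInstance();
        long cacheExpirationSeconds = 3 * 60 * 60; // don't hit the backend more often than every 3 hours
        remoteConfig.fetch(cacheExpirationSeconds).addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                remoteConfig.activateFetched();
                // "min_app_version" is a hypothetical key you would define in the Firebase console
                String minVersion = remoteConfig.getString("min_app_version");
                // compare minVersion with the installed version and prompt for an update if needed
            }
        });
    }
}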

Related

Need Firebase Database behaviour clarification when inside a Service

I am testing a feature which requires a Firebase database write to happen at midnight every day. Now it is possible that at this particular time, the client app might not be connected to the internet.
I have been using Firebase with persistence off as that can potentially cause issues of stale data in another feature of mine.
From my observation, if I disconnect the app before the write and keep it this way for a minute or so, Firebase eventually reconnects when I turn on the connectivity again and performs the write.
My main questions are:
Will this behaviour be consistent even if the connectivity is lost for quite a few hours?
Will Firebase timeout?
Since it is inside a forever running service, does it still need persistence to ensure that writes are not lost? (assume that the service does not restart).
If the service does restart, will the writes get lost?
I have some experience with this exact case, and I actually do NOT recommend the use of a background service for managing your Firebase requests. In fact, I wouldn't recommend managing Firebase requests at all (explained later).
Services, even though we can make them run forever, tend to get killed by the system quite a lot actually (unless you set their CPU priority to a higher level, but even then the system still might kill them).
If you make a Firebase write call (of any kind) and your service gets killed, the write will get lost, as you said - unless you create a sophisticated manager in which you store requests that haven't been committed to internal storage and load them up each time the service is restarted. But that is very dirty work to do, considering the fact that the Firebase developers took care of us and made .setPersistenceEnabled(true) :)
I know you mentioned you don't want to use it, but I STRONGLY advise you to do so. It works like a charm, no services required, and you don't have to worry at all about managing your write requests. Perhaps it would be better to solve the other issue you have in order to make this possible.
To sum up, here's what I would do in your case:
I would call the .setPersistenceEnabled(true) someplace at the beginning (extending the Application class and calling it from onCreate() is recommended)
I would use Android's AlarmManager and register a BroadcastReceiver to receive an alarm at midnight (repetitive or not - you decide)
Inside the BroadcastReceiver, I'd simply call a write function of Firebase and worry about nothing :) (a short sketch of all three steps follows below)
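Here's a minimal sketch of those three steps on Android; the class names, the "dailyWrites" node, and the value being written are placeholders for illustration:

import android.app.AlarmManager;
import android.app.Application;
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import com.google.firebase.database.FirebaseDatabase;
import java.util.Calendar;

// 1. Enable disk persistence before any other database call (Application subclass, its own file).
public class MyApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        FirebaseDatabase.getInstance().setPersistenceEnabled(true);
    }
}

// 2 + 3. Receiver (its own file, also declared in AndroidManifest.xml). Persistence queues the
// write locally if the device is offline and sends it when connectivity returns.
public class MidnightWriteReceiver extends BroadcastReceiver {

    // Call this once (from an Activity, for example) to schedule the daily midnight alarm.
    public static void scheduleMidnightWrite(Context context) {
        AlarmManager alarmManager = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        PendingIntent pendingIntent = PendingIntent.getBroadcast(
                context, 0, new Intent(context, MidnightWriteReceiver.class), 0);
        Calendar midnight = Calendar.getInstance();
        midnight.add(Calendar.DAY_OF_YEAR, 1);   // next midnight
        midnight.set(Calendar.HOUR_OF_DAY, 0);
        midnight.set(Calendar.MINUTE, 0);
        midnight.set(Calendar.SECOND, 0);
        alarmManager.setRepeating(AlarmManager.RTC_WAKEUP, midnight.getTimeInMillis(),
                AlarmManager.INTERVAL_DAY, pendingIntent);
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        FirebaseDatabase.getInstance()
                .getReference("dailyWrites")     // hypothetical node name
                .push()
                .setValue(System.currentTimeMillis());
    }
}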
To make sure I covered all of your questions:
will this behaviour be consistent....
No. Example scenario: it's midnight, your service has successfully received the call and is now trying to write to Firebase. If, say, the user has no connection until 6 AM, there is a very high chance that the system will kill the service during those 6 hours, and your write will get lost. A long flight, or staying in an area with no internet coverage - both are examples of risky scenarios that could break your app's consistency.
Will Firebase Timeout?
It definitely could, as mentioned. I wouldn't take the risk of shipping an 80-90% working app. Use persistence and have a 100% working app :)
I believe that covers the rest of the questions.
Good luck!

Multiple requests to server question

I have a DB with user account information.
I've scheduled a CRON job which updates the DB with any new user data it fetches from their accounts.
I was thinking that this may cause a problem since all requests are coming from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? Should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user accounts information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Are you fetching data for a user from a remote server using screen scraping or something?
If this is the case, you may want to set up a few different cron jobs and do it in batches. That way you reduce the load on the remote server and lower the chance of wherever you are getting this data from blocking your access.
Edit
Okay, so if you have not got permission to do scraping, obviously you are going to want to do it responsibly (no matter the site). Try to gather as much data as you can with as few requests as possible, and spread them out over the course of the whole day, or even during times that are likely to be low load. I wouldn't try to use a proxy; that wouldn't really help the remote server, and it would be a pain in the ass for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data so all the source traffic isn't from the same IP. Just an idea; otherwise just try to be a bit discreet.
Here are some tips from Jeff regarding the scraping of Stack Overflow, but I'd imagine that the rules are similar for any site (a small sketch of the first two tips follows after the list).
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less.
Identify yourself. Add something useful to the user-agent (ideally, a link to an URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
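To make the first two tips concrete, here's a minimal Java sketch of a polite fetch; the URL, the user-agent string, and the 15-minute spacing are just assumptions for illustration:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPInputStream;

public class PoliteFetcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint - prefer a JSON/RSS feed or a data dump over scraping HTML
        URL url = new URL("https://example.com/feed.json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip");              // tip 1: use GZIP
        conn.setRequestProperty("User-Agent",
                "AccountSyncBot/1.0 (+https://example.com/about-bot)");  // tip 2: identify yourself
        InputStream in = conn.getInputStream();
        if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
            in = new GZIPInputStream(in);
        }
        // ... read and parse the stream here ...
        in.close();
        Thread.sleep(15 * 60 * 1000); // tip 4: don't pull more often than every 15 minutes
    }
}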

What about using a microkernel for Node.js + NginX?

I'm not even sure if it would easily work, but for an upcoming project I may need to set up a WebSockets-only server. It would not have a database or memcache, or even serve static files; all it would need to do is work through some logic and update other clients.
The server may need to support 1~300000 clients simultaneously, so Node.js+NginX makes sense, but maybe not all the other features of a traditional web server (Apache, for example) are necessary...
Something like Minix sounds like it would work...
This may be exactly what you're looking for:
https://github.com/tmpvar/cluster-socket.io
It allows you to handle large amounts of requests across multiple node processes.
Remember you can always stop into #node.js and ask questions! Make sure to report back with your findings.

Bandwidth Monitoring in ASP.NET

Hi, we are developing a multi-tenant application in ASP.NET with a separate database for each tenant. One of the requirements is to monitor the bandwidth usage of each tenant.
I have tried to search but haven't found much help on the topic. We want to monitor exactly how much bandwidth is being used by each tenant, while each tenant can have its own top-level domain, a subdomain, or a combination of both.
So what are the available options? The ones I can think of are:
IIS log monitoring, i.e. a separate application which will calculate the bandwidth for each tenant.
Log each request and response per tenant from within the application and then calculate the total bandwidth usage based on that.
Use some third-party components, if available.
So what do you think will be the best approach, and is there any other way to do this?
Ok, here is an idea (that I have not tested - I'll leave that to you).
In global.asax, use one of these handlers (find the one that sees a valid final size):
Application_PostRequestHandlerExecute
Application_ReleaseRequestState
and get the size that you have sent with
Response.Filter.Length
Needless to mention, you get the filename of the call using
HttpContext.Current.Request.Path
These handlers are called on every single request, so you can read your size there and do the rest.
I must note that you need to test this idea first to see if it works, and maybe improve it. Also keep in mind that if you compress the pages on the server, this length is not the correct one, and you may need to do the compression in Global.asax yourself to get the actual length.
Hope this helps.
Well, since the IIS logs already contain the request size and response size, it doesn't seem like too much trouble to develop a small tool to parse them and calculate the total per day/week/month/whatever.
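As a rough illustration (not a production tool), here's a small Java sketch that sums sc-bytes and cs-bytes per cs-host from a W3C extended IIS log; it assumes those three fields are actually enabled in your IIS logging settings:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IisBandwidthTotals {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(args[0])); // path to one IIS log file
        int hostIdx = -1, sentIdx = -1, recvIdx = -1;
        Map<String, Long> bytesPerHost = new HashMap<>();
        for (String line : lines) {
            if (line.startsWith("#Fields:")) {
                // The #Fields directive tells us which column is which
                String[] fields = line.substring("#Fields:".length()).trim().split(" ");
                for (int i = 0; i < fields.length; i++) {
                    if (fields[i].equals("cs-host")) hostIdx = i;
                    if (fields[i].equals("sc-bytes")) sentIdx = i;
                    if (fields[i].equals("cs-bytes")) recvIdx = i;
                }
            } else if (!line.startsWith("#") && hostIdx >= 0 && sentIdx >= 0 && recvIdx >= 0) {
                String[] cols = line.split(" ");
                long total = Long.parseLong(cols[sentIdx]) + Long.parseLong(cols[recvIdx]);
                bytesPerHost.merge(cols[hostIdx], total, Long::sum);
            }
        }
        bytesPerHost.forEach((host, bytes) -> System.out.println(host + ": " + bytes + " bytes"));
    }
}

You could run something like this against each day's log file and aggregate the output per tenant however you need.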
Trying to segment traffic based on host is difficult in my experience. Instead, if you give each tenant their own IP(s) for the applications you should be able to find programs that will monitor bandwidth based on IP.
Addition: Is the structure in IIS that you have one website to rule them all for all tenants, and on login the system forks to the proper database? If so, this may create problems with respect to versioning, in that all tenants' sites will have to have exactly the same schema and would all need to be updated simultaneously whenever an application update requires a schema change.
Another structure, which sounds like what you may have, is that each tenant has their own website like so:
tenant1_site/appvirtualdir
tenant2_site/appvirtualdir
...
Here the appvirtualdir points to the same physical path for all tenants' sites. When all clients have the same application version, they are all using literally the same code. If you have this scenario and some sort of authentication, then you will need one IP per tenant anyway because of SSL: SSL will only bind to IP and port, unlike non-SSL, which will bind to IP, port and host. In that case, monitoring traffic based on IP will still be simpler and more accurate, as it could be done at the router or via a network monitor.

ASP.NET: What's the best way to produce a trial version for customers to download?

I've written an ASP.NET app that I hope to sell to businesses. I could host the trial, but it's designed to connect to the customers' data, so customers will certainly want to install it to do a successful evaluation.
I've never produced anything commercial before, so I'm looking for advice on how best to limit the trial. A 30-day trial seems most common; do you simply rely on the clock of the PC/server they install it on? Any other suggestions welcome. Please keep in mind this is an ASP.NET app, so it will be installed on their web server.
Thanks
Craig
I would just do it via the PC's clock. At the end of the day, they could just change the clock and continue to use your software, though that's probably not going to work in practice (i.e. most software actually uses the date/time for other things as well, and changing it is going to screw that up).
Generally, you can usually trust businesses more than you trust the general public. The liability of a business is much higher than that of an individual, so if it came to it, you could potentially sue them for quite a bit. That alone means most businesses will purchase licenses for all of their software: a few hundred (or even thousand) dollars for a software license is much better than risking getting sued.
When they sign up for the demo, make sure you get all of their contact details and so on.
I would set up a web service on your server to authenticate the demo application. The web service should get called periodically, and if the call fails, shut down the application. That way you have complete control over the trial (you can extend it or shut it down remotely).
You should give them some sort of key which they will place in your web.config that will identify them as a customer.
Make sure you take the usual precautions of encrypting / using hashes with both the key and the web service so it's not bypassed.
This sort of thing has been well covered on SO in the past.
You cannot make it unbreakable, but you can make it very difficult for the client to break your trial period.
One way to do it is to take the first run time and encrypt that info and store it either in your web.config or database. This has a weakness though: what do you do if the value is not present where you expect it to be?
Another option is to ping a webservice that you host. If the webservice says their trial is over then you can render the appropriate page to tell them that. This has the advantage that the webservice is beyond their control and cannot be messed with. It has the disadvantage that not every client will want to be allowing their web app to phone home, and there may be connectivity issues which would interfere with the functioning of your app.
So you might want to come up with a variety of options, and then implement a licencing module using the Provider pattern, so that you can swap in the licencing module most suitable for that client.
Put a counter in the web.config; of course, give the counter an unrelated name so the customer does not know what it is for. Every time they access the application you can increment the counter. Give them x number of log-ins.
If you want, you can encrypt the counter so that the customer cannot figure out that it is incrementing.
