Uploading 100GB of data [duplicate] - firebase

We're thinking about using Firebase Hosting (which is awesome: SSL, control over redirects, easy CLI tool, etc.) to host our API docs. Currently, we count about 17k generated files. We did a test upload, and everything worked (pretty cool!). We're curious: is there a limit to the number of files we can deploy to Firebase Hosting?

I'm not aware of a limit on the number of files. But the zipped result (which is what actually gets uploaded) definitely has to be less than 2GB.
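If you want to sanity-check a site against that ceiling before deploying, a couple of ordinary shell commands are enough. A minimal sketch, assuming your hosting root is a directory named public (adjust to whatever your firebase.json points at):

    # number of files that would be deployed
    find public -type f | wc -l

    # rough uncompressed size; the upload is compressed, so this overestimates
    # what actually gets sent, but it shows whether you're anywhere near 2GB
    du -sh public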

Related

What's the purpose of .firebase/hosting.ALPHANUM.cache?

Today I deployed to Firebase Hosting. After deployment, I noticed the CLI creates a file at .firebase/hosting.ALPHANUM.cache, where ALPHANUM is some random baseNN-ish value.
Question
What is the purpose of this file?
More specifically, can I add this to .gitignore?
Or should I not?
This file is part of a new feature in Firebase Hosting that minimizes the size and time of a hosting deployment by uploading only the files that changed since the last deployment. It's new in CLI version 4.2.0, and you can read about it on GitHub.
As Frank suggested, you should definitely add the .firebase directory to your .gitignore or equivalent file, since it contains information that's not strictly part of your project and is likely not applicable to everyone sharing and contributing to your project's source code.
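If it helps, the ignore entry is just the directory itself; a minimal sketch of the relevant .gitignore line:

    # Firebase Hosting deploy cache; regenerated by the CLI on each deploy
    .firebase/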

Accessing Files from Firebase Storage vs Firebase Hosting?

So here is the scenario:
When I access files from Firebase Storage:
1. I get my file from the storage bucket (.html, .png, .zip, etc.); the files are small, no more than 2 MB.
2. I store that file in local storage so the app doesn't need to download it again and consume the server's bandwidth.
3. I use it from local storage every time the app needs it.
When I access files from Firebase Hosting:
1. I get my file from the nearest Firebase CDN edge (.html, .png, .zip, etc.); the files are small, no more than 2 MB.
2. I store that file in local storage so the app doesn't need to download it again and consume the server's bandwidth.
3. I use it from local storage every time the app needs it.
NOTE: I also have one file, version.txt, in the storage bucket (Firebase Storage). Based on the value in this file, I decide whether to fetch the file in Step 1 again or not; version.txt itself is fetched every time.
Questions:
1. How can I achieve similar versioning with Firebase Hosting? I know we deploy folders; can we get their version from the Firebase CDN, and if so, how?
2. With which method will I hit my LIMIT first? (Firebase becomes paid beyond a certain quota.)
Pros of Hosting: It will be faster. Link
PS:
1. My concern is bandwidth, not security.
2. Currently, I am using the basic (free) plan with its limits.
From the Firebase docs:
The Firebase Realtime Database stores JSON application data, like game state or chat messages, and synchronizes changes instantly across all connected devices.
Firebase Remote Config stores developer-specified key-value pairs to change the behavior and appearance of your app without requiring users to download an update.
Firebase Hosting hosts the HTML, CSS, and JavaScript for your website as well as other developer-provided assets like graphics, fonts, and icons.
Cloud Storage stores files such as images, videos, and audio as well as other user-generated content.
Storage has higher free tier limits, while Hosting might be a little faster. Note that all files on Hosting are publicly accessible, so if you need authentication or authorization, you should use Storage.
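If you want to keep the version.txt approach on Hosting, the same pattern works: deploy a small version.txt alongside the rest of your files and check it on startup. Below is a minimal client-side sketch, assuming a web client, a Hosting URL of https://your-project.web.app, and localStorage as the local cache; all of those names are placeholders, not something Firebase provides.

    // Compare a remote version marker on Hosting with a locally cached one,
    // and only re-download the asset when the version changes.
    const HOSTING_BASE = "https://your-project.web.app"; // placeholder Hosting URL

    async function getAssetWithVersionCheck(assetPath: string): Promise<string> {
      // version.txt is assumed to be deployed next to your other files
      const remoteVersion = (await (await fetch(`${HOSTING_BASE}/version.txt`)).text()).trim();
      const cachedVersion = localStorage.getItem("assetVersion");
      const cachedAsset = localStorage.getItem(`asset:${assetPath}`);

      if (cachedAsset !== null && cachedVersion === remoteVersion) {
        return cachedAsset; // up to date: serve from local storage, no bandwidth used
      }

      // stale or missing: download again and refresh the cache
      const fresh = await (await fetch(`${HOSTING_BASE}/${assetPath}`)).text();
      localStorage.setItem(`asset:${assetPath}`, fresh);
      localStorage.setItem("assetVersion", remoteVersion);
      return fresh;
    }

Only version.txt is fetched on every run, so the recurring bandwidth stays tiny; larger assets are re-downloaded only when the version changes.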


Handling Wordpress media files on EC2/S3 with auto scaling

I'm working on a WordPress deployment configuration on Amazon AWS. I have WordPress running on Apache on an Ubuntu EC2 instance. I'm using W3 Total Cache for caching and to serve user-uploaded media files from an S3 bucket. A load balancer distributes traffic to two EC2 instances with auto scaling to handle heavy loads.
The problem is that user-uploaded media files are stored locally in wp-content/uploads/ and then synced to the S3 bucket. This means that the media files are inconsistent between EC2 instances.
Here are the approaches I'm considering:
1. Use a WordPress plugin to upload the media files directly to S3 without storing them locally. The problem is that the only plugins I've found (this and this) are buggy and poorly maintained. I'd rather not spend hours fixing one of these myself. It's also not clear whether they integrate cleanly with W3 Total Cache (which I also want to use for its other caching tools).
2. Have a master instance where users access the admin interface and upload media files. All media files would be stored locally on this instance (and synced to S3 via W3 Total Cache). Auto scaling would deploy slave instances with no local file storage.
3. Make all EC2 instances identical and point wp-content/uploads/ to a separate EBS volume. All instances would share media files.
4. Use rsync to copy media files between running EC2 instances.
Is there a clear winner? Are there other approaches I should think about?
You might consider looking at something like s3fs (http://code.google.com/p/s3fs/). It allows you to mount your S3 bucket as a volume on your server instances. You could simply have the code that mounts the volume executed on instance start-up.
s3fs can also use local (ephemeral) directories as a cache for the mounted directory to improve performance.
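For completeness, the mount itself is roughly the following; the bucket name, mount point, and cache path are placeholders, and the options should be double-checked against your s3fs version:

    # credentials file in the format ACCESS_KEY_ID:SECRET_ACCESS_KEY
    chmod 600 /etc/passwd-s3fs

    # mount the bucket over the uploads directory, with a local ephemeral cache
    s3fs my-wp-media-bucket /var/www/wordpress/wp-content/uploads \
        -o passwd_file=/etc/passwd-s3fs \
        -o use_cache=/tmp/s3fs-cache \
        -o allow_other

Run it from user data or an init script so the mount is in place before Apache starts.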

How do I get site usage from IIS?

I want to build a list of User-URL pairs.
How can I do that?
By default, IIS creates log files in the system32\LogFiles directory of your Windows folder. Each website has its own folder beginning with “W3SVC” and incrementing sequentially from there (i.e. “W3SVC1”, “W3SVC2”, etc.). In there you’ll find a series of log files containing details of each request to your website.
To analyse the files, you can either parse them manually (i.e. pull them into SQL Server and query them) or use a tool like WebTrends Log Analyser. Having said that, if you really want to track website usage you might be better off taking a look at Google Analytics. It's much simpler to use, and you avoid dealing with large volumes of log files or paying hefty license fees.
If you have any means of identifying your users via the web server logs (e.g. a username in a cookie), then you can do it by parsing the logs and pulling the info from the cs-uri-query and cs(Cookie) fields.
Alternatively, you can rely on an external tracking system (e.g. Omniture).
I ended up finding the log files in C:\inetpub\logs\LogFiles.
I used Log Parser Studio from Microsoft to parse the data. It has lots of documentation on how to query IIS log files, including sample queries.
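For reference, a query along these lines produces the User-URL list in Log Parser Studio; it assumes the default W3C log fields and that authenticated usernames are actually logged (anonymous requests have no cs-username), so treat it as a starting point rather than a drop-in answer:

    SELECT cs-username AS UserName,
           cs-uri-stem AS Url,
           COUNT(*) AS Hits
    FROM '[LOGFILEPATH]'
    WHERE cs-username IS NOT NULL
    GROUP BY cs-username, cs-uri-stem
    ORDER BY Hits DESC

'[LOGFILEPATH]' is the Log Parser Studio placeholder for the log files selected in the UI; with the standalone Log Parser tool you would put the path to the .log files there instead.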
