Is the traffic from/to firebase database server compressed? - firebase

Perhaps a bit of a silly question, but is the traffic from/to the Firebase DB server compressed?
If so, what algorithm(s)?
What compression ratios typically occur for plain-text data sent/received by a Firebase client?
Does the compression have a noticeable impact on CPU usage on today's devices?
Does the client code have some control over this aspect?
Are there differences in that regard between the Java/Android/Web/iOS SDKs?
EDIT: also, on which communication/transport layer(s) does the compression occur?

The communication between a Firebase client and its Database Servers goes over a secure WebSocket connection. The data is not compressed.
You can easily see this yourself by accessing the Firebase Database from your browser and then looking at the network tab. It'll show you exactly what data is being exchanged and in which format.

How to inspect Firestore network traffic with charles proxy?

As far as I can tell, Firestore uses protocol buffers when making a connection from an Android/iOS app. Out of curiosity I want to see what network traffic is going up and down, but I can't seem to make Charles Proxy show any real decoded info. I can see the open connection, but I'd like to see what's going over the wire.
Firestore's SDKs appear to be open source, so it should be possible to use them to help decode the output. https://github.com/firebase/firebase-js-sdk/tree/master/packages/firestore/src/protos
A few Google services (like AdMob: https://developers.google.com/admob/android/charles) have documentation on how to read network traffic with Charles Proxy, but I think your question is whether it's possible with Cloud Firestore, since Charles has support for protobufs.
The answer is: it is not currently possible. The Firestore requests can be seen, but none of the data being sent can actually be read, since it uses protocol buffers. There is no documentation on how to use Charles with Firestore requests; there is an open issue (feature request) on this with the product team, which has no ETA. In the meantime, you can try the Protocol Buffers Viewer.
Alternatives for viewing Firestore network traffic could be:
From the Firestore documentation:
For all app types, Performance Monitoring automatically collects a trace for each network request issued by your app, called an HTTP/S network request trace. These traces collect metrics for the time between when your app issues a request to a service endpoint and when the response from that endpoint is complete. For any endpoint to which your app makes a request, Performance Monitoring captures several metrics:
Response time — Time between when the request is made and when the response is fully received
Response payload size — Byte size of the network payload downloaded by the app
Request payload size — Byte size of the network payload uploaded by the app
Success rate — Percentage of successful responses compared to total responses (to measure network or server failures)
You can view data from these traces in the Network requests subtab of the traces table, which is at the bottom of the Performance dashboard (learn more about using the console later on this page). This out-of-the-box monitoring includes most network requests for your app. However, some requests might not be reported or you might use a different library to make network requests. In these cases, you can use the Performance Monitoring API to manually instrument custom network request traces. Firebase displays URL patterns and their aggregated data in the Network tab in the Performance dashboard of the Firebase console.
From a Stack Overflow thread:
The wire protocol for Cloud Firestore is based on gRPC, which is indeed a lot harder to troubleshoot than the WebSockets that the Realtime Database uses. One way is to enable debug logging with:
firebase.firestore.setLogLevel('debug');
Once you do that, the debug output will start getting logged.
Firestore uses gRPC for its API, and Charles does not currently support gRPC.
In this case you can use Mediator, a cross-platform GUI gRPC debugging proxy like Charles but designed for gRPC.
You can dump all gRPC requests without any configuration.
To decode the gRPC/TLS traffic, you need to download and install the Mediator root certificate on your device, following the documentation.
To decode the request/response messages, you need to download the proto files mentioned in your description, then configure the proto root in Mediator, following the documentation.

NGINX as warm cache in front of wowza for HLS live streams - Get per stream data duration and data transferred?

I've set up NGINX as a warm cache server in front of a Wowza > HTTP-Origin application to act as an edge server. The config is working great, streaming over HTTPS with nDVR and adaptive streaming support. I've combed the internet looking for examples and help on configuring NGINX and/or other solutions to give me live statistics (number of viewers per stream_name) as well as to parse the logs for stream duration per stream_name/session and data transferred per stream_name/session. NGINX's logging for HLS streams records each video chunk. With Wowza it is a bit easier to get this data, by reading the duration or bytes-transferred values from the logs when the stream is destroyed. Any help on this subject would be hugely appreciated. Thank you.
Nginx isn't aware of what the chunks are. It's only serving resources to clients over HTTP, and doesn't know or care that they're interrelated. Therefore, you'll have to derive the data you need from the logs.
To associate client requests together as one, you need some way to track state between requests, and then log that state. Cookies are a common way to do this. Alternatively, you could put some sort of session identifier in the request URI, but this hurts your caching ability since each client is effectively requesting a different resource.
Once you have some sort of session ID logged, you can process those logs with tools such as Elastic Stack to piece together the reports you're looking for.
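The log-processing step above can be sketched without any particular toolchain. The log format below (epoch timestamp, session ID, path, bytes) is an assumption; a real deployment would log the session cookie via a custom nginx log_format and feed the resulting lines into something like this:

```python
import re
from collections import defaultdict

# Hypothetical access-log format: epoch timestamp, session ID (e.g. from a
# cookie logged via a custom nginx log_format), request path, bytes sent.
LOG_LINE = re.compile(
    r'(?P<ts>\d+(?:\.\d+)?) (?P<session>\S+) (?P<path>\S+) (?P<bytes>\d+)'
)

def aggregate_sessions(lines):
    """Group chunk requests by session ID and derive per-session
    duration (first to last request) and total bytes transferred."""
    sessions = defaultdict(lambda: {"first": None, "last": None, "bytes": 0})
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that don't match the expected format
        s = sessions[m["session"]]
        ts = float(m["ts"])
        s["first"] = ts if s["first"] is None else min(s["first"], ts)
        s["last"] = ts if s["last"] is None else max(s["last"], ts)
        s["bytes"] += int(m["bytes"])
    return {
        sid: {"duration": s["last"] - s["first"], "bytes": s["bytes"]}
        for sid, s in sessions.items()
    }

log = [
    "1000 sess-a /live/stream1_0001.ts 512000",
    "1006 sess-a /live/stream1_0002.ts 498000",
    "1000 sess-b /live/stream2_0001.ts 256000",
]
print(aggregate_sessions(log))
```

The same grouping logic is what an Elastic Stack pipeline would do for you at scale; this is just the minimal standalone version.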
Depending on your goals with this, you might find it better to get your data client-side. There, you have a better idea of what a session actually is, and you can log client-side items such as buffer levels, latency, and so on. The HTTP requests don't really tell you much about the experience the end users are getting. If that's what you want to know, you should use logs from the clients, not from your HTTP servers. Your HTTP server log is much more useful for debugging underlying technical infrastructure issues.

Implementing an audio stream service similar to Spotify

High Level Description
Let's say I have a client program (an iOS app in my specific case) that should communicate with a server program running on a remote host. The system should work as follows:
The server has a set of indexed audio files and exposes them to the client using the indexes as identifiers
The client can query the server for an item with a given identifier and the server should stream its contents so the client can play it in real time
The data streamed by the server should only be used by the client itself, i.e. someone sniffing the traffic should not be able to interpret the contents and the user should not be able to access the data.
From my perspective, this is a simple implementation of what Spotify does.
Technical Questions
How should the audio data be streamed between server and client? What protocols should be used? I'm aware that using something on top of TLS will protect the information from someone sniffing the traffic, however it won't protect it from the user himself if he has access to the encryption keys.
The data streamed by the server should only be used by the client itself, i.e. someone sniffing the traffic should not be able to interpret the contents…
HTTPS is the best way for this.
…and the user should not be able to access the data.
That's not possible. Even if you had some sort of magic to prevent capture of decrypted data (which isn't possible), someone can always record the audio output, even digitally.
From my perspective, this is a simple implementation of what Spotify does.
Spotify doesn't do this. Nobody does, and nobody can. It's impossible. If the client must decode data, then you can't stop someone from modifying how that data gets decoded.
What you can do
Use HTTPS
Sign your URLs so that the raw media is only accessible for short periods of time. Everyone effectively gets their own URL to the media. (Check out how AWS S3 handles this, for an excellent example.)
If you're really concerned, you can watermark your files on-the-fly, encoding an ID within them so that should someone leak the media, you can go after them based on their account data. This is expensive, so make sure you really have a business case for doing so.
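The URL-signing idea can be sketched with a simple HMAC scheme carrying an expiry timestamp. Everything here (parameter names, the path, the key handling) is illustrative, not any particular CDN's API:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-secret"  # hypothetical key; never leaves the server

def sign_url(path, user_id, ttl=300, now=None):
    """Return a short-lived, per-user URL like
    /media/track.mp3?user=...&expires=...&sig=..."""
    expires = int(now if now is not None else time.time()) + ttl
    payload = f"{path}|{user_id}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'user': user_id, 'expires': expires, 'sig': sig})}"

def verify(path, user_id, expires, sig, now=None):
    """Server-side check: the signature matches and the URL has not expired."""
    payload = f"{path}|{user_id}|{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    ts = now if now is not None else time.time()
    return hmac.compare_digest(expected, sig) and ts < int(expires)
```

Because the user ID is part of the signed payload, every account effectively gets its own URL, which is what makes leaked links attributable and short-lived.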

Did server successfully receive request

I am working on a C# mobile application that requires major interaction with a PHP web server. However, the application also needs to support an "offline mode" as connection will be over a cellular network. This network may drop requests at random times. The problem that I have experienced with previous "Offline Mode" applications is that when a request results in a Timeout, the server may or may not have already processed that request. In cases where sending the request more than once would create a duplicate, this is a problem. I was walking through this and came up with the following idea.
Mobile sets a header value such as UniqueRequestID: 1 to be sent with the request.
Upon receiving the request, the PHP server adds the UniqueRequestID to the current user session $_SESSION['RequestID'][] = $headers['UniqueRequestID'];
Server implements a GetRequestByID that returns true if the ID exists for the current session or false if not. Alternatively, this could return the cached result of the request.
This seems to be a somewhat reliable way of seeing if a request successfully contacted the server. In mobile, upon re-connecting to the server, we check if the request was received. If so, skip that pending offline message and go to the next one.
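A minimal sketch of the flow described above, with an in-memory dict standing in for the PHP session; the Server class and its method names are invented for illustration:

```python
import uuid

class Server:
    """Toy server that remembers request IDs per session and caches results,
    so a retried request is never processed twice."""
    def __init__(self):
        self.session = {}  # request_id -> cached result; stands in for $_SESSION

    def handle(self, request_id, payload):
        if request_id in self.session:       # duplicate: replay the cached result
            return self.session[request_id]
        result = f"processed:{payload}"      # the real side effect happens here
        self.session[request_id] = result
        return result

    def has_request(self, request_id):
        """GetRequestByID: did this request ever reach the server?"""
        return request_id in self.session

server = Server()
req_id = str(uuid.uuid4())                     # UniqueRequestID header value
first = server.handle(req_id, "create-order")  # original attempt; response is lost
# ...the client times out, reconnects, and checks before resending:
if server.has_request(req_id):
    replayed = server.session[req_id]          # safe: no duplicate processing
    print(first == replayed)                   # prints True
```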
Question
Have I reinvented the wheel here? Is this method prone to failure (or am I going down a rabbit hole)? Is there a better way / alternative?
- I was pitching this to other developers here, and we thought that this seemed very simple, implying that this "system" would likely already exist somewhere.
- Apologies if my Google skills are failing me today.
As you correctly stated, this problem is not new. There have been multiple attempts to solve it at different levels.
Transport level
The HTTP transport protocol itself does not provide any mechanisms for reliable data transfer. One of the reasons is that HTTP is stateless and doesn't care much about previous requests and responses. There was an attempt by IBM to make a reliable transport protocol called HTTPR, which was based on HTTP, but it never became popular. You can read more about it here.
Messaging level
Most web services out there still use HTTP as the transport protocol with the SOAP messaging protocol on top of it. SOAP over HTTP is not sufficient when an application-level messaging protocol must also guarantee some level of reliability and security. This is why the WS-Reliability and WS-ReliableMessaging protocols were introduced. These protocols allow SOAP messages to be reliably delivered between distributed applications in the presence of software component, system, or network failures. At the same time, they provide additional security. You can read more about these protocols here and here.
Your solution
I guess there is nothing wrong with your approach if you need a simple way to ensure that a message has not already been processed. I would recommend using a database instead of the session to store the processing result for each request. If you use $_SESSION['RequestID'][] you will run into trouble if the session is lost (the user is offline for too long, the server is restarted or crashes, etc.). Also, if you use a database instead of the session, you can scale out more easily later on just by adding extra web servers.
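A sketch of that database-backed variant, using SQLite so deduplication survives session loss and server restarts. The table and column names are illustrative:

```python
import sqlite3

def make_store(path=":memory:"):
    """Open the database and ensure the dedup table exists."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS processed_requests ("
        " request_id TEXT PRIMARY KEY, result TEXT)"
    )
    return db

def record_result(db, request_id, result):
    """Insert once; the PRIMARY KEY makes a duplicate insert a no-op,
    so the first recorded result always wins."""
    db.execute(
        "INSERT OR IGNORE INTO processed_requests VALUES (?, ?)",
        (request_id, result),
    )
    db.commit()

def get_result(db, request_id):
    """Return the cached result, or None if the request was never seen."""
    row = db.execute(
        "SELECT result FROM processed_requests WHERE request_id = ?",
        (request_id,),
    ).fetchone()
    return row[0] if row else None
```

With a file-backed path instead of ":memory:", any web server in the pool can answer the "did you already get this request?" check, which is the scaling point made above.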

find out connection speed on http request?

Is it possible to find out the connection speed of the client when it requests a page on my website?
I want to serve video files, but depending on how fast the client's network is I would like to serve higher or lower quality videos. Google Analytics shows me the clients' connection types; how can I find out what kind of network the visitor is connected to?
Thanks
No, there's no feasible way to detect this server-side short of monitoring the network stream's send buffer while streaming something. If you can switch quality mid-stream, this is a viable approach because if the user's Internet connection suddenly gets burdened by a download you could detect this and switch to a lower-quality stream.
But if you just want to detect the speed initially, you'd be better off doing this detection on the client and sending the results to the server with the video request.
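A minimal sketch of that client-side approach: time a fixed-size probe download, then map the measured throughput to a rendition before requesting the video. The fetch_chunk callable and the bitrate thresholds below are assumptions, not standard values:

```python
import time

def measure_speed(fetch_chunk, probe_bytes=500_000):
    """Time a fixed-size probe download and return throughput in bits/second.
    `fetch_chunk` is a hypothetical callable that downloads `probe_bytes`
    bytes from the server (e.g. a small test file)."""
    start = time.perf_counter()
    data = fetch_chunk(probe_bytes)
    elapsed = time.perf_counter() - start
    return len(data) * 8 / elapsed

def pick_quality(bits_per_second):
    """Map measured throughput to a video rendition the client then requests."""
    if bits_per_second >= 5_000_000:
        return "1080p"
    if bits_per_second >= 2_500_000:
        return "720p"
    if bits_per_second >= 1_000_000:
        return "480p"
    return "240p"
```

The client would send the chosen rendition (or the raw measurement) along with the video request, and could re-probe periodically to support mid-stream switching as described above.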
Assign each request a token (/videos/data.flv?token=uuid123) and calculate the amount of data your web server sends for this token per second (possibly checking multiple tokens per user over a time period). You can do this with the Apache sources and APR.
