As far as I can tell, Firestore uses protocol buffers when making a connection from an Android/iOS app. Out of curiosity I want to see what network traffic is going up and down, but I can't seem to make Charles Proxy show any real decoded info. I can see the open connection, but I'd like to see what's going over the wire.
The Firestore SDKs are open source, it seems, so it should be possible to use them to help decode the output: https://github.com/firebase/firebase-js-sdk/tree/master/packages/firestore/src/protos
A few Google services (like AdMob: https://developers.google.com/admob/android/charles) have documentation on how to read network traffic with Charles Proxy, but I think your question is whether this is possible with Cloud Firestore, since Charles has support for protobufs.
The answer is: it is not possible right now. The Firestore requests can be seen, but you can't actually read any of the data being sent, since it uses protocol buffers. There is no documentation on how to use Charles with Firestore requests; there is an open issue (feature request) on this with the product team, which has no ETA. In the meantime, you can try the Protocol Buffers Viewer.
Alternatives for viewing Firestore network traffic could be:
From the Firestore documentation:
For all app types, Performance Monitoring automatically collects a
trace for each network request issued by your app, called an HTTP/S
network request trace. These traces collect metrics for the time
between when your app issues a request to a service endpoint and when
the response from that endpoint is complete. For any endpoint to which
your app makes a request, Performance Monitoring captures several
metrics:
Response time — Time between when the request is made and when the response is fully received
Response payload size — Byte size of the network payload downloaded by the app
Request payload size — Byte size of the network payload uploaded by the app
Success rate — Percentage of successful responses compared to total responses (to measure network or server failures)
You can view data from these traces in the Network requests subtab of
the traces table, which is at the bottom of the Performance dashboard
(learn more about using the console later on this page). This
out-of-the-box monitoring includes most network requests for your app.
However, some requests might not be reported or you might use a
different library to make network requests. In these cases, you can
use the Performance Monitoring API to manually instrument custom
network request traces. Firebase displays URL patterns and their
aggregated data in the Network tab in the Performance dashboard of the
Firebase console.
From a Stack Overflow thread:
The wire protocol for Cloud Firestore is based on gRPC, which is
indeed a lot harder to troubleshoot than the websockets that the
Realtime Database uses. One way is to enable debug logging with:
firebase.firestore.setLogLevel('debug');
Once you do that, the debug output will start getting logged.
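For reference, here is roughly where that call goes; this is a minimal sketch assuming the v8 namespaced JavaScript SDK (the modular v9+ SDK exposes setLogLevel as an import from 'firebase/firestore'):

    // Minimal sketch, assuming the v8 namespaced firebase JS SDK.
    import firebase from 'firebase/app';
    import 'firebase/firestore';

    firebase.initializeApp({ /* your Firebase config */ });

    // Verbose Firestore logging: the decoded request/response traffic
    // is printed to the browser console.
    firebase.firestore.setLogLevel('debug');

    const db = firebase.firestore();
    db.collection('examples')
      .get()
      .then((snapshot) => console.log('documents:', snapshot.size));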
Firestore uses gRPC for its API, and Charles does not support gRPC at the moment.
In this case you can use Mediator; Mediator is a cross-platform GUI gRPC debugging proxy like Charles, but designed for gRPC.
You can dump all gRPC requests without any configuration.
To decode the gRPC/TLS traffic, you need to download and install the Mediator root certificate on your device, following the documentation.
To decode the request/response messages, you need to download the proto files mentioned in your description, then configure the proto root in Mediator, following the documentation.
Related
I am developing a cloud-based back-end HTTP service that will be exposed for integration with some on-prem systems. Client systems are custom-made by external vendors, they are back-end systems with their own databases. These systems are deployed in companies of our clients, we don't have access to them and don't control them. We are providing vendors our API specifications and they implement client code.
The data format which my service exchanges with clients is based on XML and follows a certain standard. Vendors implement their client systems in different programming languages, and new vendors will appear over time. I want as many clients as possible to be able to work with my service.
Most of my service API is REST-like: it receives HTTP requests, processes them, and sends back HTTP responses.
Additionally, my service accumulates some data state changes and needs to regularly push this data to client systems. Because of the below limitations, this use-case does not seem to fit the traditional client-server HTTP request-response model.
Due to the nature of the business, the client systems cannot afford to have their own HTTP API endpoints open and so my service can't establish an outbound HTTP connection to them for delivering data state notifications. I.e. use of WebHooks is not an option.
At the same time my service stakeholders need recorded acknowledgment that data state notifications were accepted by the client system, therefore fire-and-forget systems like Amazon SNS don't seem to apply.
I was considering a few approaches to this problem, but I'm not sure if I'm missing some simple options or some technologies that already address the problem. Hence this question.
The question text was updated: options were moved to my own answer.
Related questions and resources
REST API with active push notifications from server to client
Is ReST over websockets possible?
Can we use Web-Sockets for Communication between Microservices?
What is difference between grpc and websocket? Which one is more suitable for bidirectional streaming connection?
https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/
I eventually found answers to my question myself and with some help from my team. For people like me who come here with a question "how do I arrange notifications delivery from my service to its clients" here's an overview of available options.
WebHooks
This is when the client opens an endpoint itself. The service calls the client's endpoints whenever it has a notification to deliver. This way the client also acts as a service, so the client and the service swap roles during notification delivery.
With WebHooks, the client must be able to open the endpoint at a well-known address. This is complicated if the client's software is working behind NAT or a firewall, or if the client is a browser or a mobile application.
The service needs to be prepared that client's WebHook endpoints may not always be online and may not always be healthy.
Another issue is flow control: special measures should be taken in the service not to overwhelm the client with a high volume of connections, requests and/or data.
Polling
In this case the client is still the client and the service is still the service, unlike WebHooks. The service offers an endpoint where the client can continuously request new notifications. The advantage of this option is that it does not change connection direction and request-response direction and so it works well with HTTP-based services.
The caveat is that the polling API should have reasonably rich semantics to be reliable if loss of notifications is not acceptable. Good examples are Google Pub/Sub pull and Amazon SQS.
Here are a few considerations (a long-polling sketch follows after this list):
Receiving and deleting a notification should be separate operations. Otherwise, if the service deletes a notification just before giving it to the client and the client fails to process it, the notification is lost forever. When the deletion operation is separate from receiving, the client is forced to delete explicitly, which normally happens after successful processing.
In case the client received a notification and has not yet deleted it, it might be undesirable to let the same notification be processed by some other actor (perhaps a concurrent process of the same client). Therefore the notification should be hidden from subsequent receives after it was first received.
In case the client failed to delete the notification in reasonable time because of an error, network loss or a process crash, the service has to make the notification visible for receiving again. This is a retry mechanism that allows the notification to ultimately be processed.
In case the service has no notifications to deliver, it should block the client's call for some time instead of delivering an empty response immediately. Otherwise, if the client polls in a loop and the response comes back immediately, the loop iterations will be short and clients will make excessive requests to the service, increasing network load, parsing load and request counts. A nice-to-have feature is for the service to unblock and respond to the client as soon as a notification appears for delivery. This is sometimes called "long polling".
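To make the receive/delete semantics concrete, here is a minimal long-polling sketch in TypeScript. The /notifications endpoints, the wait parameter and the JSON shape are hypothetical, not part of any particular product:

    // Hypothetical long-polling consumer: receive, process, then delete (ack).
    type Notification = { id: string; body: unknown };

    async function handle(n: Notification): Promise<void> {
      console.log('got notification', n.id);
    }

    async function pollLoop(baseUrl: string): Promise<void> {
      while (true) {
        // The server holds the request open (up to ~20s here) if nothing is pending.
        const res = await fetch(`${baseUrl}/notifications/receive?wait=20`);
        if (res.status === 204) continue;                 // nothing arrived, poll again
        if (!res.ok) {                                    // transient error: back off briefly
          await new Promise((r) => setTimeout(r, 1000));
          continue;
        }
        const items = (await res.json()) as Notification[];

        for (const n of items) {
          try {
            await handle(n);
            // Delete only after successful processing, so a crash here means the
            // notification becomes visible again later and is retried.
            await fetch(`${baseUrl}/notifications/${n.id}`, { method: 'DELETE' });
          } catch (err) {
            console.error('processing failed, notification will be redelivered', err);
          }
        }
      }
    }

    pollLoop('https://service.example.com').catch(console.error);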
HTTP Server-sent Events
With HTTP Server-sent Events the client opens an HTTP connection and sends a request to the service; the service can then send multiple events (notifications) instead of a single response. The connection is long-lived and the service can send events as soon as they are ready.
The downside is that the communication is one-way: the client has no way to inform the service whether it successfully processed the event. Because this feedback is absent, it may be difficult for the service to control the rate of events to prevent overwhelming the client.
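As a rough illustration, a browser client can consume such a stream with the standard EventSource API; the URL and payload shape here are invented:

    // One-way stream of events from the service; there is no built-in channel
    // for acknowledging individual events back to the server.
    const source = new EventSource('https://service.example.com/notifications/stream');

    source.onmessage = (ev: MessageEvent) => {
      const notification = JSON.parse(ev.data);
      console.log('event received', notification);
    };

    source.onerror = () => {
      // EventSource reconnects automatically; events missed while offline are only
      // recovered if the server replays them based on the Last-Event-ID header.
      console.warn('stream error, the browser will retry');
    };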
WebSockets
WebSockets were created to enable arbitrary two-way communication, so they are a viable option for the service to send notifications to the client. The client can also send processing confirmations back to the service.
WebSockets have been around for a while and are supported by many frameworks and languages. A WebSocket connection begins as an HTTP/1.1 connection, so WebSockets over HTTPS should be supported by many load balancers and reverse proxies.
WebSockets are often used with browsers and mobile clients and more rarely in service-to-service communication.
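Here is a sketch of the acknowledgment flow over a WebSocket, using the standard browser WebSocket API; the message shapes are invented for illustration:

    // Sketch of explicit application-level acknowledgments over a WebSocket.
    type Notification = { id: string; body: unknown };

    const ws = new WebSocket('wss://service.example.com/notifications');

    ws.onmessage = async (ev: MessageEvent) => {
      const notification = JSON.parse(ev.data) as Notification;
      try {
        await process(notification);
        // Tell the service this notification was handled, so it is not redelivered.
        ws.send(JSON.stringify({ type: 'ack', id: notification.id }));
      } catch {
        // Negative acknowledgment: the service may retry or dead-letter it.
        ws.send(JSON.stringify({ type: 'nack', id: notification.id }));
      }
    };

    async function process(n: Notification): Promise<void> {
      console.log('processing', n.id);
    }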
gRPC
gRPC is similar to WebSockets in that it enables arbitrary two-way communication. The advantage of gRPC is that it is centered around protocol and message format definition files. These files are used for code generation, which is essential for client and service developers.
gRPC is used for service-to-service communication, and it is also supported for browser clients with grpc-web.
gRPC is supported in multiple popular programming languages and platforms, yet the support is narrower than for plain HTTP.
gRPC works on top of HTTP/2 which might cause difficulties with reverse proxies and load balancers around things like TLS termination.
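For illustration only, here is what a bidirectional-streaming client could look like with @grpc/grpc-js and @grpc/proto-loader; the notifications.proto file, the package and service names, and the message fields are assumptions, not an existing API:

    // Hypothetical notifications.proto:
    //   service NotificationService { rpc Subscribe(stream Ack) returns (stream Notification); }
    import * as grpc from '@grpc/grpc-js';
    import * as protoLoader from '@grpc/proto-loader';

    const definition = protoLoader.loadSync('notifications.proto');
    const pkg = grpc.loadPackageDefinition(definition) as any;

    const client = new pkg.notifications.NotificationService(
      'service.example.com:443',
      grpc.credentials.createSsl()
    );

    // One long-lived duplex stream: notifications flow down, acks flow up.
    const stream = client.subscribe();

    stream.on('data', (notification: { id: string }) => {
      console.log('received', notification.id);
      stream.write({ id: notification.id, status: 'PROCESSED' });
    });

    stream.on('error', (err: Error) => {
      console.error('stream broken, reconnect with backoff', err);
    });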
Message queue (PubSub)
Finally, the service and the client can use a message queue as the delivery mechanism for notifications. The service puts notifications on the queue and the client receives them from the queue. A queue can be provided by one of many systems like RabbitMQ, Kafka, Celery, Google PubSub, Amazon SQS, etc. There's a wide choice of queuing systems with different properties, and choosing one is a challenge on its own. The queue can also be emulated by using a database, for example.
It has to be decided between the service and the client who owns the queue, i.e. who pays for it. Either way, the queuing system and the queue should be available whenever the service needs to push notifications to it; otherwise notifications will be lost (unless the service buffers them internally, with another queue).
Queues are typically used for service-to-service communication, but some technologies also allow browsers as clients.
It is worth noting that an "implicit" internal queue might be used on the service side in the other options listed above. One reason is to prevent loss of notifications when there's no client available to receive them. There are many other good reasons, like letting clients handle notifications at their own pace, maximizing processing throughput, and handling spiky traffic with fixed capacity.
In this option the queue is used "explicitly" as delivery mechanism, i.e. the service does not put any other mechanism (HTTP, gRPC or WebSocket endpoint) in front of the queue and lets the client receive notifications from the queue directly.
Message passing is popular in organizing microservice communications.
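As one concrete (hypothetical) example, consuming notifications from a RabbitMQ queue with the amqplib Node client, acknowledging only after successful processing:

    import amqplib from 'amqplib';

    async function consume(): Promise<void> {
      const conn = await amqplib.connect('amqp://broker.example.com');
      const ch = await conn.createChannel();
      await ch.assertQueue('client-notifications', { durable: true });
      await ch.prefetch(10); // simple flow control: at most 10 unacknowledged messages in flight

      await ch.consume('client-notifications', async (msg) => {
        if (!msg) return;
        try {
          const notification = JSON.parse(msg.content.toString());
          console.log('processing', notification);
          ch.ack(msg);               // remove from the queue only after success
        } catch {
          ch.nack(msg, false, true); // requeue for a later retry
        }
      });
    }

    consume().catch(console.error);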
Common considerations
In all options it has to be decided whether the loss of notifications is tolerable for the service, the client and the business. Some simpler technical choices are possible if it is acceptable to lose notifications due to processing errors, unavailability, etc.
It is valuable to have monitoring for client processing errors on the service side. This way service owners know which clients are more broken without having to ask them.
If a queue is used (implicitly or explicitly), it is valuable to monitor the length of the queue and the age of the oldest notification. This lets service owners judge how stale the data in the client may be.
In case the delivery of a notification is organized so that the notification gets deleted only after successful processing by the client, the same notification can get stuck in an infinite receive loop when the client fails to process it. Such a notification is sometimes called a "poison message". Poison messages should be removed by the service or the queuing system to prevent clients from being stuck in an infinite loop. A common practice is to move poison messages to a special place, sometimes called a "dead letter queue", for later human intervention.
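Here is a sketch of the dead-letter idea with RabbitMQ-style queue arguments (the names are invented); a message rejected without requeueing is moved by the broker to the dead-letter queue instead of looping forever:

    // Hypothetical RabbitMQ topology for dead-lettering poison messages.
    import amqplib from 'amqplib';

    async function declareTopology(): Promise<void> {
      const conn = await amqplib.connect('amqp://broker.example.com');
      const ch = await conn.createChannel();

      // Dead-letter exchange and the queue that collects poison messages
      // for later human inspection.
      await ch.assertExchange('notifications.dlx', 'fanout', { durable: true });
      await ch.assertQueue('notifications.dead-letter', { durable: true });
      await ch.bindQueue('notifications.dead-letter', 'notifications.dlx', '');

      // Main queue: a ch.nack(msg, false, false) (no requeue) sends the
      // message to the dead-letter exchange instead of redelivering it.
      await ch.assertQueue('client-notifications', {
        durable: true,
        arguments: { 'x-dead-letter-exchange': 'notifications.dlx' },
      });
    }

    declareTopology().catch(console.error);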
One alternative to WebSockets for the problem of server→client notifications with acks from the client seems to be gRPC.
It supports bidirectional communication between server and client in bidirectional streaming mode.
It works on top of HTTP/2. In our case, functioning over HTTP ports is essential.
There are client and server generators for multiple popular languages and platforms. A nice thing is that I can share the protocol definition file with vendors and can be sure my service and their clients speak the same language.
Drawbacks:
Not as many languages and platforms are supported compared to HTTP. Alternative C from the question will be more accessible if based on HTTP/1.1. WebSockets have also been around longer, and I would expect broader adoption than for gRPC.
Not all gRPC implementations currently seem to support XML as a data format, according to the FAQ. In order to transport XML, my service and its clients will have to transfer the XML message as a byte array inside a gRPC protobuf message.
With gRPC, TLS termination cannot be done on a general-purpose HTTP/1.1 load balancer. An application-layer, HTTP/2-aware reverse proxy (load balancer) such as Traefik is required.
There are approaches like this and this to allow HTTP/1.1-compatible protocols, but they have their own restrictions, like a limited number of available clients or necessary client customizations.
I have an internal website which is completely based on a geo map and uses the HTTP/HTTPS protocol as well as WFS and WMS service calls.
While recording with LoadRunner (LR) I am able to launch and record all the contents,
but after that, for the next transaction, when I click on any of the options to see the map, it is not recorded in LR.
For the next transaction, the calls are jQuery calls which get the server response in the form of images (geo locations).
I also tried recording with the HTTP single protocol and with HTTP plus Web Services multiple protocols.
My LR version is 12.01 and I am recording with the Chrome browser.
Please help me out!
Both WFS and WMS, from the Open Geospatial Consortium, should be using HTTP as a carrier. Can you provide insight into why such calls to these servers are being ignored?
Do you have the servers in any sort of filter as a third-party element which should not be tested because you do not have ownership or control of the target?
Are you electing to connect to these services over a protocol other than HTTP?
You note "not recording"; what is the specific objective evidence that a recording is not occurring? Lack of evidence in the recording logs? Lack of an increase in the events counter? Lack of something else? Please clarify.
Have you tried the HTTP Proxy method of recording versus the default sockets model?
We have an already running MQTT setup for communication between smart home devices and remote server, for remotely controlling the devices. Now we want to integrate our devices with Google Home and Alexa. These two use HTTP for communication with third party device clouds.
I have implemented this for Google Home: after the device cloud receives the request, it is converted to MQTT. The MQTT request is then sent to the smart home device. The device cloud waits a few seconds to receive a reply from the smart home device. If no reply is received within the predefined time, it sends a failure HTTP response to Google Home; otherwise it sends back the received reply.
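The bridge pattern itself seems sound. For illustration, here is a rough sketch of the HTTP-to-MQTT request/reply step with the mqtt.js client; the topic layout, the correlation id and the 5-second timeout are assumptions about your setup, not a prescribed design:

    import mqtt from 'mqtt';
    import { randomUUID } from 'crypto';

    const client = mqtt.connect('mqtt://broker.example.com');

    // Publish a command for one device and wait (bounded) for its reply.
    function sendCommand(deviceId: string, command: object, timeoutMs = 5000): Promise<object> {
      return new Promise((resolve, reject) => {
        const correlationId = randomUUID();
        const replyTopic = `devices/${deviceId}/replies/${correlationId}`;

        const cleanup = () => {
          client.removeListener('message', onMessage);
          client.unsubscribe(replyTopic);
        };

        const onMessage = (topic: string, payload: Buffer) => {
          if (topic !== replyTopic) return;
          clearTimeout(timer);
          cleanup();
          resolve(JSON.parse(payload.toString())); // becomes the HTTP success response
        };

        const timer = setTimeout(() => {
          cleanup();
          reject(new Error('device did not reply in time')); // becomes the HTTP failure response
        }, timeoutMs);

        client.subscribe(replyTopic, (err) => {
          if (err) return reject(err);
          client.on('message', onMessage);
          client.publish(
            `devices/${deviceId}/commands`,
            JSON.stringify({ correlationId, ...command })
          );
        });
      });
    }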
Is there a better way to handle this? Since this is a commercial project I want to get this implemented in the correct way.
Any help will be appreciated.
Thanks
We're using AWS IoT and I think it's a good way to handle IoT issues; below are some of its features:
Certificates: each device is a thing with its own policy attached, which provides security.
Shadow: a JSON document holding the device's current state. The Device Shadow service acts as an intermediary, allowing devices and applications to retrieve and update a device's shadow.
Serverless: we use Lambda to build the skill and the servers, which is flexible.
Rules: we use them to intercept MQTT messages so that we can report device state changes to Google and Alexa (a rough sketch of this path follows below). By the way, for Google, implementing Report State has become mandatory for all partners to launch and certify.
You can choose either MQTT or HTTP.
It’s time-consuming but totally worth it! We've sold 8k+ products, so far so good.
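As a very rough sketch of the rule-to-Lambda path: the event shape and the reportStateToGoogle helper below are placeholders, not the real Home Graph client, so treat this as an outline only.

    // Hypothetical Lambda triggered by an AWS IoT rule on shadow/state updates.
    // reportStateToGoogle is a placeholder for a call to the Home Graph
    // devices.reportStateAndNotification API (or your own wrapper around it).
    interface ShadowEvent {
      deviceId: string;
      state: Record<string, unknown>;
    }

    export const handler = async (event: ShadowEvent): Promise<void> => {
      // Forward the new device state so Google's QUERY answers stay fresh.
      await reportStateToGoogle(event.deviceId, event.state);
    };

    async function reportStateToGoogle(
      deviceId: string,
      state: Record<string, unknown>
    ): Promise<void> {
      console.log('would report state for', deviceId, state); // placeholder only
    }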
At least Google Home doesn't really require a synchronous operation there. Once you get the EXECUTE intent via their API, you just need to send it to your device (it doesn't necessarily have to report its state back).
Once its state changes, you either store it for later QUERY intents or provide this data to the Google Home Graph server using the "Report State" interface.
I'm developing gBridge.io as a project providing quite similar functionality (but for another target group). There, it is strictly split as described: an HTTP endpoint listener reacts to commands from Google Home and sends them to a cache, from where they are eventually sent to the matching MQTT topic. Another worker listens to MQTT topics from the users and stores their information in the cache, so it can be sent back to Google when required.
I am currently using PHP / CURL on the back-end to update values in Firebase. We use Firebase primarily as a JavaScript layer so we can show browser and app clients real time status progression of jobs we process.
We've gotten to the point where we're doing quite a bit of status updating using CURL from our back-end and I feel we are close to the threshold where establishing a persistent connection between Firebase and our server would be more efficient than opening and closing dozens of HTTP requests per minute.
Is there any way to do this with Firebase right now?
Firebase has server-side SDKs for Java and Node.js. If you can't use those, the REST API is your only alternative.
If you'd like to listen for data over REST, you can use Firebase's REST Streaming API, which uses a long-lived HTTP connection to return a stream of events. It is similar to the Firebase SDKs, but it can only attach a single listener per connection, and you'll still need separate requests for write operations.
That last part seems to be the crux of your problem. So I'm afraid there really aren't any alternatives to using the SDKs, as I mentioned. In my testing, using HTTP requests for frequent small (although in my case admittedly read) operations was quite fast.
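For reference, here is a sketch of listening to the REST Streaming API mentioned above from Node. It assumes the eventsource npm package and a path readable by the caller (or an auth token appended to the URL); the database URL is a placeholder:

    // Sketch using the eventsource npm package against the Realtime Database
    // streaming REST API. 'put' carries the initial data and full replacements
    // at a path, 'patch' carries partial updates.
    import EventSource from 'eventsource';

    const url = 'https://your-project-default-rtdb.firebaseio.com/jobs.json';
    const es = new EventSource(url, { headers: { Accept: 'text/event-stream' } });

    es.addEventListener('put', (ev: MessageEvent) => {
      const { path, data } = JSON.parse(ev.data);
      console.log('put', path, data);
    });

    es.addEventListener('patch', (ev: MessageEvent) => {
      const { path, data } = JSON.parse(ev.data);
      console.log('patch', path, data);
    });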
I am working on a C# mobile application that requires major interaction with a PHP web server. However, the application also needs to support an "offline mode" as connection will be over a cellular network. This network may drop requests at random times. The problem that I have experienced with previous "Offline Mode" applications is that when a request results in a Timeout, the server may or may not have already processed that request. In cases where sending the request more than once would create a duplicate, this is a problem. I was walking through this and came up with the following idea.
Mobile sets a header value such as UniqueRequestID: 1 to be sent with the request.
Upon receiving the request, the PHP server adds the UniqueRequestID to the current user session $_SESSION['RequestID'][] = $headers['UniqueRequestID'];
Server implements a GetRequestByID that returns true if the id exists for the current session or false if not. Alternatively, this could return the cached result of the request.
This seems to be a somewhat reliable way of seeing whether a request successfully reached the server. On mobile, upon re-connecting to the server, we check whether the request was received. If so, we skip that pending offline message and go to the next one.
Question
Have I reinvented the wheel here? Is this method prone to failure (or am I going down a rabbit hole)? Is there a better way / alternative?
- I was pitching this to other developers here and we thought that this seemed very simple, implying that this "system" would likely already exist somewhere.
- Apologies if my Google skills are failing me today.
As you correctly stated, this problem is not new. There have been multiple attempts to solve it at different levels.
Transport level
The HTTP transport protocol itself does not provide any mechanism for reliable data transfer. One of the reasons is that HTTP is stateless and doesn't care much about previous requests and responses. There was an attempt by IBM to make a reliable transport protocol called HTTPR, which was based on HTTP, but it never got popular. You can read more about it here.
Messaging level
Most web services out there still use HTTP as the transport protocol with the SOAP messaging protocol on top of it. SOAP over HTTP is not sufficient when an application-level messaging protocol must also guarantee some level of reliability and security. This is why the WS-Reliability and WS-ReliableMessaging protocols were introduced. Those protocols allow SOAP messages to be reliably delivered between distributed applications in the presence of software component, system, or network failures. At the same time they provide additional security. You can read more about those protocols here and here.
Your solution
I guess there is nothing wrong with your approach if you need a simple way to ensure that a message has not already been processed. I would recommend using a database instead of the session to store the processing result for each request. If you use $_SESSION['RequestID'][] you will run into trouble if the session is lost (the user is offline for a certain time, the server is restarted or has crashed, etc.). Also, if you use a database instead of the session, you can scale up more easily later on just by adding an extra web server.
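Here is a minimal sketch of the database variant using node-postgres; the table and column names are made up, and the same idea translates directly to PHP with PDO:

    import { Pool } from 'pg';

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // processed_requests has request_id as its primary key. The insert is a no-op
    // for a duplicate, so rowCount === 0 means this request was already handled
    // and the caller can return the cached/previous result instead of reprocessing.
    async function alreadyProcessed(requestId: string): Promise<boolean> {
      const res = await pool.query(
        `INSERT INTO processed_requests (request_id, processed_at)
         VALUES ($1, NOW())
         ON CONFLICT (request_id) DO NOTHING`,
        [requestId]
      );
      return res.rowCount === 0;
    }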