Device Access rate limit - nest-device-access

I'm using the Device Access Sandbox environment and getting a RESOURCE_EXHAUSTED error response from the API with the message set to "Rate limited".
How do I know when I'm going to hit the rate limit?

Rate limits have been implemented in Device Access Sandbox to prevent over-utilization of devices and services. Rate limits are configured slightly differently for API methods, commands, and device instances, so there are a few different ways you could be hitting a rate limit.
Typical usage of Device Access with Nest devices shouldn't hit the rate limit. For example, if you wish to keep a history of temperature and humidity changes for a Nest Thermostat, there's really no need to check the temperature and humidity values more than 10 times in a minute (the devices.get limit)—they do not fluctuate enough in that time period to make such granularity useful.
To see all the rate limits applied to the Sandbox, see User and Rate Limits.
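For instance, a poller that stays under the 10-calls-per-minute devices.get limit and backs off when it does hit a RESOURCE_EXHAUSTED response could look roughly like the sketch below. This is a minimal illustration, not an official client: the project/device IDs and access token are placeholders, and the trait names shown are the standard Temperature and Humidity traits.

```python
import time
import requests

# Placeholders - substitute your own Device Access project/device IDs and OAuth token.
DEVICE_URL = ("https://smartdevicemanagement.googleapis.com/v1/"
              "enterprises/YOUR_PROJECT_ID/devices/YOUR_DEVICE_ID")
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"

POLL_INTERVAL = 60 / 10  # stay under the 10 devices.get calls per minute limit

def poll_thermostat():
    backoff = POLL_INTERVAL
    while True:
        resp = requests.get(DEVICE_URL,
                            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
        if resp.status_code == 429:  # RESOURCE_EXHAUSTED / "Rate limited"
            time.sleep(backoff)
            backoff = min(backoff * 2, 300)  # exponential backoff, capped at 5 minutes
            continue
        backoff = POLL_INTERVAL
        traits = resp.json().get("traits", {})
        temp = traits.get("sdm.devices.traits.Temperature", {}).get(
            "ambientTemperatureCelsius")
        humidity = traits.get("sdm.devices.traits.Humidity", {}).get(
            "ambientHumidityPercent")
        print(temp, humidity)
        time.sleep(POLL_INTERVAL)
```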

Related

How do I determine total network bandwidth usage on windows server 2016?

I am currently looking at 1Gb/s download and 35 MB/s upload over coax. We are looking at setting up some VOIP services, etc., which will be impacted by such a low upload speed. How do I determine what the max bandwidth usage for the day was? I'm aware that netstat, netsh, and Network Monitor provide information regarding individual processes, but I cannot find the data I need to determine whether upgrading to fiber would be marginally beneficial or entirely necessary. Any help would be greatly appreciated.
I've tried netstat, netsh, Performance Monitor, and Network Monitor.
I can obtain information about any particular connection, but I need something more akin to overall statistics so that I can make an informed decision regarding our network limitations (fiber vs. coax). Do we need an additional 200 Mb/s? etc.
Typical VOIP services only require a few kilobytes per second of upload bandwidth per phone call. Do you anticipate having many (hundreds of) concurrent phone calls that would add up to 35 MBytes/s (or, more likely, 35 Mbits/s)? As an aside, network bandwidth is typically expressed with a big M and a little b (e.g. Mb) to denote megabits per second.
I would suggest first using a utility like SolarWinds RealTime Bandwidth Monitor to look at your router/gateway's utilization.
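If you're comfortable with a small script, another rough way to get overall figures is to sample the OS-level interface counters periodically and log the throughput. Here's a minimal sketch using Python's psutil package (a third-party library you'd need to install); the sampling interval is an arbitrary choice:

```python
import time
import psutil

INTERVAL = 5  # seconds between samples

def log_total_throughput():
    prev = psutil.net_io_counters()
    while True:
        time.sleep(INTERVAL)
        cur = psutil.net_io_counters()
        # Convert byte deltas to megabits per second (big M, little b).
        down_mbps = (cur.bytes_recv - prev.bytes_recv) * 8 / 1_000_000 / INTERVAL
        up_mbps = (cur.bytes_sent - prev.bytes_sent) * 8 / 1_000_000 / INTERVAL
        print(f"{time.strftime('%H:%M:%S')}  down {down_mbps:7.2f} Mb/s  up {up_mbps:7.2f} Mb/s")
        prev = cur

if __name__ == "__main__":
    log_total_throughput()
```

Logging these samples for a full day and taking the maximum (or a high percentile) gives you a peak figure to compare against the coax uplink.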

Multiple Google Vision OCR requests at once?

According to the Google Vision documentation, the maximum number of image files per request is 16. Elsewhere, however, I'm finding that the maximum number of requests per minute is as high as 1800. Is there any way to submit that many requests in such a short period of time from a single machine? I'm using curl on a Windows laptop, and I'm not sure how to go about submitting a second request before waiting for the first to finish almost a minute later (if such a thing is possible).
If you want to process 1800 images, you can group 16 images per request; at 1800/16, you will need 113 requests.
On the other hand, if the limit is 1800 requests per minute and each request can contain 16 images, then you can process 1800 * 16 = 28800 images per minute.
Please consider that the docs say: These limits apply to each Google Cloud Platform Console project and are shared across all applications and IP addresses using that project. So it doesn't matter whether requests are sent from a single machine or many machines.
Cloud Vision can receive parallel requests, so your app should be prepared to manage this amount of requests/responses. You may want to check this example and then use threads in your preferred programming language for sending/receiving parallel operations.
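As a rough illustration of the threading suggestion, the sketch below batches images 16 per request and submits several requests in parallel with Python's concurrent.futures. The API key, image list, worker count, and feature type are assumptions for illustration; the endpoint shown is the standard images:annotate REST endpoint.

```python
import base64
import concurrent.futures
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"
BATCH_SIZE = 16   # maximum images per request
MAX_WORKERS = 8   # number of parallel requests; tune to stay within your quota

def encode_image(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def annotate_batch(paths):
    """Send one request containing up to 16 images for TEXT_DETECTION."""
    payload = {"requests": [
        {"image": {"content": encode_image(p)},
         "features": [{"type": "TEXT_DETECTION"}]}
        for p in paths
    ]}
    return requests.post(ENDPOINT, json=payload, timeout=60).json()

def annotate_all(image_paths):
    batches = [image_paths[i:i + BATCH_SIZE]
               for i in range(0, len(image_paths), BATCH_SIZE)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        return list(pool.map(annotate_batch, batches))
```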

DynamoDB: High SuccessfulRequestLatency

We had a period of latency in our application that was directly correlated with latency in DynamoDB and we are trying to figure out what caused that latency.
During that time, the consumed reads and consumed writes for the table were normal (much below the provisioned capacity) and the number of throttled requests was also 0 or 1. The only thing that increased was the SuccessfulRequestLatency.
The high latency occurred during a period where we were doing a lot of automatic writes. In our use case, writing to dynamo also includes some reading (to get any existing records). However, we often write the same quantity of data in the same period of time without causing any increased latency.
Is there any way to understand what contributes to an increase in SuccessfulRequest latency where it seems that we have provisioned enough read capacity? Is there any way to diagnose the latency caused by this set of writes to dynamodb?
You can dig deeper by checking the Get Latency and Put Latency in CloudWatch.
As you have already mentioned, there was no throttling, your writes involve some reading, and the same volume of writes at other times doesn't cause any latency, so you should check what exactly in the read operations is causing this.
Check the SuccessfulRequestLatency metric while including the Operation dimension as well. Start with GetItem and BatchGetItem. If that doesn't help, include Scan and Query as well.
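If you'd rather pull those numbers programmatically than click through the console, a minimal boto3 sketch could look like the following. The table name, region, and time window are placeholders you'd replace with the period around your latency event.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region
end = datetime.utcnow()
start = end - timedelta(hours=1)  # window around the latency event

for operation in ["GetItem", "BatchGetItem", "Query", "Scan", "PutItem"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="SuccessfulRequestLatency",
        Dimensions=[
            {"Name": "TableName", "Value": "your-table-name"},  # placeholder
            {"Name": "Operation", "Value": operation},
        ],
        StartTime=start,
        EndTime=end,
        Period=60,
        Statistics=["Average", "Maximum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(operation, point["Timestamp"], point["Average"], point["Maximum"])
```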
High request latency can sometimes happen when DynamoDB is doing an internal failover of one of its storage nodes.
Internally within Dynamo each storage partition has to be replicated across multiple nodes to provide a high level of fault tolerance. Occasionally one of those nodes will fail and a replacement node has to be introduced, and this can result in elevated latency for a subset of affected requests.
The advice I've had from AWS is to use a short timeout and a fast retry (e.g. 100ms) if your use-case is latency-sensitive. It's my understanding that only requests that hit the affected node experience increased latency, so within one or two retries you'll hit a different node and get a successful response, with minimal impact on your overall latency. Obviously it's hard to verify this, because it's not a scenario you can reproduce!
If you've got a support contract with AWS, it's well worth submitting a support ticket from the AWS console when events like this happen. They are usually able to provide an insight into what actually happened.
Note: If you're doing retries, remember to use exponential backoff to reduce the risk of throttling.
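If your client happens to be boto3, one way to apply the short-timeout/fast-retry advice is via botocore's Config object; the timeout values below are illustrative (roughly the 100ms mentioned above), not a recommendation for every workload:

```python
import boto3
from botocore.config import Config

# Aggressive timeouts so a request stuck on a slow storage node fails fast
# and is retried, hopefully against a healthier node. Values are illustrative.
fast_retry_config = Config(
    connect_timeout=0.1,  # seconds
    read_timeout=0.1,     # seconds
    retries={"max_attempts": 3, "mode": "standard"},  # standard mode backs off exponentially
)

dynamodb = boto3.client("dynamodb", config=fast_retry_config)
```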

Tuning the cost of data transfer with NGINX

I have a streaming setup using nginx and I would like to know how to fine-tune the data transfer. Say I have the following in this diagram.
You can see one person is connected via a media player but nobody is watching their stream; it remains connected constantly, and even if I reboot nginx it will reconnect. So it is currently at 56.74GB but can reach up to 500GB or more. Does this get charged as data transfer on my hosting bill, or am I OK to forget about this?
I just want to understand best practices when using nginx live streaming and reduce the cost of users using my server as much as possible.
Would love some good advice on this from anyone doing something similar.
Thanks
When hosting providers procure traffic capacity wholesale for their clients, they usually have to pay on a 95th percentile utilisation scale. This means that if the 5-minute average utilisation is at or below 5Gbps 95% of the time, they pay at the rate for 5Gbps for all of their traffic. Even if consumption at about 04:00 in the morning is way below 1Gbps, and at certain times of the day it spikes way above 5Gbps for many minutes at a time, they still pay for 5Gbps, which is their 95th percentile on a 5-minute average basis.
Another consideration is that links are usually symmetrical, whereas most hosting providers that host websites have very asymmetrical traffic patterns -- an average HTTP request is likely to be about 1KB, whereas a response will likely be around 10KB or more.
As for the first point, since it's relatively difficult to calculate the 95th percentile usage for each client individually, providers absorb the cost and charge their retail clients on a TB/month basis. As for the second point, it means that in most circumstances the incoming capacity is already paid for and largely unused, which is why most providers only really charge for outgoing traffic.
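To make the 95th percentile billing concrete, here is a tiny sketch that computes it from a month of 5-minute utilisation samples; the sample values are made up for illustration:

```python
import math
import random

def percentile_95(samples_mbps):
    """95th percentile of 5-minute average utilisation samples (Mb/s)."""
    ordered = sorted(samples_mbps)
    # Discard the top 5% of samples; the highest remaining value is what gets billed.
    index = math.ceil(0.95 * len(ordered)) - 1
    return ordered[index]

# Roughly 8640 five-minute samples in a 30-day month; values here are synthetic.
samples = [random.uniform(500, 5500) for _ in range(8640)]
print(f"Billable rate: {percentile_95(samples):.0f} Mb/s")
```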

Does changing the data rate of a line increase throughput?

I'm using IT Guru's Opnet to simulate different networks. I've run the basic HomeLAN scenario and by default it uses an ethernet connection running at a data rate of 20Kbps. Throughout the scenarios this is changed from 20K to 40K, then to 512K and then to a T1 line running at 1.544Mbps. My question is - does increasing the data rate for the line increase the throughput?
I have this graph output from the program to display my results:
Please note it's the image in the foreground that is of interest.
In general, the signaling capacity of a data path is only one factor in the net throughput.
For example, TCP is known to be sensitive to latency. For any particular TCP implementation and path latency, there will be a maximum speed beyond which TCP cannot go regardless of the path's signaling capacity.
Also consider the source and destination of the traffic: changing the network capacity won't change the speed if the source is not sending the data any faster or if the destination cannot receive it any faster.
In the case of network emulators, also be aware that buffer sizes can affect throughput. The size of the network buffer must be at least as large as the signal rate multiplied by the latency (the Bandwidth Delay Product). I am not familiar with the particulars of Opnet, but I have seen other emulators where it is possible to set a buffer size too small to support the selected rate and latency.
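As a quick illustration of the bandwidth-delay product point, the sketch below computes the minimum buffer size for the line rates mentioned in the question, at an assumed path latency of 50 ms (adjust to your scenario):

```python
LATENCY_S = 0.050  # assumed path latency of 50 ms

# Line rates from the question, in bits per second
rates_bps = {"20 Kbps": 20_000, "40 Kbps": 40_000,
             "512 Kbps": 512_000, "T1 (1.544 Mbps)": 1_544_000}

for name, bps in rates_bps.items():
    bdp_bytes = bps * LATENCY_S / 8  # bandwidth-delay product in bytes
    print(f"{name:>16}: buffer should be at least {bdp_bytes:,.0f} bytes")
```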
I have written a couple of articles related to these topics which may be helpful:
This one discusses common network bottlenecks: Common Network Performance Problems
This one discusses emulator configuration issues: Network Emulators
