I am receiving the following error when using the Speech API. I am looking to get the same kind of result I would get if the file were under the limit. Thanks in advance.
asynch <- gl_speech(MonoPath, asynch = TRUE)
gl_speech_op(asynch)
Error: API returned: Request payload size exceeds the limit: 10485760 bytes.
I just ran into this same issue, and it appears to be because my API is on the free trial even though I have a credit card connected to it.
https://cloud.google.com/speech-to-text/quotas
By upgrading to a paid account you can increase your quotas.
I have yet to get approval to go paid from my finance department so I can't guarantee this will solve your issue at this time.
I finally found the trial setting you mentioned. You have to look at the quotas for the project, and there will be a blue button labeled "Upgrade Account".
I did the upgrade, but there's no change; I'm still getting the error:
Request payload size exceeds the limit: 10485760 bytes.
I'm using the Cloud SDK for speech recognition and sending the request via the command line with the long-running recognize command. No changes were needed other than one tweak to the example they gave: I had to remove the single quotes around the file name (and make sure the filename has no spaces). Example:
gcloud ml speech recognize-long-running \
  gs://cloud-samples-tests/speech/brooklyn.flac \
  --language-code='en-US' --async
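Back in R with googleLanguageR, the equivalent route should be to point gl_speech() at a Cloud Storage URI instead of a local file, since the payload-size limit applies to audio sent inline with the request. A minimal sketch, assuming the audio has already been uploaded to a bucket you control (the bucket and file names are placeholders):
library(googleLanguageR)

# audio previously uploaded to Cloud Storage (placeholder names)
asynch <- gl_speech("gs://my-bucket/mono-audio.flac",
                    languageCode = "en-US",
                    asynch = TRUE)

# poll the long-running operation until the transcript is ready
gl_speech_op(asynch)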
I seem to be logged in properly; is it possible it's not recognizing my credentials properly and defaulting to a trial-type request?
I am using Alertatron to send manual alerts to the Bybit testnet exchange. I am getting the following error log. Please let me know what the issue is.
======error start==
[v283, bybit, jothibybit, EOSUSD] ::: market(side=buy, amount=1);
Script market v1.0.0, by Alertatron
using buy offset of 0 from 2.884 (current price) --> 2.884
[bybit, jothibybit, EOSUSD] Executing market order to buy 1.
Not enough margin to cover order that size [post /v2/private/order/create] - status code: 30031
Session 40b71524 has no more commands to process on bybit (jothibybit), EOSUSD - waiting for background processes...
bybit : jothibybit : No active background processes for EOSUSD. Done.
Session 40b71524 finished waiting for related background tasks
Request to close exchange connection bybit, jothibybit. Not being used any more.
Bybit closed
Bot entering idle state - updating to latest release.
===end log
My code:
jothibybit(EOSUSD) {
  exchangeSettings(leverage=cross);
  cancel(which=all);
  market(side=buy, amount=1);
}
#bot
Generally that error means that you do not have enough money on your Bybit account to execute the trade.
The reason can be that you simply do not have enough money, in which case you need to reduce the trade size, or that you are trying to execute the trade from a subaccount, which sometimes does not work properly. If you are using a subaccount API key on Bybit, try executing the trades with a main-account API key.
In both cases, double-check in Alertatron that you have access to the Bybit balance by running a manual alert:
https://alertatron.com/docs/automated-trading/balance
While you are executing that test trade, you should open the "live bot output" in Alertatron, to see the response you get from Bybit.
If that works, try to execute an example trade:
https://alertatron.com/docs/automated-trading/api-keys-bybit
I know a similar question has been asked (link), but the response didn't work for me.
TLDR: I keep running into errors when trying to authenticate Google Cloud Storage in RStudio. I'm not sure what is going wrong and would love advice.
I have downloaded both the GCS_AUTH_FILE (I created a service account with service admin privileges and downloaded the key associated with that service account) and the GAR_CLIENT_WEB_JSON (I created an OAuth 2.0 Client ID and downloaded the associated JSON file).
I've tried authenticating my Google Cloud Storage in several ways and hit different errors.
Way 1 - automatic setup:
gcs_setup()
Then I select any one of the options and get the error:
Error in if (file.exists(local_file)) { : argument is of length zero
That error happens no matter which of the three options I select.
Way 2 - basic, following manual setup instructions from the package:
Sys.setenv("GCS_DEFAULT_BUCKET" = "my-default-bucket",
"GCS_AUTH_FILE" = "/fullpath/to/service-auth.json")
gcs_auth()
In this case, GCS_AUTH_FILE is the file that I mentioned at the beginning of this post, and the GCS_DEFAULT_BUCKET is the name of the bucket. When I run the first line, it seems to be working (nothing goes awry and it runs just fine), but when I run gcs_auth() I get taken to a web browser page that states:
"Authorization Error
Error 400: invalid_request
Missing required parameter: client_id"
Way 3 - following the method from the post I linked above:
This way involves manually setting the .Renviron file with the GCS_AUTH_FILE and GAR_CLIENT_WEB_JSON locations, and then running gar_auth(). And yet again, I get the exact same error as in Way 2.
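For concreteness, the setup I am attempting looks roughly like this (a sketch only; the paths and bucket name are placeholders, and gcs_auth() taking the service-account key directly via json_file is something I am assuming from recent googleCloudStorageR documentation):
# contents of .Renviron (placeholder paths)
# GCS_AUTH_FILE="/fullpath/to/service-auth.json"
# GAR_CLIENT_WEB_JSON="/fullpath/to/oauth-client.json"
# GCS_DEFAULT_BUCKET="my-default-bucket"

library(googleCloudStorageR)

# interactive OAuth flow - this is where I hit the
# "Missing required parameter: client_id" page
gcs_auth()

# non-interactive alternative: authenticate with the service-account key,
# which should skip the browser flow entirely
gcs_auth(json_file = Sys.getenv("GCS_AUTH_FILE"))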
Any ideas about what could be going wrong? Thanks for your help. I wasn't sure how to put in totally reproducible code in this case, so if there is a way I should do that, please let me know.
Even with debug mode enabled for Remote Config, I still get the following:
Error fetching remote config values Optional(Error Domain=com.google.remoteconfig.ErrorDomain Code=8002 "(null)"
UserInfo={error_throttled_end_time_seconds=1483110267.054194})
Here is my debug code:
let debug = FIRRemoteConfigSettings(developerModeEnabled: true)
FIRRemoteConfig.remoteConfig().configSettings = debug!
Shouldn't the above prevent throttling?
How long will the throttle error remain in effect?
I've experienced the same error due to throttling. I was calling FIRRemoteConfig.remoteConfig().fetchWithExpirationDuration with an expiry that was less than 60 seconds.
To immediately get around this issue during testing, use an alternative device. The throttling occurs against a particular device, so, for example, move from your simulator to a physical device.
The intention is not to have a single client flooding the server with fetch requests every second. Make sensible use of the caching it offers out of the box and fetch only when necessary.
When you receive this error, plug the value of error_throttled_end_time_seconds into an epoch converter (like this one at https://www.epochconverter.com) and it will tell you the time when throttling ends. I've tested this myself, and the throttling remains in effect for 1 hour from the first moment you are throttled. So either wait an hour or try some of the other recommendations given here.
UPDATE: Also, if you continue making config requests and receive the throttle error, the expire timeout does not increase (i.e. "you are not further penalized").
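If you would rather not paste the value into a website, the same conversion is a one-liner in most languages; for example, in R (using the timestamp from the question above):
as.POSIXct(1483110267.054194, origin = "1970-01-01", tz = "UTC")
# [1] "2016-12-30 15:04:27 UTC"  <- throttling on this device ends here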
The quick and easy hack to get your app running is to delete the application and reinstall it. Firebase identifies your device as a new device after reinstalling.
Hope it helps and saves you time.
This is really frustrating. I have a 104 MB JSON file that I want to upload to my Firebase database through the web front end, but after a random period of time (I've timed it, it's not constant, anywhere from 2 to 20 seconds) I get the error:
There was a problem contacting the server. Try uploading your file again.
So I do try again, and it just keeps failing. I've uploaded files nearly this big before, and the limit for stored data in the Realtime Database is 1 GB, so I'm not even close to that. Why does it keep failing to upload?
This is the error I get in chrome dev tools:
Failed to load resource: net::ERR_CONNECTION_ABORTED
https://project.firebaseio.com/.upload?auth=eyJhbGciOiJIUzI1NiIsInR5cCI6…Q3NiwiYWRtaW4iOnRydWUsInYiOjB9.CihvjvLSlx43nOBynAJeyibkBRtygeRlG4Yo1t3jKVA
Failed to load resource: net::ERR_CONNECTION_ABORTED
If I click on the link that shows up in the error, it's a page with the words POST request required.
Turns out the answer is to ignore the web importer entirely and use firebase-import. It worked perfectly the first time and only took a minute to upload the whole JSON. It also has merging capabilities.
Using firebase-import as the accepted answer suggested, I get the error:
Error: WRITE_TOO_BIG: Data to write exceeds the maximum size that can be modified with a single request.
However, with the firebase-cli I was successful in deleting my entire database:
firebase database:remove /
It seems to traverse down your database tree automatically to find chunks that are under the size limit, and then it issues multiple delete requests. It takes some time, but it definitely works.
You can also import via a json file:
firebase database:set / data.json
I'm unsure if firebase database:set supports merging.
I am trying to use the Google Translate REST API, and while requesting the following URL:
http://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=test&langpair=en|hi&key=mykey
I am getting the following response:
Response: {"responseData": null, "responseDetails": "Quota Exceeded.
Please see
http://code.google.com/apis/language/translate/overview.html",
"responseStatus": 403}
I started getting this message only today. I tried using the service again after one or two months; previously it was working perfectly. Has Google stopped the free Google Translate service, or what?
You exceeded your quota. Google started to limit the number of API calls a few months back due to the large number of users using the tool excessively.
EDIT: Read the notice on the top of this page: http://code.google.com/apis/language/translate/v2/getting_started.html
Google has moved to a paid model. We moved to the free Bing Translation API; it's very similar, seems to be better at translating, and is still free:
http://www.microsoft.com/web/post/using-the-free-bing-translation-apis
Example of how to use it:
http://basharkokash.com/post/2010/04/19/Bing-Translator-for-developers.aspx
Well, I think the error message explains itself - there seems to be some daily quota on the use of their API, and you have exceeded it.
But yes, Google is discontinuing the free version of their Translate API, and you will have to pay to continue using it after December 1, 2011.