Error while sending trade commands to Bybit testnet - margin

I am using Alertatron to send manual alerts to the Bybit testnet exchange, and I am getting the following error log. Please let me know what the issue is.
======error start==
[v283, bybit, jothibybit, EOSUSD] ::: market(side=buy, amount=1);
Script market v1.0.0, by Alertatron
using buy offset of 0 from 2.884 (current price) --> 2.884
[bybit, jothibybit, EOSUSD] Executing market order to buy 1.
Not enough margin to cover order that size [post /v2/private/order/create] - status code: 30031
Session 40b71524 has no more commands to process on bybit (jothibybit), EOSUSD - waiting for background processes...
bybit : jothibybit : No active background processes for EOSUSD. Done.
Session 40b71524 finished waiting for related background tasks
Request to close exchange connection bybit, jothibybit. Not being used any more.
Bybit closed
Bot entering idle state - updating to latest release.
===end log
My code:
jothibybit(EOSUSD) {
  exchangeSettings(leverage=cross);
  cancel(which=all);
  market(side=buy, amount=1);
}
#bot

Generally, error 30031 means that you do not have enough money in your Bybit account to cover the trade.
There are two common causes: either the balance really is too small, in which case you need to reduce the trade size, or you are trying to execute the trade from a subaccount, which sometimes does not work properly. If you are using a subaccount API key on Bybit, try executing the trades with a main account API key instead.
In both cases, double-check in Alertatron that you have access to the Bybit balance by running a manual alert:
https://alertatron.com/docs/automated-trading/balance
While that alert is running, open the "live bot output" in Alertatron to see the response you get from Bybit.
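For example, a balance check in the same alert format as your script might look like this (a sketch, assuming the balance command described in the linked docs):

jothibybit(EOSUSD) {
  balance();
}
#bot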
If that works, try to execute an example trade:
https://alertatron.com/docs/automated-trading/api-keys-bybit

Related

RMAN does not delete archive logs not applied on GG Downstream

There is an issue in a topology of three hosts.
The primary has a scheduled OS task (running every hour) that uses RMAN to delete archive logs older than 3 hours. The archivelog deletion policy is configured to "applied on all standby".
There are two remote log_archive_dest entries: a physical standby and a downstream server. Every day, "RMAN-08120: WARNING: archived log not deleted, not yet applied by standby" appears in the task's logs, and then the situation resolves itself within 2-3 hours.
I've checked V$ARCHIVED_LOG during the issue and figured out that the redo logs are not applied on the downstream server. I have not caught the issue on the downstream server itself yet; during the "good" periods all the apply processes are enabled, but the dba_apply_progress view tells me that the apply_time of the messages is 01-JAN-1970.
The dba_capture view shows that the capture processes' status is ENABLED, and status_change_time is approximately the time the RMAN issue resolved.
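For reference, the checks above were queries along these lines (a rough sketch, not my exact statements):

-- on the primary: which archived logs are marked as applied
SELECT sequence#, dest_id, applied FROM v$archived_log ORDER BY sequence#;

-- on the downstream: apply progress and capture status
SELECT apply_name, apply_time, applied_message_create_time FROM dba_apply_progress;
SELECT capture_name, status, status_change_time FROM dba_capture;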
I'm new to GoldenGate, Streams and downstream capture. I've read the Oracle reference docs, but couldn't find anything about a schedule for capture or apply processes.
Could someone please help me figure out what else to check or what to read?
Grateful for every response.
Thanks.

How to set up webhooks to send a message to Slack when Firebase Functions crash?

I need to actively receive crash notifications for Firebase Functions.
Is there any way to set up Slack webhooks to receive a message when Firebase Functions throw an error, crash, or something like that?
I would also love to receive issue messages by velocity, i.e. when Firebase Functions crash 50 times a day.
Thank you so much.
First you have to create a log-based (counter) metric that counts the specific error occurrences, and second, you create an alerting policy with a Slack notification channel.
Let's start by finding the logs that appear when the function throws an error. Since I didn't have a function that would crash, I used the logs indicating that a function had started.
Next you have to create the log-based metric. Ignore the next screen and go to Monitoring > Alerting. Click "Create new policy", find your metric, and set "Rolling window" to whatever time period you need; for testing I used 1 minute. Then set "Rolling window function" to "mean".
Now configure when the alert has to be triggered - I chose a threshold of over 3 (within the 1-minute window).
On the next screen you select the notification channel. In the case of Slack, it has to be configured first under "Notification Channels".
You can save the policy now.
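If you prefer the command line, the log-based metric from the first step can also be created with gcloud. A minimal sketch - the metric name, the filter and the function name "myFunction" are placeholders you would replace with your own:

gcloud logging metrics create function-crash-count \
  --description="Counts errors thrown by myFunction" \
  --log-filter='resource.type="cloud_function" AND resource.labels.function_name="myFunction" AND severity>=ERROR'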
After a few minutes I had gathered enough data to generate two incidents.
There is also some alerting-related documentation that may help you understand how to use these features.

Google Speech API request exceeds limit (R)

I am receiving the following error when using the Speech API. I would like to get the same result as I would for a file under the limit. Thanks in advance.
library(googleLanguageR)

asynch <- gl_speech(MonoPath, asynch = TRUE)
gl_speech_op(asynch)
Error: API returned: Request payload size exceeds the limit: 10485760 bytes.
I just ran into this same issue, and it appears to be because my account is on the free trial, even though I have a credit card connected to it.
https://cloud.google.com/speech-to-text/quotas
By upgrading to a paid account you can increase your quotas.
I have yet to get approval from my finance department to go paid, so I can't guarantee this will solve your issue.
I finally found the trial setting you mentioned. You have to look at the quotas for the project, and there will be a blue button labeled "Upgrade Account".
I did the upgrade, but no change - I'm still getting the error:
Request payload size exceeds the limit: 10485760 bytes.
I'm now using the Cloud SDK for speech recognition and sending the request via the command line with the long-running recognize command. No syntax changes were needed other than one tweak to the example they give: I had to remove the single quotes around the file name (and make sure your filename has no spaces).
gcloud ml speech recognize-long-running \
'gs://cloud-samples-tests/speech/brooklyn.flac' \
--language-code='en-US' --async
I seem to be logged in properly; is it possible that it is not recognizing my credentials and is defaulting to a trial-type request?
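One thing worth noting: the gcloud example above reads the audio from a Cloud Storage URI instead of sending it inline. As far as I know, googleLanguageR's gl_speech() also accepts a gs:// URI, which keeps the audio out of the request payload entirely. A minimal sketch with a hypothetical bucket path:

library(googleLanguageR)

# upload the audio to Cloud Storage first, then reference it by URI
asynch <- gl_speech("gs://my-bucket/mono-audio.flac", asynch = TRUE)
gl_speech_op(asynch)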

How long does Firebase throttle you?

Even with debug enabled for RemoteConfig, I still managed to get the following:
Error fetching remote config values Optional(Error Domain=com.google.remoteconfig.ErrorDomain Code=8002 "(null)"
UserInfo={error_throttled_end_time_seconds=1483110267.054194})
Here is my debug code:
let debug = FIRRemoteConfigSettings(developerModeEnabled: true)
FIRRemoteConfig.remoteConfig().configSettings = debug!
Shouldn't the above prevent throttling?
How long will the throttle error remain in effect?
I've experienced the same error due to throttling. I was calling FIRRemoteConfig.remoteConfig().fetchWithExpirationDuration with an expiry that was less than 60 seconds.
To get around this issue immediately during testing, use an alternative device; the throttling is applied per device. E.g. move from your simulator to a real device.
The intention is not to have a single client flooding the server with fetch requests every second. Make sensible use of the caching it offers out of the box and fetch only when necessary.
When you receive this error, plug the value of error_throttled_end_time_seconds into an epoch converter (like this one at https://www.epochconverter.com) and it will tell you the time when throttling ends. I've tested this myself, and the throttling remains in effect for 1 hour from the first moment you are throttled. So either wait an hour or try some of the other recommendations given here.
UPDATE: Also, if you continue making config requests and receive the throttle error, the expire timeout does not increase (i.e. "you are not further penalized").
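If you'd rather do the conversion in code than paste into an epoch converter, the throttle-end time can be read straight off the error's userInfo. A minimal sketch using the same FIR-prefixed SDK as the question (the 60-second expiry is just an example value):

FIRRemoteConfig.remoteConfig().fetch(withExpirationDuration: 60) { status, error in
    if let error = error as NSError?,
       let end = error.userInfo["error_throttled_end_time_seconds"] as? Double {
        // The same value you would paste into the epoch converter
        let throttleEnds = Date(timeIntervalSince1970: end)
        print("Throttled until \(throttleEnds)")
    }
}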
The quick and easy hack to get your app running is to delete the application and reinstall it. Firebase identifies your device as a new device after reinstalling.
Hope it helps and saves you time.

Google Cloud Bigtable: repeated grpc error code 13, then suddenly success

In short, we are sometimes seeing that a small number of Cloud Bigtable queries fail repeatedly (tens or even hundreds of times in a row) with the error rpc error: code = 13 desc = "server closed the stream without sending trailers" until (usually) the query finally works.
In detail, our setup is as follows:
We are running a collection (< 10) of Go services on Google Compute Engine. Each service leases tasks from a pair of PULL task queues. Each task contains the ID of a Bigtable row. The task handler executes the following query:
row, err := tbl.ReadRow(ctx, <my-row-id>,
    bigtable.RowFilter(bigtable.ChainFilters(
        bigtable.FamilyFilter(<my-column-family>),
        bigtable.LatestNFilter(1))))
If the query fails, the task handler simply returns. Since we lease tasks with a lease time between 10 and 15 minutes, a little while later the lease on that task will expire, it will be leased again, and we'll retry. The tasks have a max retry count of 1000, so they can be retried many times over a long period. In a small number of cases, a particular task will fail with the grpc error above. The task will typically fail with this same error every time it runs, for hours or days on end, before (seemingly out of the blue) eventually succeeding (or the task runs out of retries and dies).
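For concreteness, the handler pattern is roughly the following (an illustrative sketch, not our actual service code):

func handleTask(ctx context.Context, tbl *bigtable.Table, rowID string) {
    row, err := tbl.ReadRow(ctx, rowID,
        bigtable.RowFilter(bigtable.ChainFilters(
            bigtable.FamilyFilter("my-column-family"),
            bigtable.LatestNFilter(1))))
    if err != nil {
        // Return without deleting the task; the lease will expire
        // and the task will be leased and retried again later.
        return
    }
    process(row) // hypothetical downstream processing
}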
Since this often takes so long, it seems unrelated to server load. For example, right now on a Sunday morning these servers are very lightly loaded, and yet I see plenty of these errors when I tail the logs. From this answer, I had originally thought that this might be due to querying for a large amount of data, perhaps near the maximum that Cloud Bigtable will support. However, I now see that this is not the case; I can find many examples where tasks that failed many times finally succeed and report that only a small amount of data (e.g. < 1 MB) was retrieved.
What else should I be looking at here?
Edit: from further testing I now know that this is completely machine (client) independent. If I tail the log on one of the task-leasing machines, wait for a "server closed the stream without sending trailers" error, and then try a one-off ReadRow query for the same rowId from another, unrelated, totally unused machine, I get the same error repeatedly.
This error is typically caused by having more than 256 MB of data in your reply.
However, there is currently a bug in our server-side error-handling code that allows some invalid characters in HTTP/2 trailers, which is not allowed by the spec. This means that some error messages containing invalid characters will be surfaced as this kind of error. This should be fixed early next year.
