I can't create an order on the Bybit exchange using Python (pybit)

I need help with the following code:
from pybit.usdt_perpetual import HTTP

session_auth_ = HTTP(
    endpoint='https://api.bybit.com',
    api_key=api_key,
    api_secret=secret_key
)
res = session_auth_.place_active_order(
    symbol='LTCUSDT',
    side='Sell',
    order_type='Limit',
    qty=1,
    price=56.01,
    time_in_force='GoodTillCancel',
    reduce_only=False,
    close_on_trigger=False
)
Error:
pybit.exceptions.InvalidRequestError: Oc_diff[568068600], new_oc[568068600] with ob[0]+ab[0] (ErrCode: 130021) (ErrTime: 20:24:51).
Request → POST https://api.bybit.com/private/linear/order/create: {'api_key': '.........', 'close_on_trigger': False, 'order_type': 'Limit', 'price': 56.1, 'qty': 1, 'recv_window': 5000, 'reduce_only': False, 'side': 'Sell', 'symbol': 'LTCUSDT', 'time_in_force': 'GoodTillCancel', 'timestamp': 1666815890695, 'sign': 'cf8c055049303634c8c6aa17077689ddb6d8ca490302e392b0590b3dbd02ca19'}.
I tried changing the quantity and the price, but that did not help.

Error 130021 means "order cost not available", i.e. the available balance in your derivatives (USDT perpetual) wallet cannot cover the cost of the order. Not having enough funds there is the usual cause.
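Before placing the order you can check whether the wallet can actually cover it. A minimal sketch, assuming this pybit version exposes get_wallet_balance() on the same session and returns the v2 response layout (the field names are my assumption, so verify against your pybit/Bybit docs):

# Sketch: verify available USDT before placing the order. Assumes get_wallet_balance()
# exists on this pybit session and the result is keyed by coin with 'available_balance'.
balance = session_auth_.get_wallet_balance(coin="USDT")
available = balance["result"]["USDT"]["available_balance"]

order_notional = 1 * 56.01  # qty * price; the required margin also depends on leverage and fees
if available < order_notional:
    print(f"Not enough funds: {available} USDT available, order notional is {order_notional} USDT")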


Count bytes with InfluxDB's Telegraf

I can receive messages with the inputs.mqtt_consumer Telegraf plugin, but it produces a lot of data in InfluxDB.
How can I configure Telegraf to just count the number of received bytes and messages and report those to InfluxDB?
# Configuration for telegraf agent
[agent]
  interval = "20s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false

[[outputs.influxdb_v2]]
  urls = ["XXXXXXXXXXXXXXXX"]
  token = "$INFLUX_TOKEN"
  organization = "XXXXXXXXXXXXXXX"
  bucket = "XXXXXXXXXXXXXXX"

[[inputs.mqtt_consumer]]
  servers = ["tcp://XXXXXXXXXXXXXXXXXXXXX:1883"]
  topics = [
    "#",
  ]
  data_format = "value"
  data_type = "string"
I tried to google around but didn't find any clear way to do it.
I just want the number of bytes and messages received each minute for the selected topic.
I did not manage to receive all the messages and count them myself, but I found a solution where I get the data from the broker instead. Not exactly what I asked for, but fine for what I need:
topics = [
  "$SYS/broker/load/messages/received/1min",
  "$SYS/broker/load/messages/sent/1min",
]
...
data_format = "value"
data_type = "float"

R: search_fullarchive() and Twitter Academic research API track

I was wondering whether anyone has found a way to use search_fullarchive() from the "rtweet" package in R with the new Twitter Academic Research project track?
The problem is whenever I try to run the following code:
search_fullarchive(q = "sunset", n = 500, env_name = "AcademicProject", fromDate = "202010200000", toDate = "202010220000", safedir = NULL, parse = TRUE, token = bearer_token)
I get the following error "Error: Not a valid access token". Is that because search_fullarchive() is only for paid premium accounts and that doesn't include the new academic track (even though you get full archive access)?
Also, can you retrieve more than 500 tweets (e.g., n = 6000) when using search_fullarchive()?
Thanks in advance!
I've got the same problem with the Twitter Academic Research API. I think if you set n = 100 or just skip the argument, the command will return 100 tweets. Also, the rtweet package does not (yet) support the Academic Research API.
Change your code to this:
search_fullarchive(q = "sunset", n = 500, fromDate = "202010200000", toDate = "202010220000", safedir = NULL, parse = TRUE, token = t, env_name = "Your Environment Name attained in the Dev Dashboard")
Also, the token must be created like this:
t <- create_token(
  app = "App Name",
  consumer_key = 'Key',
  consumer_secret = 'Secret',
  access_token = '',
  access_secret = '',
  set_renv = TRUE
)

Why can't I access information in tbody?

I am doing web scraping with BeautifulSoup but cannot find the tr elements inside tbody. The tr elements are there in the website's source (see the screenshot: https://i.stack.imgur.com/NFwEV.png), yet find_all only returns the tr inside thead.
The link I am scraping: https://cpj.org/data/killed/?status=Killed&motiveConfirmed%5B%5D=Confirmed&type%5B%5D=Journalist&start_year=1992&end_year=2019&group_by=year
Here is some of my code:
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "https://cpj.org/data/killed/?status=Killed&motiveConfirmed%5B%5D=Confirmed&type%5B%5D=Journalist&start_year=1992&end_year=2019&group_by=year"
html = urlopen(url)
soup = BeautifulSoup(html, 'lxml')
type(soup)
tr = soup.find_all("tr")
print(tr)
The data is requested from an API that returns JSON, i.e. it is added dynamically, so it does not appear in the HTML returned by your request to the landing page. You can find the API endpoint used to get the info in the browser's network tab.
You can set one of the parameters (pageSize) to a number larger than the expected result set, then check whether you need to make further requests.
import requests
r = requests.get('https://cpj.org/api/datamanager/reports/entries?distinct(personId)&includes=organizations,fullName,location,status,typeOfDeath,charges,startDisplay,mtpage,country,type,motiveConfirmed&sort=fullName&pageNum=1&pageSize=2000&in(status,%27Killed%27)&or(eq(type,%22media%20worker%22),in(motiveConfirmed,%27Confirmed%27))&in(type,%27Journalist%27)&ge(year,1992)&le(year,2019)').json()
Otherwise, you can do an initial call, work out how many more requests to make, and alter the appropriate parameters in the URL. The pageCount is returned in the response.
You can see the relevant parts of the response here for pageSize 20:
{'rowCount': 1343,
'pageNum': 1,
'pageSize': '20',
'pageCount': 68,
All the relevant info for a loop to get all results is there.
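For completeness, a minimal sketch of such a loop, keeping the pageSize of 20 and assuming pageNum can simply be incremented up to the returned pageCount (a straightforward reading of the response fields above):

import requests
import pandas as pd

# Same endpoint as above, with pageNum left as a placeholder
base = ('https://cpj.org/api/datamanager/reports/entries?distinct(personId)'
        '&includes=organizations,fullName,location,status,typeOfDeath,charges,startDisplay,mtpage,country,type,motiveConfirmed'
        '&sort=fullName&pageNum={page}&pageSize=20&in(status,%27Killed%27)'
        '&or(eq(type,%22media%20worker%22),in(motiveConfirmed,%27Confirmed%27))'
        '&in(type,%27Journalist%27)&ge(year,1992)&le(year,2019)')

frames = []
with requests.Session() as s:
    first = s.get(base.format(page=1)).json()
    frames.append(pd.DataFrame(first['data']))
    for page in range(2, first['pageCount'] + 1):  # fetch the remaining pages
        frames.append(pd.DataFrame(s.get(base.format(page=page)).json()['data']))

all_rows = pd.concat(frames, ignore_index=True)
print(len(all_rows))  # should match rowCount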
After altering to a larger number you can see the following:
'rowCount': 1343,
'pageNum': 1,
'pageSize': '2000',
'pageCount': 1,
You can convert to a table using pandas:
import requests
import pandas as pd
r = requests.get('https://cpj.org/api/datamanager/reports/entries?distinct(personId)&includes=organizations,fullName,location,status,typeOfDeath,charges,startDisplay,mtpage,country,type,motiveConfirmed&sort=fullName&pageNum=1&pageSize=2000&in(status,%27Killed%27)&or(eq(type,%22media%20worker%22),in(motiveConfirmed,%27Confirmed%27))&in(type,%27Journalist%27)&ge(year,1992)&le(year,2019)').json()
df = pd.DataFrame(r['data'])
print(df)
Example of checking the actual count and making an additional request for the remaining records:
import requests
import pandas as pd

request_number = 1000

with requests.Session() as s:
    r = s.get('https://cpj.org/api/datamanager/reports/entries?distinct(personId)&includes=organizations,fullName,location,status,typeOfDeath,charges,startDisplay,mtpage,country,type,motiveConfirmed&sort=fullName&pageNum=1&pageSize=' + str(request_number) + '&in(status,%27Killed%27)&or(eq(type,%22media%20worker%22),in(motiveConfirmed,%27Confirmed%27))&in(type,%27Journalist%27)&ge(year,1992)&le(year,2019)').json()
    df = pd.DataFrame(r['data'])
    actual_number = r['rowCount']
    if actual_number > request_number:
        request_number = actual_number - request_number
        r = s.get('https://cpj.org/api/datamanager/reports/entries?distinct(personId)&includes=organizations,fullName,location,status,typeOfDeath,charges,startDisplay,mtpage,country,type,motiveConfirmed&sort=fullName&pageNum=2&pageSize=' + str(request_number) + '&in(status,%27Killed%27)&or(eq(type,%22media%20worker%22),in(motiveConfirmed,%27Confirmed%27))&in(type,%27Journalist%27)&ge(year,1992)&le(year,2019)').json()
        df2 = pd.DataFrame(r['data'])
        final = pd.concat([df, df2])
    else:
        final = df
To get the tabular content using the selectors you see when inspecting elements, you can try pyppeteer, which I've shown below how to work with. The approach is asynchronous, so I suggest you go for it unless you find an API to play with:
import asyncio
from pyppeteer import launch

url = "https://cpj.org/data/killed/?status=Killed&motiveConfirmed%5B%5D=Confirmed&type%5B%5D=Journalist&start_year=1992&end_year=2019&group_by=year"

async def get_table(link):
    browser = await launch(headless=False)
    [page] = await browser.pages()
    await page.goto(link)
    await page.waitForSelector("table.js-report-builder-table tr td")
    for tr in await page.querySelectorAll("table.js-report-builder-table tr"):
        tds = [await page.evaluate('e => e.innerText', td) for td in await tr.querySelectorAll("th,td")]
        print(tds)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(get_table(url))
The output looks like:
['Name', 'Organization', 'Date', 'Location', 'Attack', 'Type of Death', 'Charge']
['Abadullah Hananzai', 'Radio Azadi,Radio Free Europe/Radio Liberty', 'April 30, 2018', 'Afghanistan', 'Killed', 'Murder', '']
['Abay Hailu', 'Agiere', 'February 9, 1998', 'Ethiopia', 'Killed', 'Dangerous Assignment', '']
['Abd al-Karim al-Ezzo', 'Freelance', 'December 21, 2012', 'Syria', 'Killed', 'Crossfire', '']
['Abdallah Bouhachek', 'Révolution et Travail', 'February 10, 1996', 'Algeria', 'Killed', 'Murder', '']
['Abdel Aziz Mahmoud Hasoun', 'Masar Press', 'September 5, 2013', 'Syria', 'Killed', 'Crossfire', '']
['Abdel Karim al-Oqda', 'Shaam News Network', 'September 19, 2012', 'Syria', 'Killed', 'Murder', '']

JSON output from Vegeta HTTP load testing

I am using Vegeta to run some stress tests, but I am having trouble generating a JSON report. Running the following command, I can see the text results:
vegeta attack -targets="./vegeta_sagemaker_True.txt" -rate=10 -duration=2s | vegeta report -output="attack.json" -type=text
Requests [total, rate] 20, 10.52
Duration [total, attack, wait] 2.403464884s, 1.901136s, 502.328884ms
Latencies [mean, 50, 95, 99, max] 945.385864ms, 984.768025ms, 1.368113304s, 1.424427549s, 1.424427549s
Bytes In [total, mean] 5919, 295.95
Bytes Out [total, mean] 7104, 355.20
Success [ratio] 95.00%
Status Codes [code:count] 200:19 400:1
Error Set:
400
When I run the same command changing -type=text to -type=json, I receive really weird numbers and they don't make sense to me:
{
  "latencies": {
    "total": 19853536952,
    "mean": 992676847,
    "50th": 972074984,
    "95th": 1438787021,
    "99th": 1636579198,
    "max": 1636579198
  },
  "bytes_in": {
    "total": 5919,
    "mean": 295.95
  },
  "bytes_out": {
    "total": 7104,
    "mean": 355.2
  },
  "earliest": "2019-04-24T14:32:23.099072+02:00",
  "latest": "2019-04-24T14:32:25.00025+02:00",
  "end": "2019-04-24T14:32:25.761337546+02:00",
  "duration": 1901178000,
  "wait": 761087546,
  "requests": 20,
  "rate": 10.519793517492838,
  "success": 0.95,
  "status_codes": {
    "200": 19,
    "400": 1
  },
  "errors": [
    "400 "
  ]
}
Does anyone know why this should be happening?
Thanks!
These numbers are nanoseconds -- the internal representation of time.Duration in Go.
For example, the latencies.mean in the JSON is 992676847, which means 992676847 nanoseconds, that is 992676847/1000/1000 = 992.676847ms.
Actually, in vegeta, if you declare type as text (-type=text), it will use NewTextReporter, and print the time.Duration as a user-friendly string. If you declare type as json (-type=json), it will use NewJSONReporter and return time.Duration's internal representation:
A Duration represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.
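If you just want those durations in a human-friendly unit, here is a small sketch that converts the nanosecond fields of the JSON report to milliseconds (assuming the report was written to attack.json as in the command above):

import json

# Load the JSON report produced by `vegeta report -type=json -output=attack.json`
with open("attack.json") as f:
    report = json.load(f)

NS_PER_MS = 1_000_000  # time.Duration values are int64 nanosecond counts

for name, ns in report["latencies"].items():
    print(f"{name}: {ns / NS_PER_MS:.3f} ms")

print(f"duration: {report['duration'] / NS_PER_MS:.3f} ms")
print(f"wait: {report['wait'] / NS_PER_MS:.3f} ms")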

Neo.DatabaseError.General.UnknownError GC overhead limit exceeded in R 10.12.1

I'm totally new to Neo4j. I was loading the CSV file when this issue occurred. How can I fix this? Thanks so much!
library("RNeo4j")
library("curl")
graph <- startGraph("http://localhost:7474/db/data", username = "neo4j", password = "")
clear(graph, input = F)
query <- "LOAD CSV WITH HEADERS FROM {csv} AS row CREATE (n:flights {year: row.year, month: row.mo, dep_time: row.dep_time, arr_time: row.arr_time, carrier: row.carrier, tailnum: row.tailnum, flight: row.flight, origin: row.origin, dest: row.dest, air_time: row.air_time, distance: row.distance, hour: row.hour, minute: row.minute })
cypher(graph, query, csv = "file:///flights1/flights.csv")
Error: Client error: (400) Bad Request
Neo.DatabaseError.General.UnknownError
GC overhead limit exceeded
