Can someone help me get started with some basic tasks in IbPy? I just want to be able to query the current bid price for an instrument, such as the price of a single share of Google, or the current EUR/USD exchange rate.
I found the example at the bottom of the page here:
Fundamental Data Using IbPy
useful - but the output is somewhat confusing. How do I print just the current bid/ask price of a single contract to the screen?
(Just some bio info: yes, I am new to IbPy and Python, but I do have over 20 years of experience with C.)
Many kind thanks in advance!
Using the example you referred to, with slight changes:
import signal

from ib.opt import ibConnection, message
from ib.ext.Contract import Contract

def price_handler(msg):
    # Tick field 1 is the bid price, field 2 the ask price.
    if msg.field == 1:
        print("bid price = %s" % msg.price)
    elif msg.field == 2:
        print("ask price = %s" % msg.price)

def main():
    tws = ibConnection(port=7497)
    tws.register(price_handler, message.tickPrice)
    tws.connect()

    tick_id = 1
    c = Contract()
    c.m_symbol = 'AAPL'
    c.m_secType = 'STK'
    c.m_exchange = 'SMART'
    c.m_currency = 'USD'
    tws.reqMktData(tick_id, c, '', False)

    signal.pause()  # Keep the script alive so the handler keeps printing.

if __name__ == '__main__':
    main()
Output:
bid price = 149.55
ask price = 149.56
bid price = 149.59
ask price = 149.61
...
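For the EUR/USD part of your question, the same handler works unchanged; a currency pair is requested as a CASH contract routed to IDEALPRO. A minimal sketch (same main() as above, just a different contract):

c = Contract()
c.m_symbol = 'EUR'
c.m_secType = 'CASH'
c.m_exchange = 'IDEALPRO'
c.m_currency = 'USD'
tws.reqMktData(2, c, '', False)  # use a distinct tick_id per subscription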
Thanks for reading!
I'm trying to backtest a strategy that I wrote in Pine Script, and I'm struggling to create my entry conditions.
So this is the code:
import numpy as np
import pandas as pd
import vectorbt as vbt
from datetime import datetime
from binance.client import Client

symbols = ['SOLUSDT', 'BNBUSDT']

price = vbt.BinanceData.download(
    symbols,
    start='5 days ago UTC',
    end='Now UTC',
    interval='30m',
    missing_index='drop'
).get(['High', 'Low', 'Open', 'Close'])

high = price[0]
low = price[1]
open = price[2]
close = price[3]

stoch = vbt.STOCH.run(
    high=high,
    low=low,
    close=close,
    k_window=14
)
And I want to add
entries = abs(stoch.percent_k['SOLUSDT'] -
              stoch.percent_k['BNBUSDT']) > 50  # (my intention with abs is to get the absolute value)
exits = abs(stoch.percent_k['SOLUSDT'] -
            stoch.percent_k['BNBUSDT']) < 5
portfolio = vbt.Portfolio.from_signals(price[3], entries, exits, init_cash=10000)
My intention is to trigger a short order on one symbol and, simultaneously, a long order on the other with those signals.
And if anyone has a recommendation for educational resources about this particular package (besides the official site), it is welcome. I have read the examples in the docs, but it still feels a bit too complex for my level.
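A minimal sketch of one way the long/short pairing could be wired up, assuming stoch.percent_k really has one column per symbol as in the snippet above, and that the installed vectorbt version supports the short_entries/short_exits arguments of Portfolio.from_signals (both assumptions worth verifying):

# Continuing from the variables above (close, stoch).
spread = stoch.percent_k['SOLUSDT'] - stoch.percent_k['BNBUSDT']

# Long the lagging symbol, short the leading one when the spread blows out.
long_entries = pd.DataFrame({'SOLUSDT': spread < -50, 'BNBUSDT': spread > 50})
short_entries = pd.DataFrame({'SOLUSDT': spread > 50, 'BNBUSDT': spread < -50})

# Close both legs when the spread converges again.
flat = spread.abs() < 5
long_exits = pd.DataFrame({'SOLUSDT': flat, 'BNBUSDT': flat})
short_exits = long_exits.copy()

portfolio = vbt.Portfolio.from_signals(
    close,
    entries=long_entries,
    exits=long_exits,
    short_entries=short_entries,
    short_exits=short_exits,
    init_cash=10000
)
print(portfolio.stats())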
Two simple questions:
Does Warp10 integrate into streamlit to feed visualisations?
If so, please would you specify how this can be accomplished?
Thanking you in advance.
Best wishes,
There's no direct integration of Warp 10 in streamlit.
Although streamlit can handle many kinds of data, it's mainly focused on pandas DataFrames. DataFrames are tables, whereas Warp 10 Geo Time Series are time series. So even if Warp 10 were integrated into streamlit, it would still require some code to format the data properly for streamlit to show its full potential.
That being said, here is a small example of how to display data stored in Warp 10 with streamlit:
import json
from datetime import datetime, timedelta

import requests
import streamlit as st
from bokeh.palettes import Category10_10 as palette
from bokeh.plotting import figure

# Should be put in a configuration file.
fetch_endpoint = 'http://localhost:8080/api/v0/fetch'
token = 'READ'  # Change that to your actual token.

def load_data_as_json(selector, start, end):
    headers = {'X-Warp10-Token': token}
    params = {'selector': selector, 'start': start, 'end': end, 'format': 'json'}
    r = requests.get(fetch_endpoint, params=params, headers=headers)
    return r.text

st.title('Warp 10 Test')

# Input parameters
selector = st.text_input('Selector', value="~streamlit.*{}")
start_date = st.date_input('Start date', value=datetime.now() - timedelta(days=10))
start_time = st.time_input('Start time')
end_date = st.date_input('End date')
end_time = st.time_input('End time')

# Convert datetime.dates and datetime.times to microseconds (default time unit in Warp 10).
start = int(datetime.combine(start_date, start_time).timestamp()) * 1000000
end = int(datetime.combine(end_date, end_time).timestamp()) * 1000000

# Make the query to Warp 10 and get back a JSON.
json_data = load_data_as_json(selector, start, end)
gtss = json.loads(json_data)

# Iterate through the JSON and populate a Bokeh graph.
p = figure(title='GTSs', x_axis_label='time', y_axis_label='value')
for gts_index, gts in enumerate(gtss):
    tss = []
    vals = []
    for point in gts['v']:
        tss.append(point[0])
        vals.append(point[-1])
    p.line(x=tss, y=vals, legend_label=gts['c'] + json.dumps(gts['l']), color=palette[gts_index % len(palette)])
st.bokeh_chart(p, use_container_width=True)

# Also display the JSON.
st.json(json_data)
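As noted above, much of streamlit works best with DataFrames. A small sketch of how one might flatten the fetched GTS list into a pandas DataFrame, continuing from the json_data variable above and assuming each GTS point is a [timestamp, ..., value] list as in the Warp 10 JSON format:

import pandas as pd

rows = []
for gts in json.loads(json_data):
    name = gts['c'] + json.dumps(gts['l'])
    for point in gts['v']:
        # point[0] is the timestamp in microseconds, point[-1] the value.
        rows.append({'series': name, 'ts': pd.to_datetime(point[0], unit='us'), 'value': point[-1]})

df = pd.DataFrame(rows)
# One column per series, indexed by timestamp, ready for st.line_chart.
st.line_chart(df.pivot(index='ts', columns='series', values='value'))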
I need help using Instaloader to scrape Instagram posts that include #slowfashion from a specific timeframe.
I want to scrape the visual and textual data from the posts (specifically, the image(s) posted, their descriptions, and comments).
from datetime import datetime
from itertools import dropwhile, takewhile

import instaloader

# Use parameters to save different kinds of metadata.
# download_comments=True so that comments are saved, as described above.
L = instaloader.Instaloader(download_pictures=True,
                            download_videos=False,
                            download_comments=True,
                            save_metadata=True)

# Login
username = input("Enter your username: ")
L.interactive_login(username=username)

# User query
search = input("Enter Hashtag: ")
limit = int(input("How many posts to download: "))

# Hashtag object
hashtags = instaloader.Hashtag.from_name(L.context, search).get_posts()

# Download period. Posts are returned newest first, so SINCE is the
# later date and UNTIL the earlier one.
SINCE = datetime(2021, 5, 1)
UNTIL = datetime(2021, 3, 1)

no_of_downloads = 0
for post in takewhile(lambda p: p.date > UNTIL, dropwhile(lambda p: p.date > SINCE, hashtags)):
    if no_of_downloads == limit:
        break
    print(post.date)
    L.download_post(post, "#" + search)
    no_of_downloads += 1
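If you would rather collect the captions and comments in memory instead of (or in addition to) the files Instaloader writes, the Post object exposes them directly. A small sketch using post.caption and post.get_comments() (an authenticated session is needed for comments; the iterator is re-created because the one above is consumed by the download loop):

posts = instaloader.Hashtag.from_name(L.context, search).get_posts()
records = []
for post in takewhile(lambda p: p.date > UNTIL, dropwhile(lambda p: p.date > SINCE, posts)):
    records.append({
        'shortcode': post.shortcode,
        'date': post.date,
        'caption': post.caption,  # the post's description text (may be None)
        'comments': [c.text for c in post.get_comments()],
    })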
I am currently doing a project on football players. I am trying to scrape some public comments on football players for sentiment analysis, but it is the comments part I can't seem to do. Any help would be MUCH appreciated. Weirdly enough, I had it working, but then it stopped and I can't get comment scraping working again. The website I am scraping from is: https://sofifa.com/player/192985/kevin-de-bruyne/200025/
from time import sleep

import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from tqdm import tqdm_notebook

likes = []
dislikes = []
follows = []
comments = []

driver_path = '/Users/niallmcnulty/Desktop/GeneralAssembly/Lessons/DSI11-lessons/week05/day2_web_scraping_and_apis/web_scraping/selenium-examples/chromedriver'
driver = webdriver.Chrome(executable_path=driver_path)

# i = 0
for url in tqdm_notebook(urls):
    driver.get(url)
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    sleep(0.2)
    soup1 = BeautifulSoup(driver.page_source, 'lxml')
    try:
        dislike = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal bp3-intent-danger dislike-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        dislikes.append(dislike)
    except:
        pass
    try:
        like = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal bp3-intent-success like-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        likes.append(like)
    except:
        pass
    try:
        follow = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal follow-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        follows.append(follow)
    except:
        pass
    try:
        comment = soup1.find_all('p').text[0:10]
        comments.append(comment)
    except:
        pass
    # i += 1
    # if i % 5 == 0:
    #     sentiment = pd.DataFrame({"dislikes": dislikes, "likes": likes, "follows": follows, "comments": comments})
    #     sentiment.to_csv('/Users/niallmcnulty/Desktop/GeneralAssembly/Lessons/DSI11-lessons/projects/cap-csv/sentiment.csv')

sentiment_final = pd.DataFrame({"dislikes": dislikes, "likes": likes, "follows": follows, "comments": comments})
# df_sent = pd.merge(df, sentiment, left_index=True, right_index=True)
The comments section is dynamically loaded. You can try to capture it using the driver instead:
try:
    comment_elements = driver.find_elements_by_tag_name('p')
    for comment in comment_elements:
        comments.append(comment.text)
except:
    pass

print(comments)
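If the fixed sleep(0.2) turns out to be too short for the comments to render, one option is an explicit wait. A sketch using Selenium's WebDriverWait (the 10-second timeout is an arbitrary choice):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    # Block until at least one <p> element is present, up to 10 seconds.
    comment_elements = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.TAG_NAME, 'p'))
    )
    for comment in comment_elements:
        comments.append(comment.text)
except Exception:
    pass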
I haven't changed my bot in weeks, but for some reason I have been getting an error message like this every day for the past 5 or so days: https://imgur.com/VCLx2kv
I don't think the error is caused by my code, apart from the whole loop thing, which I don't know how to fix and which hasn't caused me any problems before. In case you're curious about that part, the code that causes the issue is below.
I have already tried regenerating my token.
@client.event
async def dead_check():
    i = 1
    d = datetime.now()
    date = str(d.strftime("%Y-%m-%d"))
    server = client.get_server(id='105388450575859712')
    while i == 1:
        async for message in client.logs_from(discord.Object(id='561667365927124992'), limit=9999999):
            if date in message.content:
                usid = message.content.split('=')
                usid1 = usid[1].split(' ')
                count = message.content.split('#')
                cd = message.content.split('?')
                ev = cd[1]
                if ev == '00':
                    number = 0
                elif ev == '01':
                    number = 1
                elif ev == '10':
                    number = 2
                elif ev == '11':
                    number = 3
                name = count[0]
                await client.send_message(discord.Object(id='339182193911922689'), '#here\n' + name + ' has reached the deadline for the **FRICKLING** program.\nThe user has attended ' + str(number) + ' events.')
        async for message in client.logs_from(discord.Object(id='567328773922619392'), limit=9999):
            if date in message.content and message.reactions:
                usid = message.content.split(' ')
                user = await client.get_user_info(usid[0])
                await client.send_message(discord.Object(id='567771853796540465'), user.mention + ' needs to be paid, if you have already paid him - react with :HYPERS:')
                await client.delete_message(message)
        await asyncio.sleep(60*60*24)

@client.event
async def on_ready():
    await client.change_presence(game=Game(name='with nuclear waste'))
    print('Ready, bitch')
    asyncio.get_event_loop().run_until_complete(dead_check())
Have you tried reducing the limit of those logs_from calls? 9999999 is a pretty big number, and it may have slowed things down enough that the heartbeat isn't being sent at the proper times. You should also sanitize that image of the error message, it contains your bot token.
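On the loop part: calling run_until_complete from inside on_ready is what makes that handler block. A common pattern from the discord.py background-task example is to schedule the coroutine on the client's loop instead, sketched here for the old async (pre-rewrite) API that the rest of the code appears to use:

async def dead_check():
    await client.wait_until_ready()  # don't touch the API before login completes
    while not client.is_closed:
        # ... the body of the existing loop goes here ...
        await asyncio.sleep(60 * 60 * 24)

@client.event
async def on_ready():
    await client.change_presence(game=Game(name='with nuclear waste'))
    print('Ready, bitch')

client.loop.create_task(dead_check())  # schedule before client.run(token)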
Credit to Patrick Haugh; I wanted to close this thread and he didn't post it as an answer.