Controlling the display of an ipyvuetify page with a dropdown works in the notebook but not in Voila - jupyter-notebook

I have encountered another issue where something works in the notebook but not in Voila. I have tried for a couple of hours but feel like I am still missing something, and therefore I am seeking expert opinions here.
I have a function create_pages_and_run() that takes a dictionary, d, as input and generates a dashboard (the data type is ipyvuetify.generated.App.App). The dictionary can be retrieved from a JSON file, scenario_dict, using a country name as the key, and I designed a dropdown to collect the country name.
The purpose is to ask the user to select a country name, after which the page is redrawn/refreshed. I have the following code that works in the notebook but not in Voila. (Works means that when a new country name is selected, the dashboard is displayed with widgets using data from that country.)
scenario_dropdown = widgets.Dropdown(
    options=all_scenarios,
    value=initial_scenario,
    description="Scenario",
    layout=widgets.Layout(margin="0 20px 0 0", height="39px", width="15%"),
)

d = scenario_dict[initial_scenario]
app = create_pages_and_run(d)

#the below code works for notebook
def on_change(change):
    global d, app
    if change["name"] == "value" and (change["new"] != change["old"]):
        d = scenario_dict[change["new"]]
        app = create_pages_and_run(d)
        clear_output()
        display(app)

scenario_dropdown.observe(on_change)
My failed code using ipywidgets.Output is below. (Failed in the sense that, after selecting a country name in the dropdown, no change is observed.)
scenario_dropdown = widgets.Dropdown(
    options=all_scenarios,
    value=initial_scenario,
    description="Scenario",
    layout=widgets.Layout(margin="0 20px 0 0", height="39px", width="15%"),
)

d = scenario_dict[initial_scenario]
app = create_pages_and_run(d)

out = widgets.Output()
with out:
    display(app)

# the code below fails in Voila
def on_change(change):
    global d, app, out
    if change["name"] == "value" and (change["new"] != change["old"]):
        d = scenario_dict[change["new"]]
        app = create_pages_and_run(d)
        out.clear_output()
        with out:
            display(app)

display(out)
scenario_dropdown.observe(on_change)
I appreciate your help, thanks.

I'm not sure why your code didn't work. Maybe it was the use of globals, which can be avoided. Could you provide a working example to test? Here is a working example based on your code that works in Voila.
import ipywidgets as widgets

all_scenarios = ['aa', 'bb', 'cc']
initial_scenario = all_scenarios[0]

scenario_dict = {}
scenario_dict['aa'] = 'do_this'
scenario_dict['bb'] = 'do_that'
scenario_dict['cc'] = 'do_what'

def create_pages_and_run(action):
    print(action)

scenario_dropdown = widgets.Dropdown(
    options=all_scenarios,
    value=initial_scenario,
    description="Scenario",
    layout=widgets.Layout(margin="0 20px 0 0", height="39px", width="15%"),
)

d = scenario_dict[initial_scenario]

out = widgets.Output()
with out:
    create_pages_and_run(d)

app = widgets.VBox([scenario_dropdown, out])

def on_change(change):
    if change["name"] == "value" and (change["new"] != change["old"]):
        d = scenario_dict[change["new"]]
        out.clear_output()
        with out:
            create_pages_and_run(d)

scenario_dropdown.observe(on_change)
app
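The key idea is to display a single container once and only redraw inside the Output widget. Mapped back onto the ipyvuetify app from the question, a rough sketch (assuming create_pages_and_run, scenario_dict, initial_scenario and scenario_dropdown exist as defined there) could look like this:

from IPython.display import display
import ipywidgets as widgets

out = widgets.Output()

def on_change(change):
    if change["name"] == "value" and change["new"] != change["old"]:
        out.clear_output()
        with out:
            # redraw the ipyvuetify App inside the already-displayed Output widget
            display(create_pages_and_run(scenario_dict[change["new"]]))

scenario_dropdown.observe(on_change)

with out:
    display(create_pages_and_run(scenario_dict[initial_scenario]))

widgets.VBox([scenario_dropdown, out])  # last expression of the cell, so Voila renders it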

Related

Tkinter create RadioButtons from dictionary issue

Here is my problem: I've made a tkinter program which calculates the cost of a creation my wife makes. She sews accessories. So far my program is working pretty well. I search in a JSON file to see if a fabric already exists in some kind of catalogue.
When my catalogue has entries with the same fabric name, I want to open a popup window with radiobuttons. By checking the correct radiobutton I choose the correct fabric. So I've extracted the dictionary of matching fabric names; when I type the fabric name in the entry and then press Enter, it sends the dictionary to the popup window, which creates the radiobuttons in a for loop.
The problem comes here: my radiobuttons are correctly created, but it is impossible to get their state or which button is checked...
First, here's the dictionary:
{"Tissu 1": {"Type": "Tissu", "Matière": "coton", "Prix du coupon": "8",
"Laise du coupon (cm)": "150",
"Prix au mètre (€)": 8.0,
"Fournisseur": "lol"},
"Tissu 2": {"Type": "Tissu", "Matière": "molleton", "Prix du coupon": "10",
"Laise du coupon (cm)": "150",
"Prix au mètre (€)": 10.0,
"Fournisseur": "lol"},
"Tissu 3": {"Type": "Tissu", "Matière": "coton", "Prix du coupon": "12",
"Laise du coupon (cm)": "150",
"Prix au mètre (€)": 12.0,
"Fournisseur": "mol"}}
Here's my code:
First there is the main class:
class ProgramCalcul(tk.Tk):
    def __init__(self, *args, **kwargs):
        self.frames = {}
        for F in (StartPage, CalculTissuPage, CalculMercPage):
            frame = F(container, self)
            self.frames[F] = frame
            frame.grid(row=0, column=0, sticky="nsew")
        self.show_frameSP(StartPage)

    def show_frameSP(self, cont):
        frame = self.frames[cont]
        frame.tkraise()
Then I have a class for each calculation page.
Here is the fabric calculation page (I've removed some entries and other widgets for legibility, and because they work fine):
class CalculTissuPage(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)

        tissuType = tk.StringVar()
        TissuNameEntry = tk.Entry(self, textvariable=tissuType, width=50)
        TissuNameEntry.grid(row=2, padx=20, pady=5, sticky="ew")

        def keyPress(arg):
            checkCatalogueTissu(TissuNameEntry.get())

        TissuNameEntry.bind('<Return>', keyPress)

        def checkCatalogueTissu(txt):
            tmpDict = {}
            with open("catalogue tissu.json") as json_file:
                data = json.load(json_file)
                dictLen = len(data)
                for i in range(1, dictLen + 1):
                    if "Tissu {}".format(i) not in data:
                        pass
                    else:
                        if data["Tissu {}".format(i)]["Matière"] == txt:
                            tmpDict["Tissu {}".format(i)] = data["Tissu {}".format(i)]
            popupRadio(tmpDict)

        def setText(txtPrix, txtFournisseur, txtLaise, txtLong):
            couponPrixEntry.delete(0, tk.END)
            couponPrixEntry.insert(0, txtPrix)
            fournisseurEntry.delete(0, tk.END)
            fournisseurEntry.insert(0, txtFournisseur)
            laiseDimEntry.delete(0, tk.END)
            laiseDimEntry.insert(0, txtLaise)
            longCouponEntry.delete(0, tk.END)
            longCouponEntry.insert(0, txtLong)
And finally, the function which creates the popup in which the radiobuttons appear:
def popupRadio(tmpDict):
    popup = tk.Toplevel()
    popup.wm_title("!")
    popupVar = tk.IntVar()

    def checkState(popupVar):
        print(popupVar.get())

    for (key, val) in tmpDict.items():
        tk.Radiobutton(popup, text=key, variable=popupVar, value=val,
                       command=checkState(popupVar)).grid()
        print(key, popupVar.get())

    chk = ttk.Button(popup, text="Select this", command=checkState(popupVar))
    chk.grid()
    popup.mainloop()
When I run it, the checkState() method seems to run automatically, and then the chk button doesn't do anything... I can check the radiobuttons, but nothing happens (even if I pass command=checkState in the radiobutton settings). I want to get which button is checked so I can use its value in my setText() method later.
For the moment, here is the output that the print() calls send to my console:
0
Tissu 1 0
0
Tissu 3 0
0
So here I am; for 2 days I've tried everything I found on the web, but nothing...
A little help would be very much appreciated.
PS: sorry if my English is not perfect :)
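A likely cause, sketched below under the dictionary structure shown above: command=checkState(popupVar) calls checkState once at widget-creation time and passes its return value (None) as the command, and value= needs a scalar (for example the fabric key) rather than a whole dict. A minimal, hypothetical rework of popupRadio() along those lines:

import tkinter as tk
from tkinter import ttk

def popupRadio(tmpDict):
    popup = tk.Toplevel()
    popup.wm_title("!")
    popupVar = tk.StringVar(value="")  # holds the key of the selected fabric

    def checkState():
        key = popupVar.get()
        print(key, tmpDict.get(key))   # look up the full record by its key

    for key in tmpDict:
        # value is a scalar (the key) and command is a callable, not a call result
        tk.Radiobutton(popup, text=key, variable=popupVar, value=key,
                       command=checkState).grid()

    ttk.Button(popup, text="Select this", command=checkState).grid()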

How to import data from an HTML table on a website to Excel?

I would like to do some statistical analysis with Python on the live casino game called Crazy Time from Evolution Gaming. There is a website that has the data to do this: https://tracksino.com/crazytime. I want the data from the lowest table, 'Spin History', to be imported into Excel. However, I do not know how this can be done. Could anyone give me an idea of where to start?
Thanks in advance!
Try the below code:
import json
import requests
from urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import csv
import datetime

def scrap_history():
    csv_headers = []
    file_path = ''  # mention the path on your system where you want to save the file
    file_name = 'spin_history.csv'  # filename
    page_number = 1
    while True:
        # Dynamic URL fetching data in chunks of 100
        url = 'https://api.tracksino.com/crazytime_history?filter=&sort_by=&sort_desc=false&page_num=' + str(page_number) + '&per_page=100&period=24hours'
        print('-' * 100)
        print('URL created : ', url)
        response = requests.get(url, verify=False)
        result = json.loads(response.text)  # load the response body as JSON
        history_data = result['data']
        print(history_data)
        if history_data != []:
            with open(file_path + file_name, 'a+') as history:
                # Headers for the file
                csv_headers = ['Occured At', 'Slot Result', 'Spin Result', 'Total Winners', 'Total Payout']
                csvwriter = csv.DictWriter(history, delimiter=',', lineterminator='\n', fieldnames=csv_headers)
                if page_number == 1:
                    print('Writing CSV header now...')
                    csvwriter.writeheader()
                # write the extracted data into the csv file one row at a time
                for item in history_data:
                    value = datetime.datetime.fromtimestamp(item['when'])
                    occured_at = f'{value:%d-%B-%Y # %H:%M:%S}'
                    csvwriter.writerow({'Occured At': occured_at,
                                        'Slot Result': item['slot_result'],
                                        'Spin Result': item['result'],
                                        'Total Winners': item['total_winners'],
                                        'Total Payout': item['total_payout'],
                                        })
                print('-' * 100)
            page_number += 1
            print(page_number)
            print('-' * 100)
        else:
            break
Explanation:
I have implemented the above script using the Python requests approach. The API URL https://api.tracksino.com/crazytime_history?filter=&sort_by=&sort_desc=false&page_num=1&per_page=50&period=24hours was extracted from the website itself (refer to the screenshot). As a first step, the script builds a dynamic URL in which the page number changes on every iteration; for example, first it will be page_num = 1, then page_num = 2, and so on until all the data has been extracted.
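If the goal is an actual Excel workbook rather than a CSV, a small follow-up sketch (assuming pandas and openpyxl are installed, and that the file name matches the script above):

import pandas as pd

# Load the CSV written by scrap_history() and save it as an .xlsx workbook.
df = pd.read_csv('spin_history.csv')
df.to_excel('spin_history.xlsx', index=False)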

I need help web scraping comments section

I am currently doing a project on football players. I am trying to scrape some public comments on football players for sentiment analysis. However, I can't seem to scrape the comments. Any help would be MUCH appreciated. It is the comments part I can't seem to do. Weirdly enough, I had it working, but then it stopped and I can't seem to get it scraping comments again. The website I am scraping from is: https://sofifa.com/player/192985/kevin-de-bruyne/200025/
likes = []
dislikes = []
follows = []
comments = []

driver_path = '/Users/niallmcnulty/Desktop/GeneralAssembly/Lessons/DSI11-lessons/week05/day2_web_scraping_and_apis/web_scraping/selenium-examples/chromedriver'
driver = webdriver.Chrome(executable_path=driver_path)

# i = 0
for url in tqdm_notebook(urls):
    driver.get(url)
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    sleep(0.2)
    soup1 = BeautifulSoup(driver.page_source, 'lxml')
    try:
        dislike = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal bp3-intent-danger dislike-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        dislikes.append(dislike)
    except:
        pass
    try:
        like = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal bp3-intent-success like-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        likes.append(like)
    except:
        pass
    try:
        follow = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal follow-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        follows.append(follow)
    except:
        pass
    try:
        comment = soup1.find_all('p').text[0:10]
        comments.append(comment)
    except:
        pass
    # i += 1
    # if i % 5 == 0:
    #     sentiment = pd.DataFrame({"dislikes":dislikes,"likes":likes,"follows":follows,"comments":comments})
    #     sentiment.to_csv('/Users/niallmcnulty/Desktop/GeneralAssembly/Lessons/DSI11-lessons/projects/cap-csv/sentiment.csv')

sentiment_final = pd.DataFrame({"dislikes": dislikes, "likes": likes, "follows": follows, "comments": comments})
# df_sent = pd.merge(df, sentiment, left_index=True, right_index=True)
The comments section is dynamically loaded. You can try to capture it using the driver:
try:
    comment_elements = driver.find_elements_by_tag_name('p')
    for comment in comment_elements:
        comments.append(comment.text)
except:
    pass

print(comments)
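Since the comments load dynamically, an explicit wait is usually more robust than a fixed sleep. A sketch using Selenium's WebDriverWait (the <p> tag name is carried over from the snippet above and may need a more specific selector for that site):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

try:
    # Wait up to 10 seconds for at least one <p> element to be present.
    comment_elements = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.TAG_NAME, 'p'))
    )
    comments.extend(element.text for element in comment_elements)
except Exception:
    pass

print(comments)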

Python ArcPy - Print Layer with highest field value

I have some python code that goes through layers in my ArcGIS project and prints out the layer names and their corresponding highest value within the field "SUM_USER_VisitCount".
Output Picture
What I want the code to do is only print out the layer name and SUM_USER_VisitCount field value for the one layer with the absolute highest value.
Desired Output
I have been unable to figure out how to achieve this and can't find anything online either. Can someone help me achieve my desired output?
Sorry if the code layout is a little weird. It got messed up when I pasted it into the "code sample"
Here is my code:
import arcpy
import datetime
from datetime import timedelta
import time

# Document Start Time in order to calculate Run Time
time1 = time.clock()

# assign project and map frame
p = arcpy.mp.ArcGISProject(r'E:\arcGIS_Shared\Python\CumulativeHeatMaps.aprx')
m = p.listMaps('Map')[0]

Markets = [3000]

### Centers to loop through
CA_Centers = ['Castro', 'ColeValley', 'Excelsior', 'GlenPark',
              'LowerPacificHeights', 'Marina', 'NorthBeach', 'RedwoodCity', 'SanBruno',
              'DalyCity']

for Market in Markets:
    print(Market)
    for CA_Center in CA_Centers:
        Layers = m.listLayers("CumulativeSumWithin{0}_{1}_Jun2018".format(Market, CA_Center))
        fields = ['SUM_USER_VisitCount']
        for Layer in Layers:
            print(Layer)
            sqlClause = (None, 'ORDER BY ' + 'SUM_USER_VisitCount')  # + 'DESC'
            with arcpy.da.SearchCursor(in_table=Layer, field_names=fields,
                                       sql_clause=sqlClause) as searchCursor:
                print(max(searchCursor))
You can create a dictionary that stores the result from each query and then print out the highest one at the end.
results_dict = {}

for Market in Markets:
    print(Market)
    for CA_Center in CA_Centers:
        Layers = m.listLayers("CumulativeSumWithin{0}_{1}_Jun2018".format(Market, CA_Center))
        fields = ['SUM_USER_VisitCount']
        for Layer in Layers:
            print(Layer)
            sqlClause = (None, 'ORDER BY ' + 'SUM_USER_VisitCount')  # + 'DESC'
            with arcpy.da.SearchCursor(in_table=Layer, field_names=fields,
                                       sql_clause=sqlClause) as searchCursor:
                # take max() only once: the cursor is exhausted after a single pass
                layer_max = max(searchCursor)
                print(layer_max)
                results_dict[Layer] = layer_max

# get the key of the dictionary item with the highest value
highest_count_layer = max(results_dict, key=results_dict.get)
print(highest_count_layer)
print(results_dict[highest_count_layer])
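The last three lines use the standard max-with-key idiom; a tiny standalone illustration with made-up layer names (SearchCursor rows are tuples, which compare element-wise):

# Toy stand-in for results_dict: layer name -> highest row returned by the cursor.
results_dict = {'LayerA': (42,), 'LayerB': (97,), 'LayerC': (15,)}

highest_count_layer = max(results_dict, key=results_dict.get)
print(highest_count_layer)                # LayerB
print(results_dict[highest_count_layer])  # (97,)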

How to display WebDynpro ABAP in ABAP report?

I've just started coding ABAP a few days ago, and I have a task to call a report from transaction SE38 and have the report's result shown on the screen of a WebDynpro application (SE80).
The report takes user input (e.g. Material Number, Material Type, Plant, Sales Org.) as conditions for the query, so the WebDynpro application must allow the user to key in these parameters.
Some related articles talk about using SUBMIT rep EXPORTING LIST TO MEMORY and CALL FUNCTION 'LIST_FROM_MEMORY', but so far I really have no idea how to implement this.
Any answers will be appreciated. Thanks!
You can export it to PDF. Then, when a user clicks on a link, you run the conversion and display the file in the browser window.
To do so, you start by creating a job using the code below:
constants c_name type tbtcjob-jobname value 'YOUR_JOB_NAME'.

data v_number type tbtcjob-jobcount.
data v_print_parameters type pri_params.

call function 'JOB_OPEN'
  exporting
    jobname          = c_name
  importing
    jobcount         = v_number
  exceptions
    cant_create_job  = 1
    invalid_job_data = 2
    jobname_missing  = 3
    others           = 4.

if sy-subrc = 0.
  commit work and wait.
else.
  EXIT. "// todo: err handling here
endif.
Then, you need to get the printer parameters in order to submit the report:
call function 'GET_PRINT_PARAMETERS'
  exporting
    destination            = 'LP01'
    immediately            = space
    new_list_id            = 'X'
    no_dialog              = 'X'
    user                   = sy-uname
  importing
    out_parameters         = v_print_parameters
  exceptions
    archive_info_not_found = 1
    invalid_print_params   = 2
    invalid_archive_params = 3
    others                 = 4.

v_print_parameters-linct = 55.
v_print_parameters-linsz = 1.
v_print_parameters-paart = 'LETTER'.
Now you submit your report using the filters that apply. Do not forget to add the job parameters to it, as the code below shows:
submit your_report_name
  to sap-spool
  spool parameters v_print_parameters
  without spool dynpro
  with ...(insert all your filters here)
  via job c_name number v_number
  and return.

if sy-subrc = 0.
  commit work and wait.
else.
  EXIT. "// todo: err handling here
endif.
After that, you close the job:
call function 'JOB_CLOSE'
  exporting
    jobcount             = v_number
    jobname              = c_name
    strtimmed            = 'X'
  exceptions
    cant_start_immediate = 1
    invalid_startdate    = 2
    jobname_missing      = 3
    job_close_failed     = 4
    job_nosteps          = 5
    job_notex            = 6
    lock_failed          = 7
    others               = 8.

if sy-subrc = 0.
  commit work and wait.
else.
  EXIT. "// todo: err handling here
endif.
Now the job will proceed, and you'll need to wait for it to complete. Do it with a loop. Once the job is completed, you can get its spool output and convert it to PDF.
data v_rqident type tsp01-rqident.
data v_job_head type tbtcjob.
data t_job_steplist type tbtcstep occurs 0 with header line.
data t_pdf like tline occurs 0 with header line.

do 200 times.
  wait up to 1 seconds.

  call function 'BP_JOB_READ'
    exporting
      job_read_jobcount     = v_number
      job_read_jobname      = c_name
      job_read_opcode       = '20'
    importing
      job_read_jobhead      = v_job_head
    tables
      job_read_steplist     = t_job_steplist
    exceptions
      invalid_opcode        = 1
      job_doesnt_exist      = 2
      job_doesnt_have_steps = 3
      others                = 4.

  read table t_job_steplist index 1.

  if not t_job_steplist-listident is initial.
    v_rqident = t_job_steplist-listident.
    exit.
  else.
    clear v_job_head.
    clear t_job_steplist.
    clear t_job_steplist[].
  endif.
enddo.

check not v_rqident is initial.

call function 'CONVERT_ABAPSPOOLJOB_2_PDF'
  exporting
    src_spoolid              = v_rqident
    dst_device               = 'LP01'
  tables
    pdf                      = t_pdf
  exceptions
    err_no_abap_spooljob     = 1
    err_no_spooljob          = 2
    err_no_permission        = 3
    err_conv_not_possible    = 4
    err_bad_destdevice       = 5
    user_cancelled           = 6
    err_spoolerror           = 7
    err_temseerror           = 8
    err_btcjob_open_failed   = 9
    err_btcjob_submit_failed = 10
    err_btcjob_close_failed  = 11
    others                   = 12.
If you're going to send it via HTTP, you may need to convert it to BASE64 as well.
field-symbols <xchar> type x.

data v_offset(10) type n.
data v_char type c.
data v_xchar(2) type x.
data v_xstringdata_aux type xstring.
data v_xstringdata type xstring.
data v_base64data type string.
data v_base64data_aux type string.

loop at t_pdf.
  do 134 times.
    v_offset = sy-index - 1.
    v_char = t_pdf+v_offset(1).
    assign v_char to <xchar> casting type x.
    concatenate v_xstringdata_aux <xchar> into v_xstringdata_aux in byte mode.
  enddo.
  concatenate v_xstringdata v_xstringdata_aux into v_xstringdata in byte mode.
  clear v_xstringdata_aux.
endloop.

call function 'SCMS_BASE64_ENCODE_STR'
  exporting
    input  = v_xstringdata
  importing
    output = v_base64data.

v_base64data_aux = v_base64data.

while strlen( v_base64data_aux ) gt 255.
  clear t_base64data.
  t_base64data-data = v_base64data_aux.
  v_base64data_aux = v_base64data_aux+255.
  append t_base64data.
endwhile.

if not v_base64data_aux is initial.
  t_base64data-data = v_base64data_aux.
  append t_base64data.
endif.
And you're done!
Hope it helps.
As previous speakers said, you should do extensive training before implementing such things in a productive environment.
However, calling WebDynpro ABAP within a report can be done with the help of the WDY_EXECUTE_IN_PLACE function module. You should pass it the WebDynpro application and the necessary parameters.
CALL FUNCTION 'WDY_EXECUTE_IN_PLACE'
  EXPORTING
*   PROTOCOL                =
    INTERNALMODE            = ' '
*   SMARTCLIENT             =
    APPLICATION             = 'Z_MY_WEBDYNPRO'
*   CONTAINER_NAME          =
    PARAMETERS              = lt_parameters
    SUPPRESS_OUTPUT         =
    TRY_TO_USE_SAPGUI_THEME = ' '
  IMPORTING
    OUT_URL                 = ex_url.

IF sy-subrc <> 0.
* Implement suitable error handling here
ENDIF.
