I'm trying to create a BLF CAN data file. After creating an arbitrary measurements table, I tried to encode messages and write them in BLF format with the following code. The BLF file is created, but it doesn't contain any data at all. Please let me know what the problem is.
What I tried:
import cantools
import can
import numpy as np  # needed for np.random.randint below

db = cantools.database.load_file('T_Fuel_HB.dbc')
ex_msg = db.get_message_by_name("DEVICE_56604591_0")
time = 0
write_created = can.BLFWriter("sample_created.blf")
for i in range(10):
    r_int = np.random.randint(0, 100)
    data_created = ex_msg.encode({'C_1_T_Air': r_int, 'C_2_T_EG_room': r_int, 'C_3_T_Pump_room': r_int, 'C_4_T_Fuel_tank': r_int})
    msg_created = can.Message(timestamp=time, arbitration_id=ex_msg.frame_id, data=data_created, channel=0)
    print(msg_created, r_int)
    time += 2
    write_created.on_message_received(msg_created)
What I expected:
filename = "VDK14.blf"
log = can.BLFReader(filename)
log = list(log)
for msg in log:
    write.on_message_received(msg)
-> When I use a BLF file with existing log data, there is no problem reading the file through CANalyzer.
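For reference, a minimal write-then-read round trip with python-can; note that BLFWriter buffers its output and only flushes when stopped, so close it with stop() or use it as a context manager (a sketch with dummy values):

import can

# the writer buffers data and flushes on stop(); the context manager calls stop() for us
with can.BLFWriter("sample_roundtrip.blf") as writer:
    for i in range(10):
        msg = can.Message(timestamp=float(i), arbitration_id=0x123,
                          data=[i, 0, 0, 0], channel=0)
        writer.on_message_received(msg)

# read the file back to confirm it contains data
for msg in can.BLFReader("sample_roundtrip.blf"):
    print(msg)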
I have Telegraf pulling data from the OpenWeather API and returning JSON. On occasion a tag such as rain is missing: if there is no rain predicted, the key simply isn't there.
My config:
[[inputs.http]]
  urls = ["https://api.openweathermap.org/data/3.0/onecall?units=metric&lat=51.123456&lon=-0.123456&appid=xxxx&exclude=current,hourly,alerts,minutely"]
  interval = "30s"
  tagexclude = ["url", "host"]
  data_format = "json_v2"
  [[inputs.http.json_v2]]
    measurement_name = "openweather"
    timestamp_path = "daily.0.dt"
    timestamp_format = "unix"
    [[inputs.http.json_v2.field]]
      path = "daily.0.clouds"
    [[inputs.http.json_v2.field]]
      path = "daily.0.temp.day"
    [[inputs.http.json_v2.field]]
      path = "daily.0.rain"
I cannot work out how to tell the input that the field is optional. Either null or 0 would be fine.
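One thing that may help: newer Telegraf releases support an optional flag on json_v2 fields, which suppresses the error when the path is missing. Whether your version has it is an assumption; check the json_v2 parser docs for your release:

    [[inputs.http.json_v2.field]]
      path = "daily.0.rain"
      optional = true   # assumption: suppresses the missing-path error on newer Telegraf releases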
I want to enter two values on the website https://hausratversicherung.friday.de/ and retrieve the value shown after submitting them. I wrote the following code:
import re
from robobrowser import RoboBrowser

br = RoboBrowser(parser='html.parser')
br.open("https://hausratversicherung.friday.de/")
form = br.get_form()
form['area'] = 100
form['postalCode'] = 44326
br.submit_form(form)
src = str(br.parsed)
start = '<div class="Typography-sc-3c3fuf-0 jEIicc" data-testid="totalPrice">'
end = ' €</div>'
result = re.search('%s(.*)%s' % (start, end), src).group(1)
print(result)
But the browser br is not opening the mentioned page or accepting these values.
The postal code 44326 isn't accepted by the server. For other postal codes you can query their API directly:
import json
import requests
area = 100
postalcode = 44309
url = 'https://fdy2-policycenter-production.k8s.blue.friday-prod.de/rest/friday/hc/price?area={area}&postalCode={postalcode}'
data = requests.get(url.format(area=area, postalcode=postalcode)).json()
# uncomment this to print all data:
# print(json.dumps(data, indent=4))
# print some info to screen:
print(data['basicCoverages']['coverages'][0]['insuredSum']['amount'])
print(data['basicCoverages']['coverages'][0]['price']['amount'])
Prints:
65000.0
7.81
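Since some postal codes (like 44326 from the question) are rejected, it may be worth guarding the lookup before indexing into the response; a small sketch (the shape of an error response is an assumption, so adjust after inspecting real output):

import requests

area = 100
postalcode = 44326  # a postal code the server rejects, for illustration
url = 'https://fdy2-policycenter-production.k8s.blue.friday-prod.de/rest/friday/hc/price?area={area}&postalCode={postalcode}'

resp = requests.get(url.format(area=area, postalcode=postalcode))
resp.raise_for_status()            # fail loudly on HTTP-level errors
data = resp.json()
if 'basicCoverages' not in data:   # assumption: rejected inputs come back without this key
    print('no quote returned:', data)
else:
    print(data['basicCoverages']['coverages'][0]['price']['amount'])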
I would like to do some statistical analysis with Python on the live casino game Crazy Time from Evolution Gaming. There is a website that has the data for this: https://tracksino.com/crazytime. I want the data from the lowest table, 'Spin History', to be imported into Excel. However, I do not know how this can be done. Could anyone give me an idea where to start?
Thanks in advance!
Try the code below:
import json
import requests
from urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import csv
import datetime

def scrap_history():
    csv_headers = []
    file_path = ''  # the path on your system where the file should be saved
    file_name = 'spin_history.csv'  # filename
    page_number = 1
    while True:
        # dynamic URL fetching data in chunks of 100
        url = 'https://api.tracksino.com/crazytime_history?filter=&sort_by=&sort_desc=false&page_num=' + str(page_number) + '&per_page=100&period=24hours'
        print('-' * 100)
        print('URL created : ', url)
        response = requests.get(url, verify=False)
        result = json.loads(response.text)  # parse the response as JSON
        history_data = result['data']
        print(history_data)
        if history_data != []:
            with open(file_path + file_name, 'a+') as history:
                # headers for the file
                csv_headers = ['Occurred At', 'Slot Result', 'Spin Result', 'Total Winners', 'Total Payout']
                csvwriter = csv.DictWriter(history, delimiter=',', lineterminator='\n', fieldnames=csv_headers)
                if page_number == 1:
                    print('Writing CSV header now...')
                    csvwriter.writeheader()
                # write the extracted data to the csv file row by row
                for item in history_data:
                    value = datetime.datetime.fromtimestamp(item['when'])
                    occurred_at = f'{value:%d-%B-%Y # %H:%M:%S}'
                    csvwriter.writerow({'Occurred At': occurred_at,
                                        'Slot Result': item['slot_result'],
                                        'Spin Result': item['result'],
                                        'Total Winners': item['total_winners'],
                                        'Total Payout': item['total_payout'],
                                        })
            print('-' * 100)
            page_number += 1
            print(page_number)
            print('-' * 100)
        else:
            break

scrap_history()  # run the scraper
Explanation:
I implemented the above script using Python's requests library. The API URL https://api.tracksino.com/crazytime_history?filter=&sort_by=&sort_desc=false&page_num=1&per_page=50&period=24hours was extracted from the website itself. As a first step, the script builds a dynamic URL in which the page number changes on every iteration: first page_num=1, then page_num=2, and so on, until all the data has been extracted.
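Since the goal was to get the data into Excel: once the CSV exists, it can be converted directly; a minimal sketch assuming pandas (with openpyxl) is installed:

import pandas as pd

# load the scraped CSV and write it out as a real Excel workbook
df = pd.read_csv('spin_history.csv')
df.to_excel('spin_history.xlsx', index=False)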
I have the following problem. This code works perfectly:
inventJournalTrans.clear();
inventJournalTrans.initFromInventJournalTable(inventJournalTable);
inventJournalTrans.ItemId = "100836M";

fromInventDim.InventLocationId = "SD";
fromInventDim.wMSLocationId    = "11_RECEPTION";
fromInventDim.InventSizeId     = "1000";
fromInventDim.inventBatchId    = "ID057828-CN";

toInventDim.InventLocationId = "SD";
toInventDim.wMSLocationId    = "11_A2";
toInventDim.InventSizeId     = "1000";
toInventDim.inventBatchId    = "T20/0001/1";

toInventDim   = InventDim::findOrCreate(toInventDim);
fromInventDim = InventDim::findOrCreate(fromInventDim);

inventJournalTrans.InventDimId = fromInventDim.inventDimId;
inventJournalTrans.initFromInventTable(InventTable::find("100836M"));
inventJournalTrans.Qty = -0.5;
inventJournalTrans.ToInventDimId = toInventDim.inventDimId;
inventJournalTrans.CostAmount = inventJournalTrans.calcCostAmount(-abs(any2real(strReplace('-0.5', ',', '.'))));
inventJournalTrans.TransDate = systemDateGet();
inventJournalTrans.insert();

inventJournalCheckPost = InventJournalCheckPost::newJournalCheckPost(JournalCheckpostType::Post, inventJournalTable);
inventJournalCheckPost.parmThrowCheckFailed(_throwsError);
inventJournalCheckPost.parmShowInfoResult(_showInfoResult);
inventJournalCheckPost.run();
It creates the transfer journal line correctly and posts the transfer journal successfully.
My requirement is to import journal lines from a CSV file. I wrote the following code:
inventJournalTrans.clear();
inventJournalTrans.initFromInventJournalTable(InventJournalTable::find(_journalId));
inventJournalTrans.ItemId = conpeek(_fileRecord, 4);

inventDim_From.InventLocationId = 'SD';
inventDim_From.wMSLocationId    = '11_RECEPTION';
inventDim_From.InventSizeId     = conpeek(_fileRecord, 11);
inventDim_From.inventBatchId    = strFmt("%1", conpeek(_fileRecord, 5));

inventDim_To.InventLocationId = 'SD';
inventDim_To.wMSLocationId    = strFmt("%1", conpeek(_fileRecord, 10));
inventDim_To.InventSizeId     = conpeek(_fileRecord, 11);
inventDim_To.inventBatchId    = strFmt("%1", conpeek(_fileRecord, 6));

inventDim_From = InventDim::findOrCreate(inventDim_From);
inventDim_To   = InventDim::findOrCreate(inventDim_To);

inventJournalTrans.InventDimId = inventDim_From.inventDimId;
inventJournalTrans.initFromInventTable(InventTable::find(conpeek(_fileRecord, 4)));
inventJournalTrans.Qty = -abs(any2real(strReplace(conpeek(_fileRecord, 8), ',', '.')));
inventJournalTrans.ToInventDimId = inventDim_To.inventDimId;
inventJournalTrans.CostAmount = inventJournalTrans.calcCostAmount(-abs(any2real(strReplace(conpeek(_fileRecord, 8), ',', '.'))));
inventJournalTrans.TransDate = str2Date(conpeek(_fileRecord, 9), 123);
inventJournalTrans.insert();
When I use the insert() method, I get the error "size does not exist for the itemId". When I look in the InventSize table for my itemId, the size exists. I thought it was an inventDimId problem in inventJournalTrans, but the values are strictly identical to those in the first code example. All my data are the same as in the first example, but they are not hard-coded and come from reading my CSV file.
I spent a lot of time debugging and found nothing wrong, but the error message remains.
I'm using Dynamics AX V4 SP1.
Thanks a lot for any help.
When you read from files, always trim trailing spaces. You can do that with the strRtrim function.
Like so:
InventDim_To.InventSizeId = strRtrim(conpeek(_fileRecord,11));
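Applied to the code in the question, that means trimming every text value read via conpeek; a sketch reusing the question's field indices:

inventJournalTrans.ItemId   = strRtrim(conpeek(_fileRecord, 4));
inventDim_From.InventSizeId = strRtrim(conpeek(_fileRecord, 11));
inventDim_To.wMSLocationId  = strRtrim(strFmt("%1", conpeek(_fileRecord, 10)));
inventDim_To.InventSizeId   = strRtrim(conpeek(_fileRecord, 11));
inventDim_To.inventBatchId  = strRtrim(strFmt("%1", conpeek(_fileRecord, 6)));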
I've just started coding ABAP a few days ago, and I have a task to call a report (created in SE38) and show the report's output on the screen of a Web Dynpro application (developed in SE80).
The report takes user input (e.g. material number, material type, plant, sales org.) as query conditions, so the Web Dynpro application must allow the user to enter these parameters.
Some related articles mention using SUBMIT rep EXPORTING LIST TO MEMORY and CALL FUNCTION 'LIST_FROM_MEMORY', but so far I really have no idea how to implement this.
Any answers will be appreciated. Thanks!
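For reference, a rough sketch of the SUBMIT ... EXPORTING LIST TO MEMORY pattern those articles describe (unverified; check the exact function module signatures in SE37; your_report, p_matnr and lv_matnr are placeholder names):

types ty_line(255) type c.

data lv_matnr type matnr.
data t_list   type table of abaplist. " the list saved to memory
data t_ascii  type table of ty_line.  " the list as plain-text lines

submit your_report
  with p_matnr eq lv_matnr            " hypothetical selection-screen parameter
  exporting list to memory
  and return.

" fetch the list back from memory...
call function 'LIST_FROM_MEMORY'
  tables
    listobject = t_list
  exceptions
    not_found  = 1
    others     = 2.

" ...and convert it to plain text that a Web Dynpro view can display
call function 'LIST_TO_ASCI'
  tables
    listasci   = t_ascii
    listobject = t_list
  exceptions
    others     = 1.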
You can export it to PDF. When the user clicks a link, you run the conversion and display the file in the browser window.
To do so, start by creating a job using the code below:
constants c_name type tbtcjob-jobname value 'YOUR_JOB_NAME'.

data v_number           type tbtcjob-jobcount.
data v_print_parameters type pri_params.

call function 'JOB_OPEN'
  exporting
    jobname          = c_name
  importing
    jobcount         = v_number
  exceptions
    cant_create_job  = 1
    invalid_job_data = 2
    jobname_missing  = 3
    others           = 4.
if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
Then, you need to get the printer parameters in order to submit the report:
call function 'GET_PRINT_PARAMETERS'
  exporting
    destination            = 'LP01'
    immediately            = space
    new_list_id            = 'X'
    no_dialog              = 'X'
    user                   = sy-uname
  importing
    out_parameters         = v_print_parameters
  exceptions
    archive_info_not_found = 1
    invalid_print_params   = 2
    invalid_archive_params = 3
    others                 = 4.

v_print_parameters-linct = 55.
v_print_parameters-linsz = 1.
v_print_parameters-paart = 'LETTER'.
Now you submit your report using the filters that apply. Do not forget to add the job parameters to it, as the code below shows:
submit your_report_name
  to sap-spool
  spool parameters v_print_parameters
  without spool dynpro
  with ...(insert all your filters here)
  via job c_name number v_number
  and return.
if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
After that, you close the job:
call function 'JOB_CLOSE'
  exporting
    jobcount             = v_number
    jobname              = c_name
    strtimmed            = 'X'
  exceptions
    cant_start_immediate = 1
    invalid_startdate    = 2
    jobname_missing      = 3
    job_close_failed     = 4
    job_nosteps          = 5
    job_notex            = 6
    lock_failed          = 7
    others               = 8.
if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
Now the job will run and you'll need to wait for it to complete; do that with a loop. Once the job is completed, you can get its spool output and convert it to PDF.
data v_rqident      type tsp01-rqident.
data v_job_head     type tbtcjob.
data t_job_steplist type tbtcstep occurs 0 with header line.
data t_pdf          like tline occurs 0 with header line.

do 200 times.
  wait up to 1 seconds.

  call function 'BP_JOB_READ'
    exporting
      job_read_jobcount     = v_number
      job_read_jobname      = c_name
      job_read_opcode       = '20'
    importing
      job_read_jobhead      = v_job_head
    tables
      job_read_steplist     = t_job_steplist
    exceptions
      invalid_opcode        = 1
      job_doesnt_exist      = 2
      job_doesnt_have_steps = 3
      others                = 4.

  read table t_job_steplist index 1.
  if not t_job_steplist-listident is initial.
    v_rqident = t_job_steplist-listident.
    exit.
  else.
    clear v_job_head.
    clear t_job_steplist.
    clear t_job_steplist[].
  endif.
enddo.

check not v_rqident is initial.

call function 'CONVERT_ABAPSPOOLJOB_2_PDF'
  exporting
    src_spoolid              = v_rqident
    dst_device               = 'LP01'
  tables
    pdf                      = t_pdf
  exceptions
    err_no_abap_spooljob     = 1
    err_no_spooljob          = 2
    err_no_permission        = 3
    err_conv_not_possible    = 4
    err_bad_destdevice       = 5
    user_cancelled           = 6
    err_spoolerror           = 7
    err_temseerror           = 8
    err_btcjob_open_failed   = 9
    err_btcjob_submit_failed = 10
    err_btcjob_close_failed  = 11
    others                   = 12.
If you're going to send it via HTTP, you may need to convert it to BASE64 as well.
field-symbols <xchar> type x.

data v_offset(10)      type n.
data v_char            type c.
data v_xchar(2)        type x.
data v_xstringdata_aux type xstring.
data v_xstringdata     type xstring.
data v_base64data      type string.
data v_base64data_aux  type string.
" note: t_base64data was not declared in the original snippet; this declaration
" matches how it is used below (a table of 255-character lines)
data: begin of t_base64data occurs 0,
        data(255) type c,
      end of t_base64data.

loop at t_pdf.
  do 134 times.
    v_offset = sy-index - 1.
    v_char = t_pdf+v_offset(1).
    assign v_char to <xchar> casting type x.
    concatenate v_xstringdata_aux <xchar> into v_xstringdata_aux in byte mode.
  enddo.
  concatenate v_xstringdata v_xstringdata_aux into v_xstringdata in byte mode.
  clear v_xstringdata_aux.
endloop.

call function 'SCMS_BASE64_ENCODE_STR'
  exporting
    input  = v_xstringdata
  importing
    output = v_base64data.

v_base64data_aux = v_base64data.
while strlen( v_base64data_aux ) gt 255.
  clear t_base64data.
  t_base64data-data = v_base64data_aux.
  v_base64data_aux = v_base64data_aux+255.
  append t_base64data.
endwhile.

if not v_base64data_aux is initial.
  t_base64data-data = v_base64data_aux.
  append t_base64data.
endif.
And you're done!
Hope it helps.
As previous posters said, you should get extensive training before implementing such things in a production environment.
However, calling a Web Dynpro ABAP application from a report can be done with the WDY_EXECUTE_IN_PLACE function module. You pass it the Web Dynpro application name and the necessary parameters.
CALL FUNCTION 'WDY_EXECUTE_IN_PLACE'
  EXPORTING
*   PROTOCOL                =
    INTERNALMODE            = ' '
*   SMARTCLIENT             =
    APPLICATION             = 'Z_MY_WEBDYNPRO'
*   CONTAINER_NAME          =
    PARAMETERS              = lt_parameters
*   SUPPRESS_OUTPUT         =
    TRY_TO_USE_SAPGUI_THEME = ' '
  IMPORTING
    OUT_URL                 = ex_url.
IF sy-subrc <> 0.
* Implement suitable error handling here
ENDIF.
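Note that lt_parameters and ex_url are not declared in the snippet above. As an assumption (verify the function module's signature in SE37), the PARAMETERS table is a name/value pair table such as TIHTTPNVP, which would make the setup look like:

DATA lt_parameters TYPE tihttpnvp.   " assumption: name/value pair table
DATA ls_parameter  TYPE ihttpnvp.
DATA ex_url        TYPE string.      " assumption: OUT_URL returns the URL as a string

ls_parameter-name  = 'MATNR'.        " hypothetical parameter name
ls_parameter-value = '100-100'.      " hypothetical value
APPEND ls_parameter TO lt_parameters.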