Issue importing transfer journal through X++ - Axapta

I have the following problem. This code works perfectly:
inventJournalTrans.clear();
inventJournalTrans.initFromInventJournalTable(inventJournalTable);
inventJournalTrans.ItemId = "100836M";

fromInventDim.InventLocationId = "SD";
fromInventDim.wMSLocationId = "11_RECEPTION";
fromInventDim.InventSizeId = "1000";
fromInventDim.inventBatchId = "ID057828-CN";

toInventDim.InventLocationId = "SD";
toInventDim.wMSLocationId = "11_A2";
toInventDim.InventSizeId = "1000";
toInventDim.inventBatchId = "T20/0001/1";

toInventDim = InventDim::findOrCreate(toInventDim);
fromInventDim = InventDim::findOrCreate(fromInventDim);

inventJournalTrans.InventDimId = fromInventDim.inventDimId;
inventJournalTrans.initFromInventTable(InventTable::find("100836M"));
inventJournalTrans.Qty = -0.5;
inventJournalTrans.ToInventDimId = toInventDim.inventDimId;
inventJournalTrans.CostAmount = inventJournalTrans.calcCostAmount(-abs(any2real(strReplace('-0.5', ',', '.'))));
inventJournalTrans.TransDate = systemDateGet();
inventJournalTrans.insert();

inventJournalCheckPost = InventJournalCheckPost::newJournalCheckPost(JournalCheckPostType::Post, inventJournalTable);
inventJournalCheckPost.parmThrowCheckFailed(_throwsError);
inventJournalCheckPost.parmShowInfoResult(_showInfoResult);
inventJournalCheckPost.run();
It creates the transfer journal line correctly and posts the transfer journal successfully.
My requirement is to import journal lines from a CSV file, so I wrote the following code:
inventJournalTrans.clear();
inventJournalTrans.initFromInventJournalTable(InventJournalTable::find(_journalID));
inventJournalTrans.ItemId = conpeek(_fileRecord, 4);

inventDim_From.InventLocationId = "SD";
inventDim_From.wMSLocationId = "11_RECEPTION";
inventDim_From.InventSizeId = conpeek(_fileRecord, 11);
inventDim_From.inventBatchId = strFmt("%1", conpeek(_fileRecord, 5));

inventDim_To.InventLocationId = "SD";
inventDim_To.wMSLocationId = strFmt("%1", conpeek(_fileRecord, 10));
inventDim_To.InventSizeId = conpeek(_fileRecord, 11);
inventDim_To.inventBatchId = strFmt("%1", conpeek(_fileRecord, 6));

inventDim_From = InventDim::findOrCreate(inventDim_From);
inventDim_To = InventDim::findOrCreate(inventDim_To);

inventJournalTrans.InventDimId = inventDim_From.inventDimId;
inventJournalTrans.initFromInventTable(InventTable::find(conpeek(_fileRecord, 4)));
inventJournalTrans.Qty = -abs(any2real(strReplace(conpeek(_fileRecord, 8), ',', '.')));
inventJournalTrans.ToInventDimId = inventDim_To.inventDimId;
inventJournalTrans.CostAmount = inventJournalTrans.calcCostAmount(-abs(any2real(strReplace(conpeek(_fileRecord, 8), ',', '.'))));
inventJournalTrans.TransDate = str2date(conpeek(_fileRecord, 9), 123);
inventJournalTrans.insert();
When I call the insert() method I get the following error: "size does not exist for the itemId". When I look in the InventSize table for my itemId, the size does exist. I thought it was an inventDimId problem in inventJournalTrans, but the dimensions are strictly identical to those in the first code example. All my data is the same as in the first example, except that it is not hard-coded and comes from reading my CSV file.
I have spent a lot of time debugging and found nothing wrong, but the error message remains.
I'm using Dynamics AX V4 SP1.
Thanks a lot for any help.

When you read from files, always trim trailing spaces. You can do that using the strRtrim function, like so:
inventDim_To.InventSizeId = strRtrim(conpeek(_fileRecord, 11));
The same applies to the other fields read from the file.

Related

How can I create a BLF file based on measurement data?

I'm trying to create a BLF CAN data file.
After creating an arbitrary measurements table, I try to encode messages and write them in BLF format with the following code.
The BLF file is created, however, it doesn't contain any data at all.
Please let me know what the problem is.
What I tried:
import cantools
import can
import numpy as np  # needed for np.random below

db = cantools.database.load_file('T_Fuel_HB.dbc')
ex_msg = db.get_message_by_name("DEVICE_56604591_0")
time = 0
write_created = can.BLFWriter("sample_created.blf")
for i in range(10):
    r_int = np.random.randint(0, 100)
    data_created = ex_msg.encode({'C_1_T_Air': r_int, 'C_2_T_EG_room': r_int, 'C_3_T_Pump_room': r_int, 'C_4_T_Fuel_tank': r_int})
    # note: "data" was undefined here; the encoded payload is data_created
    msg_created = can.Message(timestamp=time, arbitration_id=ex_msg.frame_id, data=data_created, channel=0)
    print(msg_created, r_int)
    time += 2
    write_created.on_message_received(msg_created)
What I expected:
filename = "VDK14.blf"
log = can.BLFReader(filename)
log = list(log)
for msg in log:
    write.on_message_received(msg)
-> When I use a BLF file with existing log data, there is no problem reading it through CANalyzer.
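One thing worth checking, assuming python-can's BLFWriter behaves like its other buffered writers: it compresses and buffers messages internally and only flushes them to disk when it is stopped, so a writer that is never stopped can leave behind an empty file. A minimal sketch:
import can

write_created = can.BLFWriter("sample_created.blf")
# ... append messages exactly as above ...
write_created.stop()  # flush the internal buffer and close the file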

I need help web scraping a comments section

I am currently doing a project on football players and am trying to scrape some public comments on them for sentiment analysis. However, I can't seem to scrape the comments part; any help would be MUCH appreciated. Weirdly enough, I had it working, but then it stopped and I can't get the comments scraping to work again. The website I am scraping from is: https://sofifa.com/player/192985/kevin-de-bruyne/200025/
# assumed imports for this snippet; urls is a list of player URLs defined elsewhere
from selenium import webdriver
from bs4 import BeautifulSoup
from time import sleep
from tqdm import tqdm_notebook
import pandas as pd

likes = []
dislikes = []
follows = []
comments = []

driver_path = '/Users/niallmcnulty/Desktop/GeneralAssembly/Lessons/DSI11-lessons/week05/day2_web_scraping_and_apis/web_scraping/selenium-examples/chromedriver'
driver = webdriver.Chrome(executable_path=driver_path)

# i = 0
for url in tqdm_notebook(urls):
    driver.get(url)
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    sleep(0.2)
    soup1 = BeautifulSoup(driver.page_source, 'lxml')
    try:
        dislike = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal bp3-intent-danger dislike-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        dislikes.append(dislike)
    except:
        pass
    try:
        like = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal bp3-intent-success like-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        likes.append(like)
    except:
        pass
    try:
        follow = soup1.find('button', attrs={'class': 'bp3-button bp3-minimal follow-btn need-sign-in'}).find('span', {'class': 'count'}).text.strip()
        follows.append(follow)
    except:
        pass
    try:
        comment = soup1.find_all('p').text[0:10]  # this line fails: find_all returns a list, which has no .text
        comments.append(comment)
    except:
        pass
    # i += 1
    # if i % 5 == 0:
    #     sentiment = pd.DataFrame({"dislikes":dislikes,"likes":likes,"follows":follows,"comments":comments})
    #     sentiment.to_csv('/Users/niallmcnulty/Desktop/GeneralAssembly/Lessons/DSI11-lessons/projects/cap-csv/sentiment.csv')

sentiment_final = pd.DataFrame({"dislikes": dislikes, "likes": likes, "follows": follows, "comments": comments})
# df_sent = pd.merge(df, sentiment, left_index=True, right_index=True)
# df_sent = pd.merge(df, sentiment, left_index=True, right_index=True)
The comments section is dynamically loaded. You can try to capture it using the driver:
try:
    comment_elements = driver.find_elements_by_tag_name('p')
    for comment in comment_elements:
        comments.append(comment.text)
except:
    pass
print(comments)
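If the fixed 0.2-second sleep is racing the dynamic load, an explicit wait may be more reliable; a minimal sketch, assuming the comments really do end up in <p> tags (that selector is a guess):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# block until at least one <p> element is present, up to 10 seconds
WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.TAG_NAME, 'p')))
for el in driver.find_elements_by_tag_name('p'):
    comments.append(el.text)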

Why is so much duplicated data being saved to my Excel sheet by my code?

This code is used to scrape data from websites, but the problem is that a large amount of duplicated data is being produced and saved to my Excel sheet.
# assumed imports; driver, siteurl, nextbutton and the all* lists are
# defined elsewhere in the original script
import re
import time
import urllib.parse
import requests
import pandas as pd
from lxml import html
from pandas import ExcelWriter
from selenium.common.exceptions import WebDriverException, NoSuchElementException

def extractor():
    time.sleep(10)
    souptree = html.fromstring(driver.page_source)
    # note: lxml XPath attributes use @, not #
    tburl = souptree.xpath("//table[contains(@id, 'theDataTable')]//tbody//tr//td[4]//a//@href")
    for tbu in tburl:
        allurl = []
        allurl.append(urllib.parse.urljoin(siteurl, tbu))
        for tb in allurl:
            get_url = requests.get(tb)
            get_soup = html.fromstring(get_url.content)
            pattern = re.compile(r"^\s+|\s*,\s*|\s+$")
            name = get_soup.xpath('//td[@headers="contactName"]//text()')
            phone = get_soup.xpath('//td[@headers="contactPhone"]//text()')
            mail = get_soup.xpath('//td[@headers="contactEmail"]//a//text()')
            artitle = get_soup.xpath('//td[@headers="contactEmail"]//a//@href')
            artit = ([x for x in pattern.split(str(artitle)) if x][-1])
            title = artit[:-2]
            for (nam, pho, mai) in zip(name, phone, mail):
                fname = nam[9:]
                allmails.append(mai)
                allnames.append(fname)
                allphone.append(pho)
                alltitles.append(title)
                fullfile = pd.DataFrame({'Names': allnames, 'Mails': allmails, 'Title': alltitles, 'Phone Numbers': allphone})
                writer = ExcelWriter('G:\\Sheet_Name.xlsx')
                fullfile.to_excel(writer, 'Sheet1', index=False)
                writer.save()
                print(fname, pho, mai, title, sep='\t')

while True:
    time.sleep(10)
    extractor()
    try:
        nextbutton()
    except WebDriverException:
        driver.refresh()
    except NoSuchElementException:
        time.sleep(10)
        driver.quit()
I don't want the output to be duplicated, but half or more of the data is duplicated each time I run the code.
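A possible culprit, assuming the accumulator lists (allnames and friends) live at module level: they keep growing across every call to extractor(), while the whole DataFrame is rebuilt and rewritten on every row, so previously scraped rows get written again. A minimal sketch that at least drops exact duplicate rows before saving:
import pandas as pd

fullfile = pd.DataFrame({'Names': allnames, 'Mails': allmails,
                         'Title': alltitles, 'Phone Numbers': allphone})
fullfile = fullfile.drop_duplicates()  # keep the first occurrence of each row
fullfile.to_excel('G:\\Sheet_Name.xlsx', sheet_name='Sheet1', index=False)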

choose.files() defaulting to last directory used

According to the documentation for the choose.files function:
choose.files(default = "", caption = "Select files",
multi = TRUE, filters = Filters,
index = nrow(Filters))
If you would like to display files in a particular directory, give a fully qualified file mask (e.g., "c:\*.*") in the default argument. If a directory is not given, the dialog will start in the current directory the first time, and remember the last directory used on subsequent invocations.
My code is as follows:
AZPic = choose.files(default = "", caption = "Select Azimuth Picture", multi = FALSE, filters = Filters[c("All","jpeg"),])
STSPic = choose.files(default = "", caption = "Select Side to Side Elevation Picture", multi = FALSE, filters = Filters[c("All","jpeg"),])
FTBPic = choose.files(default = "", caption = "Select Front to Back Elevation Picture", multi = FALSE, filters = Filters[c("All","jpeg"),])
OrientationPic = choose.files(default = "", caption = "Select Orientation Picture", filters = Filters[c("All","jpeg"),])
Now when I run this code, it defaults to my home directory for all four calls.
It starts there for the first call of course, but afterwards shouldn't it remember the folder I navigated to?
I typically hop through another 3 or 4 folders to find the pictures for the job, but all of the pictures for each use of the program will be in the same folder.
Is there something I'm missing?
Thanks in advance.

How to display WebDynpro ABAP in ABAP report?

I've just started coding ABAP a few days ago, and I have a task to call a report from transaction SE38 and have the report's result shown on the screen of a WebDynpro application in SE80.
The report takes user input (e.g. Material Number, Material Type, Plant, Sales Org.) as conditions for its query, so the WebDynpro application must allow the user to key in these parameters.
Some related articles talk about using SUBMIT rep EXPORTING LIST TO MEMORY and CALL FUNCTION 'LIST_FROM_MEMORY', but so far I really have no idea how to implement that.
Any answers will be appreciated. Thanks!
You can export it to PDF. Then, when a user clicks a link, you run the conversion and display the file in the browser window.
To do so, you start by creating a job using the code below:
constants c_name type tbtcjob-jobname value 'YOUR_JOB_NAME'.
data v_number type tbtcjob-jobcount.
data v_print_parameters type pri_params.

call function 'JOB_OPEN'
  exporting
    jobname          = c_name
  importing
    jobcount         = v_number
  exceptions
    cant_create_job  = 1
    invalid_job_data = 2
    jobname_missing  = 3
    others           = 4.

if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
Then, you need to get the printer parameters in order to submit the report:
call function 'GET_PRINT_PARAMETERS'
  exporting
    destination            = 'LP01'
    immediately            = space
    new_list_id            = 'X'
    no_dialog              = 'X'
    user                   = sy-uname
  importing
    out_parameters         = v_print_parameters
  exceptions
    archive_info_not_found = 1
    invalid_print_params   = 2
    invalid_archive_params = 3
    others                 = 4.

v_print_parameters-linct = 55.
v_print_parameters-linsz = 1.
v_print_parameters-paart = 'LETTER'.
Now you submit your report using the filters that apply. Do not forget to add the job parameters to it, as the code below shows:
submit your_report_name
  to sap-spool
  spool parameters v_print_parameters
  without spool dynpro
  with ...(insert all your filters here)
  via job c_name number v_number
  and return.

if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
After that, you close the job:
call function 'JOB_CLOSE'
  exporting
    jobcount             = v_number
    jobname              = c_name
    strtimmed            = 'X'
  exceptions
    cant_start_immediate = 1
    invalid_startdate    = 2
    jobname_missing      = 3
    job_close_failed     = 4
    job_nosteps          = 5
    job_notex            = 6
    lock_failed          = 7
    others               = 8.

if sy-subrc = 0.
  commit work and wait.
else.
  exit. "// todo: err handling here
endif.
Now the job will proceed, and you'll need to wait for it to complete. Do it with a loop. Once the job is completed, you can get its spool output and convert it to PDF.
data v_rqident type tsp01-rqident.
data v_job_head type tbtcjob.
data t_job_steplist type tbtcstep occurs 0 with header line.
data t_pdf like tline occurs 0 with header line.

do 200 times.
  wait up to 1 seconds.

  call function 'BP_JOB_READ'
    exporting
      job_read_jobcount     = v_number
      job_read_jobname      = c_name
      job_read_opcode       = '20'
    importing
      job_read_jobhead      = v_job_head
    tables
      job_read_steplist     = t_job_steplist
    exceptions
      invalid_opcode        = 1
      job_doesnt_exist      = 2
      job_doesnt_have_steps = 3
      others                = 4.

  read table t_job_steplist index 1.
  if not t_job_steplist-listident is initial.
    v_rqident = t_job_steplist-listident.
    exit.
  else.
    clear v_job_head.
    clear t_job_steplist.
    clear t_job_steplist[].
  endif.
enddo.
check not v_rqident is initial.

call function 'CONVERT_ABAPSPOOLJOB_2_PDF'
  exporting
    src_spoolid              = v_rqident
    dst_device               = 'LP01'
  tables
    pdf                      = t_pdf
  exceptions
    err_no_abap_spooljob     = 1
    err_no_spooljob          = 2
    err_no_permission        = 3
    err_conv_not_possible    = 4
    err_bad_destdevice       = 5
    user_cancelled           = 6
    err_spoolerror           = 7
    err_temseerror           = 8
    err_btcjob_open_failed   = 9
    err_btcjob_submit_failed = 10
    err_btcjob_close_failed  = 11
    others                   = 12.
If you're going to send it via HTTP, you may need to convert it to BASE64 as well.
field-symbols <xchar> type x.
data v_offset(10) type n.
data v_char type c.
data v_xchar(2) type x.
data v_xstringdata_aux type xstring.
data v_xstringdata type xstring.
data v_base64data type string.
data v_base64data_aux type string.
"// t_base64data is assumed to be declared elsewhere as an internal table
"// with a 255-character DATA field.

loop at t_pdf.
  do 134 times.
    v_offset = sy-index - 1.
    v_char = t_pdf+v_offset(1).
    assign v_char to <xchar> casting type x.
    concatenate v_xstringdata_aux <xchar> into v_xstringdata_aux in byte mode.
  enddo.
  concatenate v_xstringdata v_xstringdata_aux into v_xstringdata in byte mode.
  clear v_xstringdata_aux.
endloop.

call function 'SCMS_BASE64_ENCODE_STR'
  exporting
    input  = v_xstringdata
  importing
    output = v_base64data.

v_base64data_aux = v_base64data.
while strlen( v_base64data_aux ) gt 255.
  clear t_base64data.
  t_base64data-data = v_base64data_aux.
  v_base64data_aux = v_base64data_aux+255.
  append t_base64data.
endwhile.

if not v_base64data_aux is initial.
  t_base64data-data = v_base64data_aux.
  append t_base64data.
endif.
And you're done!
Hope it helps.
As previous posters said, you should do extensive training before implementing such stuff in a productive environment.
However, calling WebDynpro ABAP within a report can be done with the help of the WDY_EXECUTE_IN_PLACE function module. You should pass it the WebDynpro application name and the necessary parameters.
CALL FUNCTION 'WDY_EXECUTE_IN_PLACE'
  EXPORTING
*   PROTOCOL                =
    INTERNALMODE            = ' '
*   SMARTCLIENT             =
    APPLICATION             = 'Z_MY_WEBDYNPRO'
*   CONTAINER_NAME          =
    PARAMETERS              = lt_parameters
*   SUPPRESS_OUTPUT         =
    TRY_TO_USE_SAPGUI_THEME = ' '
  IMPORTING
    OUT_URL                 = ex_url.

IF sy-subrc <> 0.
* Implement suitable error handling here
ENDIF.
