I am using the MYOB ODBC Driver to insert new customer records into MYOB (AccountRight Premier V19) from a .NET application, but the records are not being imported correctly. Specifically, the address part of the insert is not working, and that then appears to affect the position of every field after the address fields: everything seems to be shifted back one column.
Here is an example SQL statement as generated by the application:
Insert Into Import_Customer_Cards (CoLastName, CardStatus, CurrencyCode, Address1AddressLine1, Address1City, Address1State, Address1PostCode, Address1Country, Address1Phone1, Address1Phone2, Address1Phone3, Address1Fax, Address1Email, Address1Website, Address1ContactName, Address1Salutation, ABN) VALUES ('1 AAA TEST', 'N', 'AUD', '116 My Street', 'My Suburb', 'QLD', '4000', 'Australia', '31033383', '', '', '', '', '', 'This Bloke', '', '12345678910')
The value '116 My Street' is NOT being imported, and all subsequent fields move "up" one column, so the city winds up in the Address1StreetLine4 column, the state winds up in the city column, and so on within MYOB itself.
Also, the phone numbers and ABN fields just disappear! I cannot find them anywhere in the customer record in MYOB after the import has completed.
I have checked the MYOB error log file and there is nothing there to suggest something major has gone wrong.
I have tried everything I know to get this working, but I am now stumped.
Does anyone here have any idea at all as to what might be causing this?
My guess is that these address fields need some sort of "special" formatting. Am I close?
v19 address lines (1 to 4) are presented as if they are separate fields, but the driver actually maps them to a single field. One thing you could try is including Address1AddressLine2 through Address1AddressLine4 in your SQL statement, setting each to an empty string. Be careful not to exceed 255 characters across all four lines combined (noted you're in no danger of that here).
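For example, the statement from the question with the three extra address lines included (a sketch only; I haven't run this against the v19 driver):
Insert Into Import_Customer_Cards (CoLastName, CardStatus, CurrencyCode, Address1AddressLine1, Address1AddressLine2, Address1AddressLine3, Address1AddressLine4, Address1City, Address1State, Address1PostCode, Address1Country, Address1Phone1, Address1Phone2, Address1Phone3, Address1Fax, Address1Email, Address1Website, Address1ContactName, Address1Salutation, ABN) VALUES ('1 AAA TEST', 'N', 'AUD', '116 My Street', '', '', '', 'My Suburb', 'QLD', '4000', 'Australia', '31033383', '', '', '', '', '', 'This Bloke', '', '12345678910')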
Do not try to import a currency code unless you are sure your flavour of v19 supports currency settings. Strange things can happen if you try to import values that are not recognised by a specific version of v19.
The v19 ODBC driver actually invokes the same import/export routines exposed in the UI under the File menu. If you're not sure whether you're setting values correctly, try importing them using the UI (to do this, turn your import data into a CSV). You will get better error responses.
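For example, the insert above would look something like this as a two-line CSV (header row plus one data record; the empty columns are dropped for brevity, and you match each column to its MYOB field in the import screen):
CoLastName,CardStatus,CurrencyCode,Address1AddressLine1,Address1City,Address1State,Address1PostCode,Address1Country,Address1Phone1,Address1ContactName,ABN
1 AAA TEST,N,AUD,116 My Street,My Suburb,QLD,4000,Australia,31033383,This Bloke,12345678910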
It turns out this is a bug in the MYOB ODBC driver.
Rather than persevere with trying to find a resolution, we are upgrading to the latest version of MYOB so we can use the new API procedures and do away with the ODBC layer altogether.
This would appear to be the quickest and most reliable solution.
I have a Power BI dashboard I've made that pulls its data from a REDCap database using an API. It looks like this, with mostly text in the various columns:
What I'd love to do is make the fields circled in red real files that could be clicked and downloaded. I know that the API allows me to pull files from it. I've used R with code like this (which individually names the record and field I want):
library(REDCapR)
redcap_download_file_oneshot(
  redcap_uri = "https://redcap.company.org/redcap/api/",
  token = "################",
  record = "1",
  field = "full_protocol_attachment_t_v2",
  event = "",
  repeat_instrument = NULL,
  repeat_instance = NULL,
  verbose = TRUE,
  config_options = NULL,
  overwrite = TRUE
)
to download files individually, one at a time. The problem is twofold:
If I were to use R, I have no idea how to automate that snippet of code for every row I pull from the database (or for new rows as they appear).
My understanding of Power BI is that if I do use R, it makes refreshing the data harder when the report is published online. Right now, since all the data comes from an API directly into Power BI, I don't have to set up any special permissions or gateways to have automated refreshes.
So my question is: is there a way to do this directly within Power BI? Like a calculated column or something that would pull a particular record's file based on which row it is in?
The only thing you can do in native Power BI is have a URL which, when clicked, will open the destination for you. Can you create a full URL for the file download?
In FSCM I am looking to modify the search view on the Add/Update PO page (Main Menu --> Purchasing --> Purchase Orders --> Add/Update POs) to display the Requisition ID associated with the PO in the search results. The only table I have found that has both PO_ID and REQ_ID is PS_PO_LINE_DISTRIB; however, unless I use a SELECT DISTINCT clause, I get multiple PO_ID rows when there is more than one line on a PO.
Within Purchase Order Inquiry you can see the Requisition IDs related to a PO by clicking on the Document Status link inside the Purchase Order Inquiry details page.
I started looking at the PeopleCode within Purchase Order Inquiry to see how it links the PO to a requisition, and it appears to use work tables with related PeopleCode function libraries, but I wasn't able to figure out how they get linked. I am hoping someone else may know the answer to this. Thank you.
I'm on an old version of PeopleSoft (SCM 8.80, Tools 8.51), so your mileage may vary. I'm assuming you're familiar with App Designer. If not, comment below and I'll add some details about what I'm clicking on.
Find the name of the Add/Update PO component.
Open the PURCHASE_ORDER component in App Designer. Now let's find the name of the search record. Note that there is a different record for the Add Search Record, so if you want to change that too, do all of this for that record as well.
Open the PO_SRCH record, and add the REQ_ID field to it. Make sure you mark the field as a key. You should consider saving your modified PO_SRCH under a new name in case you want to be able to revert to vanilla PeopleSoft. If you do, change the Search Record in the component to your new record name.
We can see that PO_SRCH is a view. So let's modify the view to pull in REQ_ID from PO_LINE_DISTRIB. As you mentioned above, there doesn't appear to be another table with both PO_ID and REQ_ID, so you'll have to do a SELECT DISTINCT.
We should do a LEFT OUTER JOIN instead of a standard join because, with a standard join, a purchase order saved with no lines would never be retrievable in this window. Since REQ_ID is a key field, it can't be null, so we use a CASE expression to substitute a blank when there is no requisition.
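A rough sketch of the view SQL (the base table is assumed to be PS_PO_HDR here, and the "..." stands for whatever the delivered PO_SRCH already selects -- keep that list intact and just add the join and the REQ_ID expression):
SELECT A.BUSINESS_UNIT
     , A.PO_ID
     , ...
     , CASE WHEN D.REQ_ID IS NULL THEN ' ' ELSE D.REQ_ID END
  FROM PS_PO_HDR A
  LEFT OUTER JOIN (SELECT DISTINCT BUSINESS_UNIT, PO_ID, REQ_ID
                     FROM PS_PO_LINE_DISTRIB) D
    ON D.BUSINESS_UNIT = A.BUSINESS_UNIT
   AND D.PO_ID = A.PO_ID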
One odd thing I ran into here: building the view now gave me an error about selecting fewer columns in the SQL than I had in my record definition. I solved it by modifying the view for SQL Server. I've never had to do that before and I don't know why I had to do it for this specific record. But anyway, I entered the same SQL under the record's "Microsoft SQL Server" definition.
In the properties of PO_SRCH, we can see that it has a related language record. If you're only using one language, you can probably get away without changing this, but I'll do it for completeness. Open PO_SRCHLN. Now add REQ_ID to it (mark it as a key field like you did above), and save it as PO_SRCHLN2 (I'm saving it under a new name so I don't break anything else that may be using PO_SRCHLN).
Edit the SQL the same way as you did above. Note: I didn't have to also change the Microsoft SQL Server definition like I did above. I have no idea why.
Now build PO_SRCHLN2.
Go back to PO_SRCH and change its related language record to PO_SRCHLN2.
Now build PO_SRCH.
Hopefully you didn't get any errors and your search page has the requisition ID in it now. My system doesn't use requisitions so they're all blank in the example below, but the new field is there.
I am trying to build a PowerApp to log setup times of our machines by our fitters.
This is what my app looks like:
There are buttons named "Uhrzeit". Pressing these will write the current date and time into the Date/Time fields. I am using the following code:
UpdateContext({Total8:(Text( Now(); "[$-de-DE]dd/mm/yyyy hh:mm:ss" ))})
The Date/Time field is named Total8.
The code works well, but after saving the form and opening a new record, the old data is still shown in the fields. By clicking the "Zeiten zurücksetzen" button I can "delete" the old data.
UpdateContext({Total8:""})
Problem: When I open one of the older records, its data is not shown in the form; only the values from the last record are there. In the Common Data Service, where my records are saved, the values are correct.
As an example, I am saving this record:
When I open a new record, the values from record 1 are still shown. This should not happen if my app were working properly.
For your information:
If I enter the date/time manually without tapping the button, then save the record and open a new record, I don't have the problem. I think UpdateContext is not the right code to use here.
Can anyone help me solve the problem?
I don't think there's a problem with using the contexts in this way -- but remember that a context is just a variable. It isn't automatically linked to a datasource in any special way - so if you set it equal to Now(), it's going to keep that value until you do something different.
When you view an old record, you need to get the data from CDS and update your contexts to match the CDS data. Does this make sense?
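For example (just a sketch -- the gallery and field names here are placeholders for whatever your app actually uses), in the gallery's OnSelect or the screen's OnVisible you could do something like:
UpdateContext({Total8: Text(BrowseGallery1.Selected.Total8; "[$-de-DE]dd/mm/yyyy hh:mm:ss")})
That way the context is refreshed from the saved CDS value whenever you open an old record, instead of keeping whatever it held last.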
Yeah, that's my problem.
I want the variable to be linked to a datasource. Or is it possible to write the date/time into the fields without using a context variable?
I'm looking to automate an import into Remedy ARS 8.1, and I'm 99.9999% there... I just need to change what the import does with duplicate records, as everything else seems to be working exactly as desired.
In Remedy .armx files (the mapping files for the dataimporttool), there is a <datahandling> node with a duplicaterecords attribute. The only documentation I can find on it mentions the value GEN_NEW_ID, which would logically map to the "Generate New ID for Duplicate Records" option in the GUI import tool. I need the value that maps to the "Update Old Record with New Record's Data" GUI option (both of these options and the other three possible options are described on the Defining Data Import Preferences page in the BMC docs).
Other than that one page (Importing in...), and the several local copies of the exact same paragraph in all the Remedy documentation I have, Google turns up nothing. Please tell me someone has this information somewhere!
By saving .armx files from the GUI, I have found the following options and values for the duplicaterecords attribute in the <datahandling> tag.
Title                                         Value in .armx file
Generate New ID for All Records               GEN_NEW_ID
Reject Duplicate Records                      DUP_ERROR
Generate New ID for Duplicate Records         DUP_NEW_ID
Replace Old Record with New Record            DUP_OVERWRITE
Update Old Record with New Record's Data      DUP_MERGE
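So to get the "Update Old Record with New Record's Data" behaviour, the mapping file should contain something like the following (any other attributes or child elements of the node left exactly as the GUI exported them):
<datahandling duplicaterecords="DUP_MERGE" ...>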
I have a requirement to load CSV files into the database using Oracle APEX or PL/SQL code, but the problem is that the CSV files will not always come with the same number of columns or the same column names.
I should create the table and upload the data dynamically, based on the file name and the data being uploaded.
For every file I need to create a new table dynamically and insert the data present in the CSV file.
For Example:
File 1:
col1 col2 col3 col4 (NOTE: if I upload File 1, the table should be created dynamically based on the file name, with the same column names as the CSV headers and the same data as the file.)
File 2:
col1 col2 col3 col4 col5
File 3:
col4 col2 col1 col3
Depending on the columns and the file name, I need to create a table for every file upload.
Is it possible to load files like this?
If yes, please help me with this.
Regards,
Sachin.
((Where's the PL/SQL code in this solution!!??! Bear with me... the answer is buried in here somewhere... I introduced some considerations and assumptions you will need to think about before going into the task. In the end, you'll find that Oracle APEX actually has a built-in solution that satisfies exactly what you've specified... with some caveats.))
If you are working within the Oracle APEX platform, you will have some advantages. APEX version 4.2 and higher has a new page element called "Data Loading". The disadvantage, however, is that the definition of the upload target is fixed and not dynamic. You will need to know how your table is structured prior to loading the data.
One approach to overcome this is to build a generic, two-column table as your target, which will serve for all uploads. Column 1 will be your file name and column 2 will be a single CLOB column, which will contain the entire data file's contents, including the header row. The "Data Loading" element will give the user the opportunity to verify and select this mapping convention in a couple of clicks.
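A minimal sketch of such a staging table (the names are just placeholders):
CREATE TABLE csv_upload_staging (
  file_name  VARCHAR2(255),
  file_data  CLOB
);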
At this point, it's mostly PL/SQL backend work doing the heavy lifting to parse and transform the uploaded data. As for dynamic table creation, I have noticed that the Oracle package DBMS_SQL allows the execution of DDL commands, which could be the route to creating the custom tables.
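A rough sketch of that route, with a hard-coded DDL string standing in for whatever you build from the parsed header row (the table and column names are placeholders, and every column is created as VARCHAR2 pending the data-type question below):
DECLARE
  l_cursor INTEGER;
  l_ddl    VARCHAR2(4000);
BEGIN
  -- In practice this string would be assembled from the file name and
  -- the header row parsed out of the staging CLOB.
  l_ddl := 'CREATE TABLE file1_upload ('
        || 'col1 VARCHAR2(4000), col2 VARCHAR2(4000), '
        || 'col3 VARCHAR2(4000), col4 VARCHAR2(4000))';
  l_cursor := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(l_cursor, l_ddl, DBMS_SQL.NATIVE);  -- DDL is executed at parse time
  DBMS_SQL.CLOSE_CURSOR(l_cursor);
END;
/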
Alex Poole's comment is important as well: you will need to make some blanket assumption about the data types, or have a provision for giving more clues about what kind of data each column contains. Relying on a sample of the existing data values is not good... what if all the values in your upload are null? I recommend perhaps a second header row in the data input with a clue about the type of data for each column, right alongside the intended header names, maybe: AAAAA for a five-character column, # for a numeric, MM/DD/YYYY for a date with a specific format mask.
The easier route:
You will need to allow your end-user access to a developer-role account on a workspace of your APEX server. It is not as scary as you think. With careful instruction and some simple precautions, I have been able to make this work with even the most non-technical of users. The reason for this is that there is a more powerful upload tool found under the following menu item:
SQL Workshop --> Utilities --> Data Workshop
There is a choice under "Data Load" --> "Spreadsheet Data"
The data load tool will automatically do the following:
Accept a CSV formatted file through a browse function on your client machine
Upload the file and parse the first record for the column layout (names)
Allow the user to create a new table from the uploaded file, or to map to an existing one.
For new tables, each column's data type can be declared, along with a specific numeric/date format mask if additional conversion of the uploaded data is necessary.
Delimiter type, optional enclosures (like double quotes), decimal conventions and currency types can also be declared prior to parsing the uploaded file.
Once the user has identified all these mappings and settings, the table is created with the uploaded data. Any errors in record upload are reported immediately afterwards with detailed feedback on the failed records.
A security consideration to note:
You probably do not want to give end users access to your APEX server's backend... but you CAN create a new workspace just for your end users, with a new database schema for receiving their uploads, maybe with some careful resource controls. Developer is the minimum role needed, but even if the end users see the other features there, they won't have access to anything important from an isolated workspace.
I have implemented the isolated-workspace approach on a 4.0/4.1 release APEX platform a few years back, and it worked nicely. Our end user had control over the staging and quality checking of her data inputs (from Excel spreadsheet/CSV exports collected from a combination of sources). I suppose it may have been even better to cut her out of the picture entirely and focus on automating the export-review-upload process between our database and her other sources. In this case, though, the volume of data involved was not great (hundreds to thousands of records) and the need to manually review and edit the exported data before pushing it into the database was very important... so the human element still mattered here - it is something you'll want to think about now.