What are the possible values for the duplicaterecords attribute in armx files - remedy

I'm looking to automate an import to Remedy ARS 8.1, and I'm 99.9999% there... I just need to change what the import does with duplicate records, as everything else seems to be working exactly as desired.
In Remedy .armx files (the mapping file for the dataimporttool), there is a <datahandling> node with a duplicaterecords attribute. The only documentation I can find on it mentions the value GEN_NEW_ID, which would logically map to the "Generate New ID for Duplicate Records" option in the GUI import tool. I need the value that logically maps to the "Update Old Record with New Record's Data" GUI option (both of these options and the other three possible options are described on the Defining Data Import preferences page in the BMC docs).
Other than that one page (Importing in...), and the several local versions of the exact same paragraph in all the Remedy documentation I have, Google turns up nothing. Please tell me someone has this information somewhere!

By saving .armx files from the GUI, I have found the following options and values for the duplicaterecords attribute in the <datahandling> tag.
GUI option -> value in the .armx file
Generate New ID for All Records -> GEN_NEW_ID
Reject Duplicate Records -> DUP_ERROR
Generate New ID for Duplicate Records -> DUP_NEW_ID
Replace Old Record with New Record -> DUP_OVERWRITE
Update Old Record with New Record's Data -> DUP_MERGE
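So, to make an automated import behave like the "Update Old Record with New Record's Data" GUI option, the mapping file needs DUP_MERGE on that attribute. As a minimal sketch (the exact parent elements and any sibling attributes are assumptions here; compare against a file saved from your own GUI client):

<datahandling duplicaterecords="DUP_MERGE"/>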

Related

Get Requisition ID based on PO

In FSCM I am looking to modify the Search view on the Add/Update PO page (Main Menu--> Purchasing--> Purchase Orders--> Add/Update POs) to display the Requisition ID associated with the PO in the search results page. The only table I have found that has both PO_ID and REQ_ID is PS_PO_LINE_DISTRIB; however, unless I use a SELECT DISTINCT clause I will get multiple PO_ID rows when there is more than one line on a PO.
Within Purchase Order Inquiry you can see the Requisition IDs related to a PO by clicking the Document Status link inside the Purchase Order Inquiry details page.
I started looking at the PeopleCode within the Purchase Order Inquiry to see how they link the PO to a Requisition, and it appears to use work tables with related PeopleCode function libraries, but I wasn't able to figure out how they get linked. I am hoping someone else may know the answer to this. Thank you.
I'm on an old version of PeopleSoft (SCM 8.80, Tools 8.51), so your mileage may vary. I'm assuming you're familiar with App Designer. If not, comment below and I'll add some details about what I'm clicking on.
Find the name of the Add/Update PO component.
Open the PURCHASE_ORDER component in App Designer. Now let's find the name of the search record. Note that there is a different record for the Add Search Record, so if you want to change that too, do all of this for that record as well.
Open the PO_SRCH record, and add the REQ_ID field to it. Make sure you mark the field as a key. You should consider saving your modified PO_SRCH under a new name in case you want to be able to revert to vanilla PeopleSoft. If you do, change the Search Record in the component to your new record name.
We can see that PO_SRCH is a view. So let's modify the view to pull in REQ_ID from PO_LINE_DISTRIB. As you mentioned above, there doesn't appear to be another table with both PO_ID and REQ_ID, so you'll have to do a SELECT DISTINCT.
We should do a LEFT OUTER JOIN instead of a standard join because if you do a standard join and you enter a purchase order with no lines and save it, then you'll never be able to retrieve that purchase order in this window. Since REQ_ID is a key field, we can't have a null, so we have to do the CASE.
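As a rough sketch of what the reworked view SQL might look like (the base table PS_PO_HDR and the columns other than REQ_ID are assumptions here; copy the real column list from your delivered PO_SRCH view):

SELECT DISTINCT
       H.BUSINESS_UNIT
     , H.PO_ID
     , H.VENDOR_ID
     , CASE WHEN D.REQ_ID IS NULL THEN ' ' ELSE D.REQ_ID END
  FROM PS_PO_HDR H
  LEFT OUTER JOIN PS_PO_LINE_DISTRIB D
    ON D.BUSINESS_UNIT = H.BUSINESS_UNIT
   AND D.PO_ID = H.PO_ID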
One odd thing that I ran into here was building the view now gave me an error about selecting fewer columns in the SQL than I had in my record definition. I solved it by modifying the view for SQL Server. I've never had to do that before and I don't know why I had to do it for this specific record. But anyway, I entered the same SQL under the record's "Microsoft SQL Server" definition.
In the properties of PO_SRCH, we can see that it has a related language record. If you're only using one language, you can probably get away without changing this, but I'll do it for completeness. Open PO_SRCHLN. Now add REQ_ID to it (mark it as a key field like you did above), and save it as PO_SRCHLN2 (I'm saving it under a new name so I don't break anything else that may be using PO_SRCHLN).
Edit the SQL the same way as you did above. Note: I didn't have to also change the Microsoft SQL Server definition like I did above; I have no idea why.
Now build PO_SRCHLN2.
Go back to PO_SRCH and change its related language record to PO_SRCHLN2.
Now build PO_SRCH.
Hopefully you didn't get any errors and your search page has the requisition ID in it now. My system doesn't use requisitions so they're all blank in the example below, but the new field is there.

PowerApps - stuck with UpdateContext

I am trying to build a PowerApp to log setup times of our machines by our fitters.
This is what my app looks like:
There are buttons named "Uhrzeit". Pressing these will write the current date and time into the Date/Time fields. I am using the following code:
UpdateContext({Total8:(Text( Now(); "[$-de-DE]dd/mm/yyyy hh:mm:ss" ))})
The Date/Time field is named Total8.
The code works well, but after saving the form and opening a new record, the old data is still shown in the fields. By clicking the button "Zeiten zurücksetzen" I can "delete" the old data:
UpdateContext({Total8:""})
Problem: when I open one of the older records, its data is not shown in the form; there is only the value from the last record. In the Common Data Service, where my records are saved, the values are correct.
As an example, I am saving this record:
When I open a new record, the values of record 1 are still shown. This would not happen if my app worked properly.
For your information:
If I enter the date/time without tapping the button, save the record and open a new record, I don't have the problem. I think UpdateContext is not the code I should use here.
Can anyone help me solve the problem?
I don't think there's a problem with using contexts this way, but remember that a context is just a variable. It isn't automatically linked to a data source in any special way, so if you set it equal to Now(), it's going to keep that value until you do something different.
When you view an old record, you need to get the data from CDS and update your contexts to match the CDS data. Does this make sense?
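For example, if existing records are opened from a gallery, something along these lines in the gallery's OnSelect would refresh the context from the saved record (just a sketch; Gallery1 and the column name SetupEnd are placeholders for your own gallery and CDS field):

UpdateContext({Total8: Text(Gallery1.Selected.SetupEnd; "[$-de-DE]dd/mm/yyyy hh:mm:ss")})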
Yeah, that's my problem.
I want the variable to be linked to a data source. Or is it possible to write the date/time into the fields without using a context variable?

How to get all rows from a page list and convert them to CSV using pxConvertResultsToCSV

I have a repeating grid layout whose source is a report definition. The grid displays twenty rows per page, so if there are thirty-three rows, there are two pages.
I have been given a task to export all of the grid's data to CSV. I have found the pxConvertResultsToCSV activity; it requires a page list with the properties to convert. I use pgRepPgSubSectionMySectionListB.pxResults for this, but I have realized that the property pxResults contains only the first twenty elements of pgRepPgSubSectionMySectionListB, and I must export all the rows to CSV. How can I achieve this? Thank you.
First run your report by calling the pxRetrieveReportData activity of class Rule-Obj-Report-Definition in your activity.
Syntax: call Rule-Obj-Report-Definition.pxRetrieveReportData
It will ask for these parameters:
pyReportName - your report definition name
pyReportClass - the class of the report definition
pyPageName - any page name, for example ReportListExport. This page must be defined in Pages & Classes with class Code-Pega-List.
After successful execution of this step, you will get ReportListExport.pxResults on the clipboard.
Now use this pxResults for export.
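Putting it together, the activity would have two steps along these lines (only an outline; ReportListExport is just the page name chosen above):

Step 1: Call Rule-Obj-Report-Definition.pxRetrieveReportData (pass pyReportName, pyReportClass, and pyPageName = ReportListExport)
Step 2: Call pxConvertResultsToCSV, passing ReportListExport.pxResults as the page list of results to convert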
There is one more activity to export your report to Excel.
Call the pzViewExportToExcel activity after running your report, and keep ReportListExport.pyReportDefinition as the step page of that step.
This is the preferred one.
This question is a bit old now, so the OP has probably solved the problem and moved on at this point. But for future viewers, there is an easier way to solve this.
Pega includes a gadget called the "Record Editor", which can be used to display a report definition as an editable data table. It shows the provided report definition in a simple table as normal, but users can also edit rows, delete rows and add new ones. It also includes import and export actions at the top, so users can export the entire result set shown in the table to CSV and then re-import the changes after editing. You can find more information on this gadget and how to use it in this community article.
If you simply want to provide an option at the top of a table sourced from a report definition that allows users to export the results as CSV, without using the Record Editor gadget, there is an API for that as well. The activity "pxDownloadDataRecordsAsCSV" in class "PegaAccel-Task-DataTableEditor" does this. It accepts the class and name of a report definition as parameters, runs that report and serves up the contents as a CSV file.
The second part here isn't too different from AJ's solution; it's just an existing, parameterized activity you can use instead of writing one yourself.

Where to find 2sxc data in the database

I created an app with the 2sxc module.
Now I have about 500 empty rows which I want to delete.
I searched for them in the database to delete them all, but I cannot seem to find them, and I think it is a waste of time to delete them one by one.
The data is in the "Manage content / data" table.
Let me know please.
I have another question:
If I edit an item, the title of the module gets changed to the first item's 'name' field. How can I avoid that? Is it a bug?
Thanks in advance.
Basically JKing's answer is correct: this kind of bulk operation can easily be done using export/import, because on re-import you can tell 2sxc to delete all items not found in the import. This ensures that 2sxc can take care of data integrity, etc. Instructions: https://2sxc.org/en/Learn/Content-Export-and-Import
So the correct steps are:
Export the list
Open it in Notepad or an XML editor (or use Excel, as shown in the link)
Remove all the items you don't want
Re-import, but choose the option to "Remove all entities not found in import"
You're set :)

Read a CSV file that has a different number of columns every time and create a table based on the column names in the CSV file

I have a requirement to load a CSV file into the database using Oracle APEX or PL/SQL code, but the problem is that the CSV file will not always come with the same number of columns or the same column names.
I should create the table and upload the data dynamically, based on the file name and the data that I'm uploading.
For every file I need to create a new table dynamically and insert the data present in the CSV file.
For Example:
File1:
col1 col2 col3 col4 (NOTE: if I upload File 1, the table should be created dynamically, named after the file, with the same column names as the CSV headers and loaded with the file's data.)
file 2:
col1 col2 col3 col4 col5
file 3:
col4 col2 col1 col3
Depending on the columns and the file name, I need to create a table for every file upload.
Can we load files like this or not?
If yes, please help me with this.
Regards,
Sachin.
((Where's the PL/SQL code in this solution!!??! Bear with me... the answer is buried in here somewhere... I introduce some considerations and assumptions you will need to think about before going into the task. In the end, you'll find that Oracle APEX actually has a built-in solution that satisfies exactly what you've specified... with some caveats.))
If you are working within the Oracle APEX platform, you will have some advantages. APEX version 4.2 and higher has a new page element called "Data Loading". The disadvantage, however, is that the definition of the upload target is fixed and not dynamic. You will need to know how your table is structured prior to loading the data.
One approach to overcome this is to build a generic, two-column table as your target, which will serve for all uploads. Column one will be your file name and column two will be a single CLOB, which will contain the entire data file's contents including the header row. The "Data Loading" element will give the user the opportunity to verify and select this mapping convention in a couple of clicks.
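A minimal sketch of such a staging table (the table and column names here are just placeholders):

CREATE TABLE csv_upload_stage (
  file_name  VARCHAR2(400),
  file_data  CLOB
);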
At this point, it's mostly PL/SQL back-end work doing the heavy lifting to parse and transform the uploaded data. As for the dynamic table creation, the Oracle package DBMS_SQL allows the execution of DDL commands, which could be the route to making custom tables.
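As a minimal sketch of that dynamic DDL step, assuming the header row has already been parsed into a collection of column names and that every column is created as VARCHAR2 on a first pass (the table and column names here are placeholders):

DECLARE
  TYPE t_cols IS TABLE OF VARCHAR2(128);
  -- column names as parsed from the uploaded file's header row
  l_cols t_cols := t_cols('COL1', 'COL2', 'COL3', 'COL4');
  l_stmt VARCHAR2(4000);
  l_cur  INTEGER;
BEGIN
  -- build the CREATE TABLE statement, validating each identifier
  l_stmt := 'CREATE TABLE ' || DBMS_ASSERT.SIMPLE_SQL_NAME('FILE1_UPLOAD') || ' (';
  FOR i IN 1 .. l_cols.COUNT LOOP
    l_stmt := l_stmt || DBMS_ASSERT.SIMPLE_SQL_NAME(l_cols(i)) || ' VARCHAR2(4000)';
    IF i < l_cols.COUNT THEN
      l_stmt := l_stmt || ', ';
    END IF;
  END LOOP;
  l_stmt := l_stmt || ')';

  -- parsing DDL through DBMS_SQL executes it immediately
  l_cur := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(l_cur, l_stmt, DBMS_SQL.NATIVE);
  DBMS_SQL.CLOSE_CURSOR(l_cur);
END;
/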
Alex Poole's comment is important as well: you will need to make some blanket assumption about the data types or have a provision to give more clues about what kind of data is contained. Assuming you can rely on a sample of existing data values is not good... what if all the values in your upload are null? I recommend perhaps a second column in the data input with a clue about the type of data for each column... just like the intended header names, maybe: AAAAA for a five-character column, # for a numeric, MM/DD/YYYY for a date with a specific mask.
The easier route:
You will need to allow your end-user access to a developer-role account on a workspace of your APEX server. It is not as scary as you think. With careful instruction and some simple precautions, I have been able to make this work with even the most non-technical of users. The reason for this is that there is a more powerful upload tool found under the following menu item:
SQL Workshop --> Utilities --> Data Workshop
There is a choice under "Data Load" --> "Spreadsheet Data"
The data load tool will automatically do the following:
Accept a CSV formatted file through a browse function on your client machine
Upload the file and parse the first record for the column layout (names)
Allow the user to create a new table from the uploaded file, or to map to an existing one.
For new tables, each column data type can be declared and also a specific numeric/date mask if additional conversion from the uploaded data is necessary.
Delimiter type, optional enclosures (like double quotes), decimal conventions and currency types can also be declared prior to parsing the uploaded file.
Once the user has identified all these mappings and settings, the table is created with the uploaded data. Any errors in record upload are reported immediately afterwards with detailed feedback on the failed records.
A security consideration to note:
You probably do not want to give end users access to your APEX server's backend... but you CAN create a new workspace... just for your end users... create a new database schema for receiving their uploads, maybe with some careful resource controls. Developer is the minimum role needed... but even if the end users see the other stuff, they won't have access to anything important from an isolated workspace.
I implemented the isolated-workspace approach on a 4.0/4.1 release APEX platform a few years back, and it worked nicely. Our end user had control over the staging and quality checking of her data inputs (from Excel spreadsheet/CSV exports collected from a combination of sources). I suppose it may have been even better to cut her out of the picture entirely and to focus on automating the export-review-upload process between our database and her other sources. In this case, the volume of data involved was not great enough to justify that (hundreds to thousands of records), and manual review and editing of the exported data was very important prior to pushing it into the database... so the human element was still important in this case - it is something you'll want to think about now.
