Function of Rows and Rowsets in PeopleCode

I'm trying to get a better understanding of what Rows and Rowsets are used for in PeopleCode. I've read through PeopleBooks and still don't feel like I have a good understanding. I'm looking for more understanding of these as they pertain to Application Engine programs. Perhaps walking through an example may help. Here are some specific questions I have:
I understand that Rowset, Row, Record, and Field objects are used to access component buffer data, but is this still the case for stand-alone Application Engine programs run via Process Scheduler?
What would be the need or advantage of using these as opposed to using SQL objects/functions (CreateSQL, SQLExec, etc.)? I often see AE programs where a Rowset object is instantiated with CreateRowset and populated with a .Fill method and a SQL WHERE clause, and I don't quite understand why a SQL object was not used instead.
I've seen in PeopleBooks that a Row object represents a row in a component scroll; how does a component scroll relate to the row? I've seen references to rows having different scroll levels; is this just a way of grouping and nesting related data?
After you have instantiated a Rowset object with CreateRowset, what are typical uses of it in the program afterwards? How would you perform logic (If, Then, Else, etc.) on data retrieved by the rowset, or use it to update data?
I appreciate any insight you can share.

You can still use Rowsets, Rows, Records and Fields in stand-alone Application Engines. Application Engines do not have component buffer data because they are not running within the context of a component. Therefore, to use these objects you need to populate them yourself using built-in methods such as .Fill() on a rowset or .SelectByKey() on a record.
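For example, a stand-alone record can be populated by key (a minimal sketch; the operator ID here is made up):

Local Record &oprRec;

&oprRec = CreateRecord(Record.PSOPRDEFN);
&oprRec.OPRID.Value = "PS";   /* key value to look up */
If &oprRec.SelectByKey() Then
   /* the record's fields are now populated from the database */
   MessageBox(0, "", 0, 0, &oprRec.OPRDEFNDESCR.Value);
End-If;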
The advantage of using rowsets over SQL is that it makes the CRUD operations easier. There are built-in methods for selecting, updating, inserting and deleting. Additionally, you don't have to declare a separate variable for every field you select, as you would with a SQL object. Another advantage is that when you do the Fill, the data is read into memory, whereas if you looped through a SQL object, the SQL cursor would stay open longer. The rowset, row, record and field objects also have a lot of other useful methods, such as ExecuteEdits (validation) or copying from one rowset/row/record to another.
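For contrast, here is roughly what the same SELECT as the Fill example further down looks like with a SQL object (a minimal sketch); note that every selected field needs its own output variable, and the cursor stays open for the duration of the loop:

Local SQL &sql;
Local string &oprId, &descr;

&sql = CreateSQL("SELECT OPRID, OPRDEFNDESCR FROM PSOPRDEFN WHERE ACCTLOCK = 1");
While &sql.Fetch(&oprId, &descr)
   /* process one row at a time; the cursor remains open here */
End-While;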
This question is a bit less clear to me, but I'll try to explain. If you have a page, it would have a level 0 row. That row could then have multiple level 1 rowsets, and under each of those there could be level 2 rowsets.
               Level0
              /      \
        Level1        Level1
        /    \        /    \
   Level2  Level2  Level2  Level2
If one of your level 1 rowsets contained 3 rows, then you would find 3 Row objects in the Rowset associated with that level 1 scroll. I'm not sure I've explained this well enough to answer what you need; please clarify if I can provide more info.
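In component PeopleCode you walk that nesting with rowset methods (a minimal sketch; the scroll record names here are hypothetical):

Local Rowset &level0, &level1, &level2;
Local integer &i;

&level0 = GetLevel0();
&level1 = &level0(1).GetRowset(Scroll.PARENT_REC);    /* level 1 under the level 0 row */
For &i = 1 To &level1.ActiveRowCount
   &level2 = &level1(&i).GetRowset(Scroll.CHILD_REC); /* level 2 under this level 1 row */
End-For;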
Typically, after I create a rowset, I loop through it, access the record on each row, and do some processing with it. In the example below, I loop through all locked accounts, prefix their descriptions with LOCKED, and then update the database.
Local boolean &updateResult;
Local integer &i;
Local Record &lockedAccount;
Local Rowset &lockedAccounts;

/* Read all locked accounts into memory */
&lockedAccounts = CreateRowset(Record.PSOPRDEFN);
&lockedAccounts.Fill("WHERE ACCTLOCK = 1");

For &i = 1 To &lockedAccounts.ActiveRowCount
   &lockedAccount = &lockedAccounts(&i).PSOPRDEFN;
   If Left(&lockedAccount.OPRDEFNDESCR.Value, 6) <> "LOCKED" Then
      &lockedAccount.OPRDEFNDESCR.Value = "LOCKED " | &lockedAccount.OPRDEFNDESCR.Value;
      &updateResult = &lockedAccount.Update();
      If Not &updateResult Then
         /* Handle the failed update */
      End-If;
   End-If;
End-For;

Related

MS Project: How to set daily actual work for a task using a JavaScript Add-In?

I want to synchronize data for actual work from a web-based application of my company with MS Project. I am currently developing an Add-In with JavaScript in order to achieve this:
The red circle in my screenshot shows the data that I want to set programmatically. However, I have no idea how to achieve this.
I understand that I can get task GUIDs and then set task fields using the task GUID and the field ID. This way I can save the cumulative actual work, but not per day as in my screenshot.
The API docs on the MS Office website are rather hard to read and navigate. Any help would be appreciated!
Let's first separate the language from the operation.
Operationally, based on your circle, you want to set work for a task on individual days. This is done using TimeScaleData; see https://learn.microsoft.com/en-us/previous-versions/office/developer/office-2003/aa206255(v=office.11) . When I did something similar (in VBA), I had to (1) get an array of time scale values, then (2) walk through that array and set work for each of those days:
Set timeScaleValsArry = myTask.Assignments(1).TimeScaleData(startDay, endDay, pjAssignmentTimeScaledWork, pjTimescaleDays)
For a = 1 To timeScaleValsArry.Count
    timeScaleValsArry(a).Value = hoursToWorkThatDay
Next
Breaking down the elements above:
myTask is the task (of type Task) I want to manipulate.
Assignments is an array representing each resource assigned to the task; for my purposes, I only ever had 1 resource assigned, hence the index of (1).
TimeScaleData is the function that returns the array, starting on the day startDay (whatever you want that to be) and ending on endDay; pjAssignmentTimeScaledWork tells the function what data we want to work with (work, in this case, but there are alternates), and pjTimescaleDays is the frequency you want to work with (you can go down to minutes, for instance, or up to years).
Then the returned array timeScaleValsArry is walked, and inside the loop the daily assignment for each value is manipulated. You'd need to customize this part to meet your needs; alternatively, you don't even need to loop if you always have three days: just hard-code the array indices.
As far as language goes, this is clearly doable in VBA. Doing it in C# as a VSTO add-in has very similar syntax, and I'd presume JavaScript (what are you using, Script Lab?) would as well.

PLSQL Procedure to change data across a full database

I've been asked to write a PL/SQL procedure to 'clean up' codes in a database. The codes are VARCHAR2 and look something like 00000001. They are used everywhere in the application. My new employer wants me to make the codes more readable, as in turning 00000001 into just 1 everywhere it is used.
My question is: how would one even go about that? I asked for clarification and it's still not clear, and for fear of looking foolish I won't ask again. Any guidance would be welcome.
Let me start by saying that this sounds like a VERY BAD IDEA!
If you persist, it sounds like you will need to use dynamic SQL, with the basic process being:
Query all_tab_cols to get a list of columns (I'm hoping all the columns that use these codes follow a naming standard, e.g. XXX_CD).
Loop over those tables/columns to see if your value is there.
Update the values in that table.
...
Profit?
However, then you get stuck by realities: if the code is in a foreign key, you can't just update it. You'd have to create a new parent record, update all the children to the new parent, then delete the old parent.
You'd need to be very clear on what you are trying to achieve, and more importantly, whether there is any value in it.
I suggest you start with a single code value to scope out the size of the project.
Manually start writing the updates you'd need for that one code value, and then try to start automating it.
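To make the basic process above concrete, here is a minimal sketch (assuming an XXX_CD naming standard, VARCHAR2 columns, and no foreign keys involved yet; run it for one code value first and check the results before committing anything):

DECLARE
  v_count INTEGER;
BEGIN
  FOR c IN (SELECT owner, table_name, column_name
              FROM all_tab_cols
             WHERE column_name LIKE '%\_CD' ESCAPE '\'
               AND data_type = 'VARCHAR2') LOOP
    -- Check whether the old code value appears in this column at all
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM ' || c.owner || '.' || c.table_name ||
      ' WHERE ' || c.column_name || ' = :old_code'
      INTO v_count USING '00000001';
    IF v_count > 0 THEN
      -- Rewrite the old code to the new, shorter one
      EXECUTE IMMEDIATE
        'UPDATE ' || c.owner || '.' || c.table_name ||
        ' SET ' || c.column_name || ' = :new_code' ||
        ' WHERE ' || c.column_name || ' = :old_code'
        USING '1', '00000001';
    END IF;
  END LOOP;
END;
/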

What does Managed="0" in List view XML mean?

I've written a Data Extender class and editor extension that properly displays a few additional columns for items as you browse lists in the CME (folders and structure groups). I had to register my class to handle commands like GetList, GetListSearch, GetListUserFavorites, and GetListCheckedOutItems.
What I've noticed is that the code gets run even when a list of, say, schemas is loaded for a drop-down list in the CME (like when creating a new component and you get the list of schemas in a drop-down). So, even though my additional data columns aren't needed in that situation, the code is still executed and slows things down.
It seems that it's the GetList command called in those situations. So, I can't just skip processing based on the command. So, I started looking at the XML that the class receives for the list and I've noticed when the code is run for the drop-downs, there's a Managed="0" in the XML. For example:
For a Structure Group list: <tcm:ListItems Managed="64" ID="tcm:103-546-4">
For a Folder list: <tcm:ListItems Managed="16" ID="tcm:103-411-2">
But for a Schema list: <tcm:ListItems ID="tcm:0-103-1" Managed="0">
For a drop-down showing keyword values for a category: <tcm:ListItems Managed="0" ID="tcm:103-506-512">
So, can I just use this Managed="0" as a flag to indicate that the list being processed isn't going to show my additional columns and I can just quit processing?
The Managed value is a representation of which item types can be created inside the organizational item:
64 means you can create pages
16 means you can create components
10, for example, would mean you can create folders (2) + schemas (8)
518 means folders (2) + structure groups (4) + categories (512)
The value is 0 for non-organizational items.
The value depends on the item itself (you can't create pages in a folder, for example), as well as on the security settings you have on the publication and the organizational item.
Unfortunately, the CME can't currently offer the kind of granularity that would let you tell in a data extender where a particular WCF API call is coming from. Our WCF API is not context aware yet. That may change in the future.
Trusting Managed="0" is not a great idea.
The reason for that is that the model lists are cached on the client per filter. In the current design the filter contains CM-related data and nothing about the context the request is being fired from.
Typically the client user interface reuses cached model data whenever possible. For instance, the same model list could be used in the CME dashboard and in a drop-down control placed in some item view, but with different XML list definitions: the first will have more columns defined in its list definition than the latter. They are basically different views of the same data.
Therefore you may want to think of different solutions to your problem.
Now... where is the data behind those additional columns coming from? Is it Tridion CM or a third-party provider?
Sometimes the web server caching may provide an acceptable way to improve the response times. But that's the kind of design you should evaluate and decide upon.
I think you would have a more robust solution if you read the ID of the list and only executed your code for lists of type 2 and 4 (Folders and Structure Groups, respectively). But that won't help you with search views etc.
From previous experience, and from what user978511 says, the Managed attribute is an indication of the item types that can be created from the context of that list.
Unfortunately, that means the Managed attribute may well be 0 for any user that doesn't have sufficient rights to create items. E.g. check what Managed is in a Structure Group for a user that isn't allowed to create pages or structure groups. It may well be 0 in that case too, meaning it is useless for your situation.
Update
You may be able to reach your goal better by looking at the columns parameter:
context.Parameters["columns"]
In a few tests I've run, I get different values depending on whether I get a list for the main list view, the tree or a drop-down list:
543 (main list view)
23 (tree)
7 (drop-down list)
Those values are a bit mask of these constants (from Constants.js):
/**
 * Defines the column filter.
 * Used to specify which attributes should be included in XML list data.
 * @enum
 */
Tridion.Constants.ColumnFilter =
{
    ID: 1,
    ID_AND_TITLE: 3,
    DEFAULT: 7,
    EXTENDED: 15,
    ALLOWED_ACTIONS: 16,
    VERSIONS: 32,
    INTERNALS: 64,
    URL: 128,
    XML_NAME: 256,
    CHECK_OUT_USER: 512,
    PUBTITLE_AND_ITEM_PATH: 1024
};
So from my limited testing it seems that drop-downs request DEFAULT columns, while the main list view and the tree both have ALLOWED_ACTIONS in there. This makes sense to me, since the user can interact with the list items in the tree and list view, while they can only select them in the drop-downs. So checking for the presence of ALLOWED_ACTIONS in the columns parameter might be one way to reduce the number of places where your data extender adds information.
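If you go that route, the check itself could look something like this minimal C# sketch (the class and method names are hypothetical; the constant value is ALLOWED_ACTIONS from Constants.js above):

internal static class ColumnFilterCheck
{
    // Tridion.Constants.ColumnFilter.ALLOWED_ACTIONS from Constants.js
    private const int AllowedActions = 16;

    // columnsParam is the raw value of context.Parameters["columns"],
    // e.g. "543" (main list view), "23" (tree) or "7" (drop-down)
    public static bool ShowsExtraColumns(string columnsParam)
    {
        int columns;
        return int.TryParse(columnsParam, out columns)
            && (columns & AllowedActions) != 0;
    }
}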

Qt: QSqlTableModel + QTableView sync with PostgreSQL

I'm writing a database access app for storing some data and want to ask a few questions about the model/view architecture.
(Using: Qt 4.7.4, own build; PostgreSQL 9.0; Targets: WinXP, Win7 (32/64 bit))
Let me first explain what I am trying to achieve and where I am currently.
I have two pages (subclassed QWidgets inserted in a QStackedWidget), each with a QTableView bound to a model. Each model is bound to a table on the PostgreSQL server. You can add/edit/delete/sort/filter items.
Each page can be seen by only one type of user; let's call the roles Role1 and Role2.
The edit strategy of every model is OnManualSubmit.
(Transaction isolation level = Serializable.) When two users want to edit (for example) the same row, I want to do a "SELECT ... FOR UPDATE" query, to make sure that when someone edits something, he will merge his changes with any newer ones (just like in SVN, for example). But I only see a submitAll() method on QSqlTableModel.
Maybe catching the signals beforeUpdate(), beforeDelete(), beforeInsert() and performing manually "SELECT ... FOR UPDATE" is one option.
The other way I think is to subclass QSqlTableModel. What is the clean and nice way to achieve this?
I want to periodically update the QTableView on each of the pages (at most one page is visible; Role1 users have access only to Page1, and likewise Role2 => Page2).
The first thing that came to my mind is to use a QTimer and manually call select() on the QSqlTableModel, but... I'm not sure this is the cool way.
I also want to periodically check if the connection to the database is ok, but I think that a QTimer + QSqlDatabase::isOpen () will do.
Now, the 2 tables have the same primary keys and some columns are the same. I want when a user with Role1 changes a row in Table1 to automatically change corresponding columns of Table2 and vice versa. Should I create a trigger in Postgres?
BTW, the database is small: each of the two tables is around 3,000-4,000 rows with ~10 columns (mostly varchars, 1 text and 2 date columns).
Thanks for reading and Happy New Year! :)
I think you should consider doing something of the following:
Instead of using QSqlTableModel as a model I'd implement my own model as a subclass of QAbstractTableModel. This will allow you a lot of control over what you can do in terms of data manipulation.
One thing this will require: for certain fields in the table you will need to implement a subclass of QAbstractItemDelegate to control how data is modified, since I am fairly sure you don't want to allow users to update every field in the table; the primary key, for example, likely has to be left alone.
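A minimal skeleton of such a model (Qt 4 style; the in-memory backing store and the three columns are assumptions, and wiring it to PostgreSQL and to your SELECT ... FOR UPDATE submit logic is left out):

#include <QAbstractTableModel>
#include <QList>
#include <QStringList>

class MyTableModel : public QAbstractTableModel
{
    Q_OBJECT
public:
    explicit MyTableModel(QObject *parent = 0)
        : QAbstractTableModel(parent) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const
    { return parent.isValid() ? 0 : m_rows.count(); }

    int columnCount(const QModelIndex &parent = QModelIndex()) const
    { return parent.isValid() ? 0 : 3; } // e.g. id, name, updated

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const
    {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return m_rows.at(index.row()).at(index.column());
    }

    bool setData(const QModelIndex &index, const QVariant &value, int role = Qt::EditRole)
    {
        if (!index.isValid() || role != Qt::EditRole)
            return false;
        m_rows[index.row()][index.column()] = value.toString();
        emit dataChanged(index, index);
        return true; // queue the change for your own manual submit step
    }

    Qt::ItemFlags flags(const QModelIndex &index) const
    {
        Qt::ItemFlags f = QAbstractTableModel::flags(index);
        if (index.column() != 0)       // keep the primary key read-only
            f |= Qt::ItemIsEditable;
        return f;
    }

private:
    QList<QStringList> m_rows; // data cached from PostgreSQL
};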
For question 2, I would suggest adding a column called transaction_counter to every row, so that you don't have to select every row in the table, just the updated ones. The transaction_counter is set on every row update and on every row insert, and it must be unique across the table. For example, if the initial state of the table is row1 with counter = 0 and row2 with counter = 0: when row1 is updated, its counter is set to 1; when row1 is updated again, its counter is set to 2; when row2 is then updated, its counter is set to 3; and so on. You can then do the data refreshes using a QTimer, and this is much better than re-reading everything, since another user with the same role may be updating the same table at the same time.
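A sketch of this scheme in Postgres (table and sequence names are hypothetical; the global sequence guarantees the counter is unique and increasing across the whole table):

CREATE SEQUENCE transaction_counter_seq;

ALTER TABLE table1
    ADD COLUMN transaction_counter bigint
    DEFAULT nextval('transaction_counter_seq');

CREATE OR REPLACE FUNCTION bump_transaction_counter() RETURNS trigger AS $$
BEGIN
    -- every update gets a fresh, table-wide unique counter value
    NEW.transaction_counter := nextval('transaction_counter_seq');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_bump
BEFORE UPDATE ON table1
FOR EACH ROW EXECUTE PROCEDURE bump_transaction_counter();

-- Periodic refresh query, driven by the QTimer in the client:
-- SELECT * FROM table1 WHERE transaction_counter > :last_seen_counter;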
For question 3, I don't see any reason not to use custom models, especially since, if you decide to separate the data from the model, you can manipulate the data separately from its display. Sort of a Data->Model->View->Controller implementation. Each piece can be maintained separately as long as you have a feedback mechanism for your delegates.
For question 4, the answer is: sure, or you can implement the trigger logic in your application instead.
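If you go the Postgres route, a trigger along these lines could work (a minimal sketch; the table and column names are hypothetical). You'd also need the mirror-image trigger on table2; a WHEN (OLD.shared_col1 IS DISTINCT FROM NEW.shared_col1 ...) condition on each trigger stops the pair from firing each other once the values match:

CREATE OR REPLACE FUNCTION sync_shared_columns() RETURNS trigger AS $$
BEGIN
    -- push the shared columns from table1 to the matching row of table2
    UPDATE table2
       SET shared_col1 = NEW.shared_col1,
           shared_col2 = NEW.shared_col2
     WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_sync
AFTER UPDATE ON table1
FOR EACH ROW EXECUTE PROCEDURE sync_shared_columns();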
Hope this helps. Have a great New Year!

Allow users to create new categories and fields on ASP.NET website

We have a database-driven ASP.NET / SQL Server website and would like to investigate how we can allow users to create a new database category and fields. Is this crazy? Are there any examples of such organic websites out there? The fact that I haven't seen any maybe suggests I am.
Interested in the best approach which would allow some level of control by Admin.
I've implemented things along these lines with a dictionary table, rather than a more traditional table.
The dictionary table might look something like this:
create table tblDictionary
(id     uniqueidentifier, --Surrogate key (PK)
 itemid uniqueidentifier, --Think "PK" in a traditional table
 colmn  nvarchar(128),    --Think "column name" in a traditional table
 value  nvarchar(max),    --Can hold either a string or a number
 sortby integer)          --A sorting column; may or may not be needed
So, then, what would have been one row in a traditional table would become multiple rows:
Traditional Way (of course I'm not making up GUIDs):
ID  Type  Make  Model    Year  Color
1   Car   Ford  Festiva  2010  Lime
...would become multiple rows in the dictionary:
ID  ITEMID  COLMN     VALUE
0   1       Type      Car
1   1       CarMake   Ford
2   1       CarModel  Festiva
3   1       CarYear   2010
4   1       CarColor  Lime
Your GUI can search for all records where itemid=1 and get all of the columns it needs.
Or it can search for all records where itemid in (select itemid from tblDictionary where colmn = 'Type' and value = 'Car') to get all columns for all cars.
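For example, a self-join can pivot the dictionary rows back into the traditional row shape (a sketch; the colmn values are the hypothetical ones from the rows above):

select mk.itemid,
       mk.value as CarMake,
       md.value as CarModel,
       yr.value as CarYear
from tblDictionary mk
join tblDictionary md on md.itemid = mk.itemid and md.colmn = 'CarModel'
join tblDictionary yr on yr.itemid = mk.itemid and yr.colmn = 'CarYear'
where mk.colmn = 'CarMake';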
In theory, you can put the user-defined types into the same table (Type = 'Type'), as well as the user-defined columns that each type has (Type = 'Column', Column = 'ColumnName'). This is where the sortby column comes into it: to help build the GUI in the correct order, if you don't want to rely on something else.
A number of times, though, I have felt that storing the user-defined dictionary elements in the dictionary was a bit too much drinking-the-kool-aid. Those can be separate tables because you already know what structure they need at design time. :)
This method will never have the speed or quality of reporting that a traditional table would have. Those generally require the developer to have pre-knowledge of the structures. But if the requirement is flexibility, this can do the job.
Often enough, what starts out as a user-defined area of my sites has had a later project to normalize the data for reporting, etc. But this allows users to get started in a limited way and work out their requirements before engaging the developers.
After all that, I just want to mention a few more options which may or may not work for you:
If you have SharePoint, users already have the ability to create their own lists in this way.
Excel documents in a shared folder, saved in such a way as to allow multiple simultaneous edits, would also serve the purpose.
Excel documents stored on the web server and accessed via ODBC would also serve as single-table databases like this.
