Is it possible to reopen closed Workfront PROJ or TSHET objects via the API? - workfront-api

I am in the process of integrating Workfront with my company's financial software and one of the processes we are trying to automate is the transfer of hours from one project to another in a scenario where hours need to be massaged for billing purposes. Generally, we are looking to transfer hours for a single user from one project to another. In cases where either the associated PROJ is closed or the associated TSHET is closed, is it possible to reopen these, at least temporarily via the API?

Sure, you can re-open them by changing the status. You can also subsequently re-close them once you're done with your tasks.
For a timesheet, you'll just change the status to 'O' as follows:
PUT https://<site>.my.workfront.com/attask/api/v9.0/TSHET/<uuid>?status=O&apiKey=<api key>
For a project, you'll need to know what type of status you want to go back to (you may have a workflow such as new->open->in progress->implementing->testing->closing->closed, and you want to go back to 'closing'). Find the 3-character key for that status and update the status as follows:
PUT https://<site>.my.workfront.com/attask/api/v9.0/PROJ/<uuid>?status=<key>&apiKey=<api key>
Closing them again would just involve setting the timesheet to C and the project to whatever its old status was.
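For example, here is a minimal C# sketch (using HttpClient) of temporarily reopening a timesheet, doing the work, and closing it again; the <site>, <uuid>, <api key> and <old status key> placeholders are the same ones used above and need real values:
// Minimal sketch, not production code: raw calls to the URLs shown above.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ReopenExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        const string baseUrl = "https://<site>.my.workfront.com/attask/api/v9.0";
        const string apiKey = "<api key>";

        // Reopen the timesheet, transfer the hours, then close it again.
        await Put(client, $"{baseUrl}/TSHET/<uuid>?status=O&apiKey={apiKey}");
        // ... move the hours between projects here ...
        await Put(client, $"{baseUrl}/TSHET/<uuid>?status=C&apiKey={apiKey}");

        // Same idea for a closed project: change its status to the 3-character key you want,
        // then restore the old status when you are done.
        await Put(client, $"{baseUrl}/PROJ/<uuid>?status=<old status key>&apiKey={apiKey}");
    }

    static async Task Put(HttpClient client, string url)
    {
        using var response = await client.PutAsync(url, content: null);
        response.EnsureSuccessStatusCode();
    }
}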

Related

How can I clear all transactions for a PeopleSoft Approval Process using SQL?

New to AWE. I have a situation where data and/or configuration and/or code has become "broken" or scrambled or corrupted while building this approval process.
I'd like to clear out all stuck transactions and start again.
Monitor Approvals shows the status of all approval processes as 'Approved'. They are not. So, I'd like to just clear out all the transaction data and start again.
Which tables do I need to clear out (update and/or delete)?
I've found that since AWE is (for now) built on top of plain, old Workflow I was able to get myself out of trouble by simply updating the Worklist items' status to approved for the user, and clearing the header and XREF tables:
update psworklist set inststatus = 3 where oprid='TheUser';
delete from ps_your_xref;
delete from ps_your_header_rec;
Warning: this may well be a very cowboy approach, but it has worked fine for me in Development.

Can any code trigger a batching override action?

I am working on a project that batches 834 records into a file.
I set up the batching trigger so that a batch file is released when the record count reaches a certain number. But I also want to release a batch even if the record count has not been reached (for example, every night, release all queued records as a final file).
I know it can be done by clicking the override button in the Batch Configuration window, but it needs to be done automatically.
So, basically, my question is: what does BizTalk do when I click the override button? Does BizTalk provide any way to let me do that from a program?
I must say I have not tried sending a control message to a batch configured to release per record count; if you know this works, please let me know.
You're almost there, and completing the process isn't that difficult.
Leave the batch configuration's record count as it is.
Then set up a process where an External Release trigger is sent at the appropriate time. A Windows Scheduled Task is a viable option; it can copy a file to a File Receive Location.
This article describes how to create the trigger message: http://msdn.microsoft.com/en-us/library/bb246108.aspx
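Once you have created and saved the trigger message described in that article, the Scheduled Task can run something as small as the following C# sketch (the paths are placeholders, not from the original answer) to drop a copy of it into the folder behind the File Receive Location:
// Sketch only: copy a pre-built external release trigger message into the
// folder monitored by the File Receive Location. Paths are placeholders.
using System;
using System.IO;

class DropReleaseTrigger
{
    static void Main()
    {
        const string triggerTemplate = @"C:\BizTalk\Triggers\ReleaseBatchTrigger.xml";
        const string receiveFolder = @"C:\BizTalk\Receive\BatchControl";

        // Unique file name so repeated nightly runs don't collide.
        string destination = Path.Combine(receiveFolder,
            $"ReleaseBatchTrigger_{DateTime.Now:yyyyMMddHHmmss}.xml");
        File.Copy(triggerTemplate, destination);
    }
}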

BizTalk TPE continuations and uncompleted activities

Within my BizTalk 2010 solution I have the following orchestration, which is started by the receipt of a courier update message. It then makes a couple of calls to AX's WCF AIF via two solicit-response ports: a Find request and an Update request.
For this application we are meeting audit requirements through use of the tracking database to store the message body. We are able to link to this from references provided in BAM when we use the TPE. The result for the customer is nice: they get a web portal from which they can view BAM data such as message timings, but they can also click a link to pull up a copy of the message payloads from the tracking db. Although this works well and makes use of out-of-box functionality for payload storage, it has led to relatively complex jobs for archiving the tracking db (but that's another story!).
My problem relates to continuation. I have created the following Tracking Profile:
I have associated the first of the orchestration's two solicit-response ports with the continuation RcvToOdx, based on the interchange Id, and this works; I get the following single record written to the Completed activity table:
So, in this case we can assume that an entry was first written on receipt of the inbound message, with the TimeReceivedIntoBts column populated by the physical file receive port. The FindRequestToAx column was then populated by the physical WCF send port. Because this was bound to the RcvToOdx continuation Id and used the same interchange Id as the previously mentioned file receive message, the update was made to the same activity. Notification of the resulting response was also written to the same activity record - into the FindResponseFromAx column.
My problem is that I would also like BAM to record a timestamp for the subsequent UpdateRequestToAx. Because this request will have the same interchange Id as the previous messages I would expect to be able to solve this problem by simply binding the AxUpdate send port (both send and receive parts of it) to the same continuation id, as seen in the following screen grab:
I also map the UpdateRequestToAx milestone to the physical Ax_TrackAndTraceUpdate_SendPort (Send) and the OrchestrationCompleted milestone to Ax_TrackAndTraceUpdate_SendPort (Receive)
Unfortunately, when I try this I get the following result:
Two problems can be seen from the above db screen grab:
1. Date for the update send port was inserted into a new activity record
2. The record was never completed
I was surprised by this because I'd thought that, since the update port was enlisted to use the same continuation and the single InterchangeId was being used by all ports for the continuation Id, all the data milestones would be applied to a single activity.
In looking for a solution to this problem I came across the following post on Stack Overflow suggesting that the continuation must be closed from the BAM API: BAM Continuation issue with TPE. So, I tried this by calling the following method from an expression shape in my orchestration:
// OrchestrationEventStream comes from the Microsoft.BizTalk.Bam.EventObservation namespace;
// CARRIER_ORDER_ACTIVITY_NAME is a constant holding the BAM activity's name.
public static void EndBAMContinuation(string continuationId)
{
    OrchestrationEventStream.EndActivity(CARRIER_ORDER_ACTIVITY_NAME, continuationId);
}
I can be sure the method was called OK because I wrapped the call with a log entry from the CAT framework, which I could see in debug view:
I checked the RcvToOdx{867… continuation Id against the non-closed activity and confirmed they matched:
This would suggest that perhaps the request to end the continuation is being processed before the milestone of the received message from the UpdateAx call?
When I query the Relationships tables I get the following results:
Could anyone please advise why the UpdateToAx activity is not being completed?
I realise that I may be able to solve the problem using only the BAM API, but I really want to exhaust any possibility of the TPE being fit for purpose first, since the TPE is widely used in the organisation's other BizTalk solutions.
To solve this problem I created a 2nd continuation in the TPE.
"RcvToOdx" continuation for the Find and "OdxToUpdate" continuation for the update - source is InterchangeId on the initial receive port - UPS_TrackAndTrace (same as for other "RcvToOdx" continuation), dest Id is the InterchangeId mapped to update send port.

Send notification if an expected message did not arrive in BizTalk

I have a BizTalk receive port monitoring an FTP location. I expect a file to arrive at least once per day in that location and for BizTalk to pick it up and kick off an orchestration. This part is working fine.
However, sometimes the sender fails to send a message during a day, in which case I want an email to be sent to notify the users that something is amiss.
I could solve this outside of BizTalk, by creating a daily job that looks in our database for processed files and makes sure there is at least one in any given day. However, I'd prefer to solve this "in line" with the BizTalk solution that is already in place, and not deploy a separate, unrelated job which will increase maintenance headaches.
Is there any functionality in BizTalk that would allow me to send a notification if a receive port doesn't receive something in a given timeframe?
Short answer: Not really.
The logic you want to implement would require a customised version of the FTP Adapter. It depends on how comfortable you are with rolling up your sleeves and getting into the Adapter SDK.
If you wanted to keep your solution "Purely BizTalk", you could set up a secondary Orchestration using a SQL Receive Location tied to a stored procedure. This stored procedure executes regularly and looks for records in your "Processed File" table received in the past (business) day. If none are found, it fabricates a record and returns it via the SQL Receive Location. This would be your trigger to send the email notification.
One solution, though not elegant, is to have a secondary FILE receive location with a schedule window outside your cutoff time.
Failure scenario:
In this FILE receive location, you have an intelligent/dummy message conforming to the same schema as the FTP receive. The intelligent part is that one of the fields in the message tells us when we last received the file from FTP. The rest of the content is dummy.
Within your orchestration, you check where you received your file from. If it is the secondary receive location (using the context property BTS.ReceiveLocationName), you check the date field of this dummy/intelligent message; if it is not within the past 24 hours (or similar logic), you send an email notifying users that the file was not received from the upstream FTP process, and you also save a copy of the received dummy message back to the secondary FILE receive location unchanged (a rough sketch of this check appears at the end of this answer).
Success Scenario:
Apart from normal processing, you save a copy of the dummy/intelligent message to the secondary FILE receive location, with the datetime field reflecting when you processed the file received from the FTP receive location.
Initialising:
You start with a dummy/intelligent message in the secondary FILE receive location with the datetime field value well in the past (assuming we never received the file from FTP) or with yesterday's date (assuming we received a file successfully from FTP the day before).
Overview:
Your orchestration has two trigger points.
When you receive a file via FTP
A scheduled FILE receive location, triggered after the cut-off time.
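As a rough sketch of the decision logic (XLANG/s-style expressions; the receive location name "Secondary_File_RL", the message name msgIn and the distinguished field LastFtpReceiptDate are placeholders, not names from the post):
// Expression shape: work out where this message arrived from.
cameFromSecondaryLocation = (msgIn(BTS.ReceiveLocationName) == "Secondary_File_RL");
lastFtpReceipt = System.Convert.ToDateTime(msgIn.LastFtpReceiptDate);

// Following Decide shape:
//   cameFromSecondaryLocation && lastFtpReceipt < System.DateTime.Now.AddHours(-24)
//     -> send the notification email and write the dummy message back unchanged;
//   cameFromSecondaryLocation (file arrived recently)
//     -> just write the dummy message back unchanged;
//   otherwise (message arrived via FTP)
//     -> normal processing, then write the dummy message back with the current timestamp.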

build queue issues in CC.net

I have a question on how the build queue is configured in CC.net.
I believe we have an issue: when trying to "force" build a scheduled project, the server tries to run several builds at the same time and fails most of them, except the one that started first.
We need to get to a state where, regardless of how many builds are scheduled or how many we "force" start at about the same time, all build requests are placed into a build queue and executed one after another in the order they were placed, with no extra requests generated.
A Build Failed email is sent, but the build was actually successful.
In short, the erroneous email is likely due to an error in the build server's build scheduler/queue, which tries to run two builds instead of one when asked for a "forced" build; as a result the first one is successful and the second one fails.
How can I correct/resolve this issue?
Thanks
Nilesh
To specify your projects' queue, you need to set the queue property like this:
<project name="MyFirstProject" queue="Q1" queuePriority="1">
The default is one queue per project. If you manually set the same queue (for example Q1) for all your projects, then you will have a single shared queue.
As for queuePriority, the projects in the queue (not yet started) are ordered by queuePriority; projects with a low queuePriority start first.
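For example, two projects sharing the same queue (project names are placeholders):
<project name="MyFirstProject" queue="Q1" queuePriority="1">
<project name="MySecondProject" queue="Q1" queuePriority="2">
Here MyFirstProject, having the lower queuePriority, starts before MySecondProject whenever both are waiting in Q1.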
It's all described in the CC.net documentation, which is currently offline due to a problem at SourceForge.
