ALM11 OTA API Error for Unattached test sets: No post allowed for object that has virtual father - hp-quality-center

In HP ALM 11, moving test sets from the 'Unattached' folder to any other folder in the Test Lab using the OTA API throws the exception below:
'No post allowed for object that has virtual father'
The exception is thrown at the 'Post' call below:
TS.TestSetFolder = DestFolder; TS.Post();
The same code works in QC 10. I need your help, please.
Additional info:
Setting the destination test set folder on a test set is how test sets are moved from one folder to another in the Test Lab. With the ALM 11 OTA API this works for test sets in any other folder, but fails for test sets in the 'Unattached' folder. The problem does not occur in QC 10 (using the QC 10 OTA API).
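For reference, this is roughly the pattern being used, shown as a minimal C# sketch against the OTA COM API (TDAPIOLELib). The server URL, credentials, folder path and the 'MyTestSet' filter value are placeholders, the CY_CYCLE field name (assumed to be the test set name column) is an assumption, and error handling is omitted:
using TDAPIOLELib;

// Connect to the ALM project (server URL and credentials are placeholders).
TDConnection td = new TDConnection();
td.InitConnectionEx("http://almserver:8080/qcbin");
td.ConnectProjectEx("DOMAIN", "PROJECT", "user", "password");

// Locate the destination folder in the Test Lab tree.
TestSetTreeManager treeMgr = (TestSetTreeManager)td.TestSetTreeManager;
TestSetFolder destFolder = (TestSetFolder)treeMgr.NodeByPath("Root\\Target Folder");

// Find the test set(s) to move by name.
TestSetFactory tsFactory = (TestSetFactory)td.TestSetFactory;
TDFilter filter = (TDFilter)tsFactory.Filter;
filter["CY_CYCLE"] = "MyTestSet";
foreach (TestSet ts in tsFactory.NewList(filter.Text))
{
    // Re-parenting the test set and posting is the step that fails with
    // 'No post allowed for object that has virtual father' when the test set
    // currently sits under 'Unattached' in ALM 11.
    ts.TestSetFolder = destFolder;
    ts.Post();
}

td.DisconnectProject();
td.ReleaseConnection();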

Related

GetTwinAsync() is not supported in IoT Edge Simulator v0.14.10

GetTwinAsync() always returns a Twin object with empty properties; all properties of my IoT Edge module are null when I run my IoT Edge device in the simulator, while on my Linux server everything works fine. I also have to wait about 20 seconds to get a response from GetTwinAsync().
This behavior is actually expected. If you read the Understand and use module twins in IoT Hub document, you will see that a module app only has permission to read desired properties and to read/write reported properties.
The lifecycle of a module twin is linked to the corresponding module identity. Module twins are implicitly created and deleted when a module identity is created or deleted in IoT Hub.
To access all module properties, you can do so from the solution back end, which requires the ServiceConnect permission. You will need Microsoft.Azure.Devices v1.16.0-preview-001 or later. The following is a console app code snippet.
...
// Connect to the IoT Hub service side (requires the ServiceConnect permission).
RegistryManager registryManager = RegistryManager.CreateFromConnectionString(connectionString);
Module module;
try
{
    // Create the module identity if it does not exist yet.
    module = await registryManager.AddModuleAsync(new Module(deviceID, moduleID));
}
catch (ModuleAlreadyExistsException)
{
    // The module identity already exists, so just retrieve it.
    module = await registryManager.GetModuleAsync(deviceID, moduleID);
}
...
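To then read the full module twin (both desired and reported properties) from the back end, the same RegistryManager exposes GetTwinAsync(deviceId, moduleId); the Twin type lives in Microsoft.Azure.Devices.Shared. A minimal sketch, reusing the registryManager, deviceID and moduleID placeholders from the snippet above:
...
// Read the full module twin from the solution back end (requires ServiceConnect).
Twin moduleTwin = await registryManager.GetTwinAsync(deviceID, moduleID);

// Both desired and reported properties are visible from the back end.
Console.WriteLine(moduleTwin.Properties.Desired.ToJson());
Console.WriteLine(moduleTwin.Properties.Reported.ToJson());
...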
For a more detailed explanation and an example, check Get started with IoT Hub module identity and module twin (.NET). If your issue still persists, you can open an issue on the azure-iot-sdk-csharp repository.

terraform GCP VPC connector creation issue

Overview
I tried creating a VPC network with a subnet and adding a Serverless VPC connector with Terraform in GCP. I was following the official guide ( https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#terraform ) and initially everything was working well. Then I accidentally committed my JSON key to GitHub, someone stole it and used it for crypto; the project was disabled but shortly afterwards reinstated.
After that, my Terraform VPC connector creations started to fail. I tried a lot of different things but nothing seems to work (running destroy, changing service accounts, changing names, deleting all of the Terraform subfolders, deleting EVERY resource and restarting the process).
The errors I am getting are:
│ Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 13, message: An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
│
or
│ Error: Error creating Connector: googleapi: Error 409: Requested entity already exists
Today I tried to create the VPC connector from the command line (gcloud) and from the UI tool. The errors persisted:
Unknown error. Original error message: Operation failed: Insufficient CPU quota in region.
Max throughput of the connector per day over last seven days.
or
An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
errors while deleting:
│ Error: Error waiting for Deleting Network: The network resource 'projects/static-emblem-327016/global/networks/sun-serverless-network' is already being used by 'projects/static-emblem-327016/global/routes/default-route-5cbc9de02e21bb35'
│
I was looking at this issue: https://issuetracker.google.com/issues/164378672. It describes problems with us-central1, but I tried a couple of different regions and I still have the same issue.
Questions:
I am running out of ideas. I was wondering if this is an infrastructure issue; maybe I should dump the project and create a new one? Where can I check whether there are infrastructure issues? How can I resolve my issue?
I recently got the error 'Error: Error creating Connector: googleapi: Error 409: Requested entity already exists', so I can explain the root cause and its fix.
I was trying to create a GCP resource (a Pub/Sub topic) using Terraform (plan and then apply).
However, long before executing terraform apply, I had created the resource manually with the same name. I expected that terraform plan or terraform apply would not try to create it again since the resource name is the same, but instead of refreshing the state, Terraform tried to create the resource. The reason is that Terraform does not know about your resource's history. You either need to import the existing resource into the Terraform state using the terraform import command, or delete the manually created resource and then run terraform apply again.
The message “An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually” can indicate that you don't have enough resources in your project to create the connector. Make sure you have enough resource quota available in your GCP project.
The message “googleapi: Error 409: Requested entity already exists” indicates that the resource the client tried to create already exists.
If you want to know what the root cause is, you can check the logs of the VPC Connector creation in the System Event Audit Logs.
System Event audit logs contain log entries for Google Cloud actions that modify the configuration of resources. System Event audit logs are generated by Google systems; they aren't driven by direct user action. System Event audit logs are always written; you can't configure, exclude, or disable them. The instructions to access them are here.
On the other hand, generating and distributing service account keys poses severe security risks to your organization. They are long-lived credentials that are not automatically rotated. These keys can be leaked accidentally or maliciously, allowing attackers to gain access to your sensitive GCP resources. If you accidentally compromised your JSON key, please read the recommendations in this link.
If you want to know more about the risks of, and alternatives to, downloading service account keys, please follow this link. Please note that this is not official GCP documentation, so I cannot vouch for its accuracy.
I was able to resolve my issue. It turns out that I had deleted my default Compute Engine service account in a panic. I was able to recover it and everything worked from there. For more info go here: https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
You have to identify the default Compute Engine service account and undelete it:
gcloud beta iam service-accounts undelete ACCOUNT_ID

Git Remote Updates finder in Java

I have a requirement to find out whether there are any updates in a remote Git branch compared to the local branch cloned earlier. If updates are available, the application must notify the user and, on user consent, perform a pull in Java.
I tried using JGit:
org.eclipse.jgit.lib.BranchTrackingStatus.of(git.getRepository(), git.branchList().call().get(0).getName()).getBehindCount()
to find out whether my local repository is behind the remote repository, but this always returns 0.
The first parameter of BranchTrackingStatus.of must be a Repository object, and the object passed is the local repository object.
Appreciate any suggestions to tackle this scenario.
BranchTrackingStatus compares a local branch with the configured remote tracking branch, e.g. refs/heads/foo with refs/remotes/origin/foo (if the default naming convention is followed).
The method does not update the remote tracking branch. Unless you first fetch the branch in question from the remote repository, no change will be reported.
FetchResult result = git.fetch()
    .setRemote("origin")
    .setRefSpecs("refs/heads/foo:refs/remotes/origin/foo")
    .call();
The result holds details of the successful operation, or of why it could not be completed. Once the fetch has updated refs/remotes/origin/foo, BranchTrackingStatus should report a non-zero behind count whenever the remote branch has commits that the local branch lacks.
Does that solve your problem?

Inconsistent Cognos errors

I am trying to do a couple of things within Cognos:
Load Framework Manager and view/modify SQL behind existing models and create new models
Modify existing reports through Report Studio via Cognos Connection
I was given an account on the Cognos application server and I installed Framework Manager. I was given the gateway URL and dispatcher URL from the System Admin and then transferred all of the project files to the server so that I could load the project in question. I'm able to open the .cpf file; however, when going into any models, I get the error:
Unable to access service at URL:
https://xxx.cognos.xxx.xxx:443/ibmcognos/cgi-bin/cognos.cgi?b_action=xts.run&m=portal/close.xts
Please check that your gateway URI information is configured correctly and that the service is available.
For further information please contact your service administrator.
I then contacted the system admin and he indicated that the URL was correct.
Furthermore, now when I try to access Cognos Connection (which worked fine last week), I receive the error:
CM-REQ-4159
Content Manager returned an error in the response header. The error "cmAuthenticateFailed CM-CAM-4005 Unable to authenticate. Check your security directory server connection and confirm the credentials entered at login." can be found in the response SOAP header.
The odd thing is, another member of my team receives this error:
AAA-AUT-0016:
The function call to 'Method.invoke(cmServiceInstance, queryRequest)' failed.
Details:
CM-SYS-5192 An error occurred with Content Manager.
I've done some research (I'm not really familiar with Cognos or even networking) and found that these errors (the ones that I receive) are usually received when trying to run a single report; however, I can't even access FM models or Cognos Connection in general. I also don't understand how we can receive 2 different errors when accessing the same URLs from the same network.
Any guidance would be greatly appreciated. We are using Cognos 10.2.2.
http://www-01.ibm.com/support/docview.wss?uid=swg21624136
One possible reason is that the user does not have the required "Import relational metadata" capability.
Or it may have something to do with the registry.
Note: Make sure you backup the registry before making any changes.
see http://www-01.ibm.com/support/docview.wss?uid=swg22015730
Open cmd and type "regedit".
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl.
Right-click the "BMT.exe"=dword:00002af9 entry and delete it.
Re-launch Framework Manager.

Workflow manager 1.0 and external dll

We are trying to publish a workflow and a couple of activities to Workflow Manager. Our scenario is such that we need to create objects of classes that reside in an external DLL, and those objects call a WCF service to fetch some data.
We have placed the DLLs in the C:\Program Files\Workflow Manager\1.0\Workflow\WFWebRoot\bin\ and C:\Program Files\Workflow Manager\1.0\Workflow\Artifacts folders. We have also created the AllowedTypes.xml file and placed it in both of the above folders.
The problem we are facing is that when we declare a variable of a type from the external DLL and try to invoke a method on it using the InvokeMethod activity (we have also added the InvokeMethod activity type to AllowedTypes.xml), we receive the following exception on the activity.publish statement.
Workflow XAML failed validation due to the following errors:
Cannot create unknown type '{http://schemas.microsoft.com/netfx/2009/xaml/activities}Variable({wf://workflow.windows.net/$Activities}ObjectType)'. HTTP headers received from the server - ActivityId: 33bf5b07-9eda-4f63-bf58-6d85cbfdcd55. NodeId: MachineID. Scope: /WFMgrSample. Client ActivityId : af5e1771-90f7-4610-a2be-5f7b7ce48ee8.
Any idea what is wrong here?
